Visual and semantic processing of living things and artifacts: an FMRI study.
Zannino, Gian Daniele; Buccione, Ivana; Perri, Roberta; Macaluso, Emiliano; Lo Gerfo, Emanuele; Caltagirone, Carlo; Carlesimo, Giovanni A
2010-03-01
We carried out an fMRI study with a twofold purpose: to investigate the relationship between networks dedicated to semantic and visual processing and to address the issue of whether semantic memory is subserved by a unique network or by different subsystems, according to semantic category or feature type. To achieve our goals, we administered a word-picture matching task, with within-category foils, to 15 healthy subjects during scanning. Semantic distance between the target and the foil and semantic domain of the target-foil pairs were varied orthogonally. Our results suggest that an amodal, undifferentiated network for the semantic processing of living things and artifacts is located in the anterolateral aspects of the temporal lobes; in fact, activity in this substrate was driven by semantic distance, not by semantic category. By contrast, activity in ventral occipito-temporal cortex was driven by category, not by semantic distance. We interpret the latter finding as the effect exerted by systematic differences between living things and artifacts at the level of their structural representations and possibly of their lower-level visual features. Finally, we attempt to reconcile contrasting data in the neuropsychological and functional imaging literature on semantic substrate and category specificity.
Reilly, Jamie; Rodriguez, Amy D; Peelle, Jonathan E; Grossman, Murray
2011-06-01
Portions of left inferior frontal cortex have been linked to semantic memory both in terms of the content of conceptual representation (e.g., motor aspects in an embodied semantics framework) and the cognitive processes used to access these representations (e.g., response selection). Progressive non-fluent aphasia (PNFA) is a neurodegenerative condition characterized by progressive atrophy of left inferior frontal cortex. PNFA can, therefore, provide a lesion model for examining the impact of frontal lobe damage on semantic processing and content. In the current study we examined picture naming in a cohort of PNFA patients across a variety of semantic categories. An embodied approach to semantic memory holds that sensorimotor features such as self-initiated action may assume differential importance for the representation of manufactured artifacts (e.g., naming hand tools). Embodiment theories might therefore predict that patients with frontal damage would be differentially impaired on manufactured artifacts relative to natural kinds, and this prediction was borne out. We also examined patterns of naming errors across a wide range of semantic categories and found that naming error distributions were heterogeneous. Although PNFA patients performed worse overall on naming manufactured artifacts, there was no reliable relationship between anomia and manipulability across semantic categories. These results add to a growing body of research arguing against a purely sensorimotor account of semantic memory, suggesting instead a more nuanced balance of process and content in how the brain represents conceptual knowledge. Copyright © 2010 Elsevier Srl. All rights reserved.
Knowledge of the human body: a distinct semantic domain.
Coslett, H Branch; Saffran, Eleanor M; Schwoebel, John
2002-08-13
Patients with selective deficits in the naming and comprehension of animals, plants, and artifacts have been reported. These descriptions of specific semantic category deficits have contributed substantially to the understanding of the architecture of semantic representations. This study sought to further understanding of the organization of the semantic system by demonstrating that another semantic category, knowledge of the human body, may be selectively preserved. The performance of a patient with semantic dementia was compared with the performance of healthy controls on a variety of tasks assessing distinct types of body representations, including the body schema, body image, and body structural description. Despite substantial deficits on tasks involving language and knowledge of the world generally, the patient performed normally on all tests of body knowledge except body part naming; even in this naming task, however, her performance with body parts was significantly better than on artifacts. The demonstration that body knowledge may be preserved despite substantial semantic deficits involving other types of semantic information argues that body knowledge is a distinct and dissociable semantic category. These data are interpreted as support for a model of semantics that proposes that knowledge is distributed across different cortical regions reflecting the manner in which the information was acquired.
Semantic Document Library: A Virtual Research Environment for Documents, Data and Workflows Sharing
NASA Astrophysics Data System (ADS)
Kotwani, K.; Liu, Y.; Myers, J.; Futrelle, J.
2008-12-01
The Semantic Document Library (SDL) was driven by use cases from the environmental observatory communities and is designed to provide conventional document repository features of uploading, downloading, editing and versioning of documents, as well as value-adding features of tagging, querying, sharing, annotating, ranking, provenance, social networking and geo-spatial mapping services. It allows users to organize a catalogue of watershed observation data, model output, workflows, as well as publications and documents related to the same watershed study through the tagging capability. Users can tag all relevant materials using the same watershed name and easily find all of them later using this tag. The underpinning semantic content repository can store materials from other cyberenvironments, such as workflow or simulation tools, and SDL provides an effective interface to query and organize materials from various sources. Advanced features of the SDL allow users to visualize the provenance of the materials, such as the source and how the output data were derived. Other novel features include visualizing all geo-referenced materials on a geospatial map. As a component of a cyberenvironment portal (the NCSA Cybercollaboratory), SDL has the goal of efficiently managing information and relationships between published artifacts (validated models, vetted data, workflows, annotations, best practices, reviews and papers) produced from raw research artifacts (data, notes, plans, etc.) through agents (people, sensors, etc.). The tremendous scientific potential of artifacts is realized through mechanisms of sharing, reuse and collaboration - empowering scientists to spread their knowledge and protocols and to benefit from the knowledge of others. SDL successfully implements Web 2.0 technologies and design patterns along with a semantic content management approach that enables the use of multiple ontologies and dynamic evolution (e.g., folksonomies) of terminology.
Scientific documents, which involve many interconnected entities (artifacts or agents), are represented as RDF triples using the semantic content repository middleware Tupelo in one or many data/metadata RDF stores. Queries against the RDF enable discovery of relations among data, processes and people, surfacing valuable aspects and making recommendations to users, such as which tools are typically used to answer certain kinds of questions or with certain types of dataset. This innovative concept brings out coherent information about entities from four different perspectives: the social context (Who - human relations and interactions), the causal context (Why - provenance and history), the geo-spatial context (Where - location or spatially referenced information) and the conceptual context (What - domain-specific relations, ontologies, etc.).
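The triple-based representation described above can be sketched in miniature with plain Python standing in for the Tupelo middleware; all entity names and predicates below are invented for illustration, not SDL's actual vocabulary:

```python
# Minimal sketch of a triple store: artifacts, agents, and their relations
# stored as (subject, predicate, object) triples, queried by pattern matching.
triples = {
    ("dataset:flow2008", "type", "Dataset"),
    ("dataset:flow2008", "derivedFrom", "sensor:gauge42"),      # causal context (why)
    ("dataset:flow2008", "createdBy", "person:alice"),          # social context (who)
    ("dataset:flow2008", "locatedIn", "watershed:clearcreek"),  # spatial context (where)
    ("person:alice", "name", "Alice"),
}

def match(pattern, store):
    """Return all triples matching a (s, p, o) pattern; None is a wildcard."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who created the dataset? (the "who" perspective)
creators = match(("dataset:flow2008", "createdBy", None), triples)
print(creators)  # → [('dataset:flow2008', 'createdBy', 'person:alice')]
```

A real deployment would use an RDF store and SPARQL, but the pattern-matching query model is the same.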
How semantic category modulates preschool children's visual memory.
Giganti, Fiorenza; Viggiano, Maria Pia
2015-01-01
The dynamic interplay between perception and memory has been explored in preschool children by presenting filtered stimuli depicting animals and artifacts. The identification of filtered images was markedly influenced by both prior exposure and the semantic nature of the stimuli. The identification of animals required less physical information than that of artifacts. Our results corroborate the notion that the human attention system has evolved to reliably develop definite category-specific selection criteria by which living entities are monitored in different ways.
Knowledge of Natural Kinds in Semantic Dementia and Alzheimer's Disease
ERIC Educational Resources Information Center
Cross, Katy; Smith, Edward E.; Grossman, Murray
2008-01-01
We examined the semantic impairment for natural kinds in patients with probable Alzheimer's disease (AD) and semantic dementia (SD) using an inductive reasoning paradigm. To learn about the relationships between natural kind exemplars and how these are distinguished from manufactured artifacts, subjects judged the strength of arguments such as…
Neurology of anomia in the semantic variant of primary progressive aphasia.
Mesulam, Marsel; Rogalski, Emily; Wieneke, Christina; Cobia, Derin; Rademaker, Alfred; Thompson, Cynthia; Weintraub, Sandra
2009-09-01
The semantic variant of primary progressive aphasia (PPA) is characterized by the combination of word comprehension deficits, fluent aphasia and a particularly severe anomia. In this study, two novel tasks were used to explore the factors contributing to the anomia. The single most common factor was a blurring of distinctions among members of a semantic category, leading to errors of overgeneralization in word-object matching tasks as well as in word definitions and object descriptions. This factor was more pronounced for natural kinds than artifacts. In patients with the more severe anomias, conceptual maps were more extensively disrupted so that inter-category distinctions were as impaired as intra-category distinctions. Many objects that could not be named aloud could be matched to the correct word in patients with mild but not severe anomia, reflecting a gradual intensification of the semantic factor as the naming disorder becomes more severe. Accurate object descriptions were more frequent than accurate word definitions and all patients experienced prominent word comprehension deficits that interfered with everyday activities but no consequential impairment of object usage or face recognition. Magnetic resonance imaging revealed three characteristics: greater atrophy of the left hemisphere; atrophy of anterior components of the perisylvian language network in the superior and middle temporal gyri; and atrophy of anterior components of the face and object recognition network in the inferior and medial temporal lobes. The left sided asymmetry and perisylvian extension of the atrophy explains the more profound impairment of word than object usage and provides the anatomical basis for distinguishing the semantic variant of primary progressive aphasia from the partially overlapping group of patients that fulfil the widely accepted diagnostic criteria for semantic dementia.
ER2OWL: Generating OWL Ontology from ER Diagram
NASA Astrophysics Data System (ADS)
Fahad, Muhammad
Ontology is a fundamental part of the Semantic Web. The goal of the W3C is to bring the web to its full potential as a semantic web while reusing previous systems and artifacts. Most legacy systems have been documented using structured analysis and structured design (SASD), especially with simple or Extended ER Diagrams (ERD). Such systems need to be upgraded to become part of the semantic web. In this paper, we present rules for transforming an ERD to an OWL-DL ontology at the concrete level. These rules facilitate an easy and understandable transformation from ERD to OWL. The set of transformation rules is tested on a structured analysis and design example. The framework provides OWL ontologies as a semantic web fundamental, helping software engineers upgrade the structured analysis and design artifact, the ERD, to components of the semantic web. Moreover, our transformation tool, ER2OWL, reduces the cost and time of building OWL ontologies by reusing existing entity-relationship models.
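The flavor of such transformation rules can be sketched as follows. This is a simplified assumption about the rule set (entity → owl:Class, attribute → owl:DatatypeProperty, relationship → owl:ObjectProperty), not ER2OWL's actual implementation, and the ER model is invented:

```python
# Toy ERD-to-OWL transformer: each ER construct maps to one OWL/XML axiom.
def er_to_owl(entities, attributes, relationships):
    lines = []
    for e in entities:                      # entity -> owl:Class
        lines.append(f'<owl:Class rdf:ID="{e}"/>')
    for attr, entity in attributes:         # attribute -> owl:DatatypeProperty
        lines.append(
            f'<owl:DatatypeProperty rdf:ID="{attr}">'
            f'<rdfs:domain rdf:resource="#{entity}"/>'
            f'</owl:DatatypeProperty>')
    for rel, (src, dst) in relationships:   # relationship -> owl:ObjectProperty
        lines.append(
            f'<owl:ObjectProperty rdf:ID="{rel}">'
            f'<rdfs:domain rdf:resource="#{src}"/>'
            f'<rdfs:range rdf:resource="#{dst}"/>'
            f'</owl:ObjectProperty>')
    return "\n".join(lines)

owl = er_to_owl(
    entities=["Student", "Course"],
    attributes=[("studentName", "Student")],
    relationships=[("enrolledIn", ("Student", "Course"))],
)
print(owl)
```

Cardinalities, weak entities, and inheritance need additional rules, which is where most of the real transformation effort lies.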
Linear separability in superordinate natural language concepts.
Ruts, Wim; Storms, Gert; Hampton, James
2004-01-01
Two experiments are reported in which linear separability was investigated in superordinate natural language concept pairs (e.g., toiletry-sewing gear). Representations of the exemplars of semantically related concept pairs were derived in two to five dimensions using multidimensional scaling (MDS) of similarities based on possession of the concept features. Next, category membership, obtained from an exemplar generation study (in Experiment 1) and from a forced-choice classification task (in Experiment 2) was predicted from the coordinates of the MDS representation using log linear analysis. The results showed that all natural kind concept pairs were perfectly linearly separable, whereas artifact concept pairs showed several violations. Clear linear separability of natural language concept pairs is in line with independent cue models. The violations in the artifact pairs, however, yield clear evidence against the independent cue models.
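The separability test at the core of this design can be illustrated with a toy sketch: given low-dimensional (MDS-like) coordinates for exemplars of two categories, check whether a single linear boundary classifies them perfectly. A perceptron converges exactly when the two sets are linearly separable; the coordinates below are invented, and the study itself used log-linear analysis rather than a perceptron:

```python
# Perceptron-based check for linear separability of two 2D point sets.
def linearly_separable(points_a, points_b, epochs=1000):
    data = [(p, 1) for p in points_a] + [(p, -1) for p in points_b]
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x, y), label in data:
            # Misclassified (or on the boundary): nudge w toward the label.
            if label * (w[0] * x + w[1] * y + b) <= 0:
                w[0] += label * x
                w[1] += label * y
                b += label
                errors += 1
        if errors == 0:
            return True  # a perfect linear boundary was found
    return False  # no convergence: treat as not separable

# Two well-separated clusters (like the natural-kind pairs in the study) ...
assert linearly_separable([(0, 0), (0, 1)], [(3, 3), (3, 4)])
# ... versus interleaved points (like artifact pairs with violations).
assert not linearly_separable([(0, 0), (1, 1)], [(0.5, 0.5), (2, 2)])
```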
Schweizer, Tom A; Dixon, Mike J; Desmarais, Geneviève; Smith, Stephen D
2002-01-01
Identification deficits were investigated in ELM, a temporal lobe stroke patient with category-specific deficits. We replicated previous work done on FS, a patient with category specific deficits as a result of herpes viral encephalitis. ELM was tested using novel, computer generated shapes that were paired with artifact labels. We paired semantically close or disparate labels to shapes and ELM attempted to learn these pairings. Overall, ELM's shape-label confusions were most detrimentally affected when we used labels that referred to objects that were visually and semantically close. However, as with FS, ELM had as many errors when shapes were paired with the labels "donut," "tire," and "washer" as he did when they were paired with visually and semantically close artifact labels. Two explanations are put forth to account for the anomalous performance by both patients on the triad of donut-tire-washer.
Kiser, Patti K; Löhr, Christiane V; Meritet, Danielle; Spagnoli, Sean T; Milovancev, Milan; Russell, Duncan S
2018-05-01
Although quantitative assessment of margins is recommended for describing excision of cutaneous malignancies, the limitations associated with this technique are poorly understood. We described and quantified histologic artifacts in inked margins and determined the association between artifacts and variance in histologic tumor-free margin (HTFM) measurements based on a novel grading scheme applied to 50 sections of normal canine skin and 56 radial margins taken from 15 different canine mast cell tumors (MCTs). Three broad categories of artifact were 1) tissue deformation at inked edges, 2) ink-associated artifacts, and 3) sectioning-associated artifacts. The most common artifacts in MCT margins were ink-associated artifacts, specifically ink absent from an edge (mean prevalence: 50%) and inappropriate ink coloring (mean: 45%). The prevalence of other artifacts in MCT skin was 4-50%. In MCT margins, frequency-adjusted kappa statistics found fair or better inter-rater reliability for 9 of 10 artifacts; intra-rater reliability was moderate or better in 9 of 10 artifacts. Digital HTFM measurements by 5 blinded pathologists had a median standard deviation (SD) of 1.9 mm (interquartile range: 0.8-3.6 mm; range: 0-6.2 mm). Intraclass correlation coefficients demonstrated good inter-pathologist reliability in HTFM measurement (κ = 0.81). Spearman rank correlation coefficients found negligible correlation between artifacts and HTFM SDs (r ≤ 0.3). These data confirm that although histologic artifacts commonly occur in inked margin specimens, artifacts are not meaningfully associated with variation in HTFM measurements. Investigators can use the grading scheme presented herein to identify artifacts associated with tissue processing.
SU-C-304-05: Use of Local Noise Power Spectrum and Wavelets in Comprehensive EPID Quality Assurance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Gopal, A; Yan, G
2015-06-15
Purpose: As EPIDs are increasingly used for IMRT QA and real-time treatment verification, comprehensive quality assurance (QA) of EPIDs becomes critical. Current QA with phantoms such as the Las Vegas and PIPSpro™ can fail in the early detection of EPID artifacts. Beyond image quality assessment, we propose a quantitative methodology using the local noise power spectrum (NPS) to characterize image noise and the wavelet transform to identify bad pixels and inter-subpanel flat-fielding artifacts. Methods: A total of 93 image sets including bar-pattern images and open exposure images were collected from four iViewGT a-Si EPID systems over three years. Quantitative metrics such as the modulation transfer function (MTF), NPS and detective quantum efficiency (DQE) were computed for each image set. Local 2D NPS was calculated for each subpanel. A 1D NPS was obtained by radially averaging the 2D NPS and fitted to a power-law function. The R-square and slope of the linear regression analysis were used for panel performance assessment. Haar wavelet transformation was employed to identify pixel defects and non-uniform gain correction across subpanels. Results: Overall image quality was assessed with DQE based on empirically derived area under curve (AUC) thresholds. Using linear regression analysis of the 1D NPS, panels with acceptable flat fielding were indicated by r-square values between 0.8 and 1 and slopes of −0.4 to −0.7. For panels requiring flat-fielding recalibration, however, r-square values less than 0.8 and slopes from +0.2 to −0.4 were observed. The wavelet transform successfully identified pixel defects and inter-subpanel flat-fielding artifacts. Standard QA with the Las Vegas and PIPSpro phantoms failed to detect these artifacts. Conclusion: The proposed QA methodology is promising for the early detection of imaging and dosimetric artifacts of EPIDs. The local NPS can accurately characterize the noise level within each subpanel, while the wavelet transform can detect bad pixels and inter-subpanel flat-fielding artifacts.
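The 1D NPS analysis described above can be sketched as follows: radially average the 2D power spectrum of a uniform-exposure image, then fit a power law (a line in log-log space) and inspect slope and R². The thresholds quoted in the abstract (slope −0.4 to −0.7, r-square above 0.8) are the authors' empirical values; the synthetic white-noise image here is only illustrative and yields a nearly flat spectrum:

```python
import numpy as np

# Synthetic flood-field image: uniform exposure plus white noise.
rng = np.random.default_rng(0)
img = rng.normal(loc=100.0, scale=1.0, size=(128, 128))

# 2D NPS: squared magnitude of the mean-subtracted image's FFT.
f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
nps2d = np.abs(f) ** 2 / img.size

# Radially average the 2D NPS into a 1D NPS.
y, x = np.indices(nps2d.shape)
c = np.array(nps2d.shape) // 2
r = np.hypot(y - c[0], x - c[1]).astype(int)
nbins = r.max() + 1
nps1d = (np.bincount(r.ravel(), nps2d.ravel(), minlength=nbins)
         / np.bincount(r.ravel(), minlength=nbins))

# Power-law fit: linear regression in log-log space (skip the DC bin).
freq = np.arange(1, 60)
logy = np.log(nps1d[1:60])
slope, intercept = np.polyfit(np.log(freq), logy, 1)
pred = slope * np.log(freq) + intercept
r_square = 1 - np.sum((logy - pred) ** 2) / np.sum((logy - logy.mean()) ** 2)
print(f"slope={slope:.2f}, r_square={r_square:.2f}")
```

On real EPID flood fields the spectrum falls with frequency, so the fitted slope is negative and the fit quality flags panels needing flat-fielding recalibration.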
Kalénine, Solène; Buxbaum, Laurel J.
2016-01-01
Converging evidence supports the existence of functionally and neuroanatomically distinct taxonomic (similarity-based; e.g., hammer-screwdriver) and thematic (event-based; e.g., hammer-nail) semantic systems. Processing of thematic relations between objects has been shown to selectively recruit the left posterior temporoparietal cortex. Similar posterior regions have also been shown to be critical for knowledge of relationships between actions and manipulable human-made objects (artifacts). Based on the hypothesis that thematic relationships for artifacts rest, at least in part, on action relationships, we assessed the prediction that the same regions of the left posterior temporoparietal cortex would be critical for conceptual processing of artifact-related actions and thematic relations for artifacts. To test this hypothesis, we evaluated processing of taxonomic and thematic relations for artifact and natural objects as well as artifact action knowledge (gesture recognition) abilities in a large sample of 48 stroke patients with a range of lesion foci in the left hemisphere. Like control participants, patients identified thematic relations faster than taxonomic relations for artifacts, whereas they identified taxonomic relations faster than thematic relations for natural objects. Moreover, response times for identifying thematic relations for artifacts selectively predicted performance in gesture recognition. Whole brain Voxel Based Lesion-Symptom Mapping (VLSM) analyses and Region of Interest (ROI) regression analyses further demonstrated that lesions to the left posterior temporal cortex, overlapping with LTO and visual motion area hMT+, were associated both with relatively slower response times in identifying thematic relations for artifacts and poorer artifact action knowledge in patients.
These findings provide novel insights into the functional role of left posterior temporal cortex in thematic knowledge, and suggest that the close association between thematic relations for artifacts and action representations may reflect their common dependence on visual motion and manipulation information. PMID:27389801
ERIC Educational Resources Information Center
Tupak, Sara V.; Badewien, Meike; Dresler, Thomas; Hahn, Tim; Ernst, Lena H.; Herrmann, Martin J.; Fallgatter, Andreas J.; Ehlis, Ann-Christine
2012-01-01
Movement artifacts are still considered a problematic issue for imaging research on overt language production. This motion-sensitivity can be overcome by functional near-infrared spectroscopy (fNIRS). In the present study, 50 healthy subjects performed a combined phonemic and semantic overt verbal fluency task while frontal and temporal cortex…
Component Models for Semantic Web Languages
NASA Astrophysics Data System (ADS)
Henriksson, Jakob; Aßmann, Uwe
Intelligent applications and agents on the Semantic Web typically need to be specified with, or interact with specifications written in, many different kinds of formal languages. Such languages include ontology languages, data and metadata query languages, as well as transformation languages. As learnt from years of experience in development of complex software systems, languages need to support some form of component-based development. Components enable higher software quality, better understanding and reusability of already developed artifacts. Any component approach contains an underlying component model, a description detailing what valid components are and how components can interact. With the multitude of languages developed for the Semantic Web, what are their underlying component models? Do we need to develop one for each language, or is a more general and reusable approach achievable? We present a language-driven component model specification approach. This means that a component model can be (automatically) generated from a given base language (actually, its specification, e.g. its grammar). As a consequence, we can provide components for different languages and simplify the development of software artifacts used on the Semantic Web.
Desai, Rutvik H.; Graves, William W.; Conant, Lisa L.
2009-01-01
Semantic memory refers to knowledge about people, objects, actions, relations, self, and culture acquired through experience. The neural systems that store and retrieve this information have been studied for many years, but a consensus regarding their identity has not been reached. Using strict inclusion criteria, we analyzed 120 functional neuroimaging studies focusing on semantic processing. Reliable areas of activation in these studies were identified using the activation likelihood estimate (ALE) technique. These activations formed a distinct, left-lateralized network comprised of 7 regions: posterior inferior parietal lobe, middle temporal gyrus, fusiform and parahippocampal gyri, dorsomedial prefrontal cortex, inferior frontal gyrus, ventromedial prefrontal cortex, and posterior cingulate gyrus. Secondary analyses showed specific subregions of this network associated with knowledge of actions, manipulable artifacts, abstract concepts, and concrete concepts. The cortical regions involved in semantic processing can be grouped into 3 broad categories: posterior multimodal and heteromodal association cortex, heteromodal prefrontal cortex, and medial limbic regions. The expansion of these regions in the human relative to the nonhuman primate brain may explain uniquely human capacities to use language productively, plan, solve problems, and create cultural and technological artifacts, all of which depend on the fluid and efficient retrieval and manipulation of semantic knowledge. PMID:19329570
A logical approach to semantic interoperability in healthcare.
Bird, Linda; Brooks, Colleen; Cheong, Yu Chye; Tun, Nwe Ni
2011-01-01
Singapore is in the process of rolling out a number of national e-health initiatives, including the National Electronic Health Record (NEHR). A critical enabler in the journey towards semantic interoperability is a Logical Information Model (LIM) that harmonises the semantics of the information structure with the terminology. The Singapore LIM uses a combination of international standards, including ISO 13606-1 (a reference model for electronic health record communication), ISO 21090 (healthcare datatypes), and SNOMED CT (healthcare terminology). The LIM is accompanied by a logical design approach, used to generate interoperability artifacts, and incorporates mechanisms for achieving unidirectional and bidirectional semantic interoperability.
Gainotti, Guido; Ciaraffa, Francesca; Silveri, Maria Caterina; Marra, Camillo
2009-11-01
According to the "sensory-motor model of semantic knowledge," different categories of knowledge differ in the weight that different "sources of knowledge" have in their representation. Our study aimed to evaluate this model, checking whether subjective evaluations given by normal subjects confirm the different weight that various sources of knowledge have in the representation of different biological and artifact categories and of unique entities, such as famous people or monuments. Results showed that visual properties are considered the main source of knowledge for all the living and nonliving categories (as well as for unique entities), but that the clustering of these "sources of knowledge" is different for biological and artifact categories. Visual data are, indeed, mainly associated with other perceptual (auditory, olfactory, gustatory, and tactual) attributes in the mental representation of living beings and unique entities, whereas they are associated with action-related properties and tactile information in the case of artifacts.
ERIC Educational Resources Information Center
Pirnay-Dummer, Pablo
2015-01-01
A local semantic trace is a certain quasi-propositional structure that can still be reconstructed from written content that is incomplete or does not follow a proper grammar. It can also retrace bits of knowledge from text containing only very few words, making the microstructure of these artifacts of knowledge externalization available for…
Semi-automated ontology generation and evolution
NASA Astrophysics Data System (ADS)
Stirtzinger, Anthony P.; Anken, Craig S.
2009-05-01
Extending the notion of data models or object models, ontology can provide rich semantic definition not only to the meta-data but also to the instance data of domain knowledge, making these semantic definitions available in machine-readable form. However, the generation of an effective ontology is a difficult task involving considerable labor and skill. This paper discusses an Ontology Generation and Evolution Processor (OGEP) aimed at automating this process, only requesting user input when unresolvable ambiguous situations occur. OGEP directly attacks the main barrier that prevents automated (or self-learning) ontology generation: the ability to understand the meaning of artifacts and the relationships the artifacts have to the domain space. OGEP leverages existing lexical-to-ontological mappings in the form of WordNet and the Suggested Upper Merged Ontology (SUMO), integrated with a semantic pattern-based structure referred to as the Semantic Grounding Mechanism (SGM) and implemented as a Corpus Reasoner. The OGEP processing is initiated by a Corpus Parser performing a lexical analysis of the corpus, reading in a document (or corpus) and preparing it for processing by annotating words and phrases. After the Corpus Parser is done, the Corpus Reasoner uses the parts-of-speech output to determine the semantic meaning of a word or phrase. The Corpus Reasoner is the crux of the OGEP system, analyzing, extrapolating, and evolving data from free text into cohesive semantic relationships. The Semantic Grounding Mechanism provides a basis for identifying and mapping semantic relationships. By blending together the WordNet lexicon and the SUMO ontological layout, the SGM is given breadth and depth in its ability to extrapolate semantic relationships between domain entities. The combination of all these components results in an innovative approach to user-assisted semantic-based ontology generation.
This paper will describe the OGEP technology in the context of the architectural components referenced above and identify a potential technology transition path to Scott AFB's Tanker Airlift Control Center (TACC) which serves as the Air Operations Center (AOC) for the Air Mobility Command (AMC).
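The parse-then-reason flow described above can be caricatured in a few lines. The tiny handmade lexicon below stands in for the WordNet/SUMO mappings, and the single extraction rule is invented for illustration; OGEP's actual components are far richer:

```python
# Toy Corpus Parser -> Corpus Reasoner pipeline.
LEXICON = {  # word -> grounded concept (stand-in for WordNet -> SUMO mapping)
    "river": "WaterBody",
    "sensor": "MeasuringDevice",
    "measures": "MeasuringProcess",
}

def parse(corpus):
    """Corpus Parser: annotate each token with its grounded concept (or None)."""
    return [(w, LEXICON.get(w.lower())) for w in corpus.split()]

def reason(annotated):
    """Corpus Reasoner: extract (concept, relation, concept) candidates
    around tokens grounded as processes, when both neighbors are grounded."""
    triples = []
    for i, (word, concept) in enumerate(annotated):
        if concept == "MeasuringProcess" and 0 < i < len(annotated) - 1:
            left, right = annotated[i - 1][1], annotated[i + 1][1]
            if left and right:
                triples.append((left, word, right))
    return triples

annotated = parse("The sensor measures river discharge")
print(reason(annotated))  # → [('MeasuringDevice', 'measures', 'WaterBody')]
```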
Hoyau, E; Cousin, E; Jaillard, A; Baciu, M
2016-12-01
We evaluated the effect of normal aging on the inter-hemispheric processing of semantic information by using the divided visual field (DVF) method, with words and pictures. Two main theoretical models have been considered, (a) the HAROLD model, which posits that aging is associated with supplementary recruitment of the right hemisphere (RH) and decreased hemispheric specialization, and (b) the RH decline theory, which assumes that the RH becomes less efficient with aging, associated with increased LH specialization. Two groups of subjects were examined, a Young Group (YG) and an Old Group (OG), while participants performed a semantic categorization task (living vs. non-living) on words and pictures. The DVF was realized in two steps: (a) unilateral DVF presentation with stimuli presented separately in each visual field, left or right, allowing for their initial processing by only one hemisphere, right or left, respectively; (b) bilateral DVF presentation (BVF) with stimuli presented simultaneously in both visual fields, followed by their processing by both hemispheres. These two types of presentation permitted the evaluation of two main characteristics of the inter-hemispheric processing of information, hemispheric specialization (HS) and inter-hemispheric cooperation (IHC). Moreover, the BVF allowed us to determine the driver hemisphere for processing information presented in BVF. Results obtained in OG indicated that: (a) semantic categorization was performed as accurately as in YG, even if more slowly, (b) a non-semantic RH decline was observed, and (c) the LH controls semantic processing during the BVF, suggesting an increased role of the LH in aging. However, despite the stronger involvement of the LH in OG, the RH is not completely devoid of semantic abilities. As discussed in the paper, neither the HAROLD model nor the RH decline theory fully explains this pattern of results.
We rather suggest that the effect of aging on the hemispheric specialization and inter-hemispheric cooperation during semantic processing is explained not by only one model, but by an interaction between several complementary mechanisms and models. Copyright © 2015 Elsevier Ltd. All rights reserved.
Individual Variation in the Late Positive Complex to Semantic Anomalies
Kos, Miriam; van den Brink, Danielle; Hagoort, Peter
2012-01-01
It is well-known that, within ERP paradigms of sentence processing, semantically anomalous words elicit N400 effects. Less clear, however, is what happens after the N400. In some cases N400 effects are followed by Late Positive Complexes (LPC), whereas in other cases such effects are lacking. We investigated several factors which could affect the LPC, such as contextual constraint, inter-individual variation, and working memory. Seventy-two participants read sentences containing a semantic manipulation (Whipped cream tastes sweet/anxious and creamy). Neither contextual constraint nor working memory correlated with the LPC. Inter-individual variation played a substantial role in the elicitation of the LPC with about half of the participants showing a negative response and the other half showing an LPC. This individual variation correlated with a syntactic ERP as well as an alternative semantic manipulation. In conclusion, our results show that inter-individual variation plays a large role in the elicitation of the LPC and this may account for the diversity in LPC findings in language research. PMID:22973249
Double dissociation of semantic categories in Alzheimer's disease.
Gonnerman, L M; Andersen, E S; Devlin, J T; Kempler, D; Seidenberg, M S
1997-04-01
Data that demonstrate distinct patterns of semantic impairment in Alzheimer's disease (AD) are presented. Findings suggest that while groups of mild-moderate patients may not display category specific impairments, some individual patients do show selective impairment of either natural kinds or artifacts. We present a model of semantic organization in which category specific impairments arise from damage to distributed features underlying different types of categories. We incorporate the crucial notions of intercorrelations and distinguishing features, allowing us to demonstrate (1) how category specific impairments can result from widespread damage and (2) how selective deficits in AD reflect different points in the progression of impairment. The different patterns of impairment arise from an interaction between the nature of the semantic categories and the progression of damage.
Representing annotation compositionality and provenance for the Semantic Web
2013-01-01
Background: Though the annotation of digital artifacts with metadata has a long history, the bulk of that work focuses on the association of single terms or concepts to single targets. As annotation efforts expand to capture more complex information, annotations will need to be able to refer to knowledge structures formally defined in terms of more atomic knowledge structures. Existing provenance efforts in the Semantic Web domain primarily focus on tracking provenance at the level of whole triples and do not provide enough detail to track how individual triple elements of annotations were derived from triple elements of other annotations. Results: We present a task- and domain-independent ontological model for capturing annotations and their linkage to their denoted knowledge representations, which can be singular concepts or more complex sets of assertions. We have implemented this model as an extension of the Information Artifact Ontology in OWL and made it freely available, and we show how it can be integrated with several prominent annotation and provenance models. We present several application areas for the model, ranging from linguistic annotation of text to the annotation of disease-associations in genome sequences. Conclusions: With this model, progressively more complex annotations can be composed from other annotations, and the provenance of compositional annotations can be represented at the annotation level or at the level of individual elements of the RDF triples composing the annotations. This in turn allows for progressively richer annotations to be constructed from previous annotation efforts, the precise provenance recording of which facilitates evidence-based inference and error tracking. PMID:24268021
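The element-level provenance idea can be sketched in plain Python; the dataclass and identifiers below are hypothetical stand-ins for illustration, not the OWL model itself:

```python
from dataclasses import dataclass
from typing import Optional

# A triple element that records which earlier annotation it was derived from,
# so provenance is tracked per element, not only per whole triple.
@dataclass(frozen=True)
class Element:
    value: str
    derived_from: Optional[str] = None   # id of the source annotation, if any

# A simple annotation: a text span denotes a concept.
ann1 = {"id": "ann:1",
        "triples": [(Element("span:12-18"), Element("denotes"),
                     Element("concept:GeneX"))]}

# A compositional annotation reusing one element of ann:1; only that
# element carries provenance back to ann:1, the others carry none.
ann2 = {"id": "ann:2",
        "triples": [(Element("concept:GeneX", derived_from="ann:1"),
                     Element("associatedWith"),
                     Element("disease:D123"))]}

s, p, o = ann2["triples"][0]
print(s.derived_from, o.derived_from)   # -> ann:1 None
```

The point of the sketch is that the subject of `ann:2` knows it came from `ann:1` even though the triple as a whole is new.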
Pluciennicka, Ewa; Wamain, Yannick; Coello, Yann; Kalénine, Solène
2016-07-01
The aim of this study was to specify the role of action representations in thematic and functional similarity relations between manipulable artifact objects. Recent behavioral and neurophysiological evidence indicates that while they are all relevant for manipulable artifact concepts, semantic relations based on thematic (e.g., saw-wood), specific function similarity (e.g., saw-axe), and general function similarity (e.g., saw-knife) are differently processed, and may relate to different levels of action representation. Point-light displays of object-related actions previously encoded at the gesture level (e.g., "sawing") or at the higher level of action representation (e.g., "cutting") were used as primes before participants identified target objects (e.g., saw) among semantically related and unrelated distractors (e.g., wood, feather, piano). Analysis of eye movements on the different objects during target identification informed about the amplitude and the timing of implicit activation of the different semantic relations. Results showed that action prime encoding impacted the processing of thematic relations, but not that of functional similarity relations. Semantic competition with thematic distractors was greater and earlier following action primes encoded at the gesture level compared to action primes encoded at higher level. As a whole, these findings highlight the direct influence of action representations on thematic relation processing, and suggest that thematic relations involve gesture-level representations rather than intention-level representations.
Distinct neural substrates for semantic knowledge and naming in the temporoparietal network.
Gesierich, Benno; Jovicich, Jorge; Riello, Marianna; Adriani, Michela; Monti, Alessia; Brentari, Valentina; Robinson, Simon D; Wilson, Stephen M; Fairhall, Scott L; Gorno-Tempini, Maria Luisa
2012-10-01
Patients with anterior temporal lobe (ATL) lesions show semantic and lexical retrieval deficits, and the differential role of this area in the 2 processes is debated. Functional neuroimaging in healthy individuals has not clarified the matter because semantic and lexical processes usually occur simultaneously and automatically. Furthermore, the ATL is a region challenging for functional magnetic resonance imaging (fMRI) due to susceptibility artifacts, especially at high fields. In this study, we established an optimized ATL-sensitive fMRI acquisition protocol at 4 T and applied an event-related paradigm to study the identification (i.e., association of semantic biographical information) of celebrities, with and without the ability to retrieve their proper names. While semantic processing reliably activated the ATL, only more posterior areas in the left temporal and temporal-parietal junction were significantly modulated by covert lexical retrieval. These results suggest that within a temporoparietal network, the ATL is relatively more important for semantic processing, and posterior language regions are relatively more important for lexical retrieval.
Hantsch, Ansgar; Jescheniak, Jörg D; Mädebach, Andreas
2012-07-01
The picture-word interference paradigm is a prominent tool for studying lexical retrieval during speech production. When participants name the pictures, interference from semantically related distractor words has regularly been shown. By contrast, when participants categorize the pictures, facilitation from semantically related distractors has typically been found. In the extant studies, however, differences in the task instructions (naming vs. categorizing) were confounded with the response level: While responses in naming were typically located at the basic level (e.g., "dog"), responses were located at the superordinate level in categorization (e.g., "animal"). The present study avoided this confound by having participants respond at the basic level in both naming and categorization, using the same pictures, distractors, and verbal responses. Our findings confirm the polarity reversal of the semantic effects--that is, semantic interference in naming, and semantic facilitation in categorization. These findings show that the polarity reversal of the semantic effect is indeed due to the different tasks and is not an artifact of the different response levels used in previous studies. Implications for current models of language production are discussed.
Olsher, Daniel
2014-10-01
Noise-resistant and nuanced, COGBASE makes 10 million pieces of commonsense data and a host of novel reasoning algorithms available via a family of semantically-driven prior probability distributions. Machine learning, Big Data, natural language understanding/processing, and social AI can draw on COGBASE to determine lexical semantics, infer goals and interests, simulate emotion and affect, calculate document gists and topic models, and link commonsense knowledge to domain models and social, spatial, cultural, and psychological data. COGBASE is especially ideal for social Big Data, which tends to involve highly implicit contexts, cognitive artifacts, difficult-to-parse texts, and deep domain knowledge dependencies. Copyright © 2014 Elsevier Ltd. All rights reserved.
Depeursinge, Adrien; Kurtz, Camille; Beaulieu, Christopher; Napel, Sandy; Rubin, Daniel
2014-08-01
We describe a framework to model visual semantics of liver lesions in CT images in order to predict the visual semantic terms (VST) reported by radiologists in describing these lesions. Computational models of VST are learned from image data using linear combinations of high-order steerable Riesz wavelets and support vector machines (SVM). In a first step, these models are used to predict the presence of each semantic term that describes liver lesions. In a second step, the distances between all VST models are calculated to establish a nonhierarchical computationally-derived ontology of VST containing inter-term synonymy and complementarity. A preliminary evaluation of the proposed framework was carried out using 74 liver lesions annotated with a set of 18 VSTs from the RadLex ontology. A leave-one-patient-out cross-validation resulted in an average area under the ROC curve of 0.853 for predicting the presence of each VST. The proposed framework is expected to foster human-computer synergies for the interpretation of radiological images while using rotation-covariant computational models of VSTs to 1) quantify their local likelihood and 2) explicitly link them with pixel-based image content in the context of a given imaging domain.
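A minimal sketch of the two-step framework, under loud assumptions: random features stand in for the Riesz wavelet coefficients, and a ridge-regression classifier per term stands in for the SVMs used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 40 lesions x 16 wavelet-like features, 3 semantic terms.
X = rng.normal(size=(40, 16))
true_w = rng.normal(size=(16, 3))
Y = (X @ true_w > 0).astype(float)   # binary presence/absence of each VST

# Step one: one linear model per term, fit by ridge regression on +/-1 targets
# (a stand-in for the per-term SVMs described in the abstract).
lam = 0.1
W = np.linalg.solve(X.T @ X + lam * np.eye(16), X.T @ (2 * Y - 1))
pred = (X @ W > 0).astype(float)
accuracy = (pred == Y).mean()

# Step two: pairwise similarities between the term models, the basis of the
# computationally derived ontology of inter-term synonymy/complementarity.
norms = np.linalg.norm(W, axis=0)
cosine = (W.T @ W) / np.outer(norms, norms)
print(pred.shape, cosine.shape)
```

The real framework additionally uses leave-one-patient-out cross-validation; training-set accuracy here is only to show the fit happened.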
The validity and reliability of a simple semantic classification of foot posture.
Cross, Hugh A; Lehman, Linda
2008-12-01
The Simple Semantic Classification (SSC) is described as a pragmatic method to assist in the assessment of the weight-bearing foot. It was designed for application by therapists and technicians working in underdeveloped situations, after they have had basic orientation in foot function. The aim was to present evidence of the validity and inter-observer reliability of the SSC. Thirteen physiotherapists from LEPRA India projects and 12 physical therapists functioning within the National Programme for the Elimination of Hansen's Disease (PNEH), Brazil, participated in an inter-observer exercise. Inter-observer agreement was gauged using the Kappa statistic. The results of the inter-observer exercise were dependent on observations of foot posture made from photographs. This was necessary to ensure that the procedure was standardised for participants in different countries. The method had limitations which were partly reflected in the results. The level of agreement between the principal investigator and Indian physiotherapists was Kappa = 0.58. The level of agreement between Brazilian physical therapists and the principal investigator was Kappa = 0.70. The authors opine that the results were sufficiently compelling to suggest that the Simple Semantic Classification can be used as a field method to identify people at increased risk of foot pathologies.
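The inter-observer agreement figures above are Cohen's kappa values; a minimal sketch of the computation, with hypothetical foot-posture ratings (the category names are illustrative, not the SSC's actual classes):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of six photographs by two observers.
a = ['neutral', 'pronated', 'neutral', 'supinated', 'pronated', 'neutral']
b = ['neutral', 'pronated', 'supinated', 'supinated', 'pronated', 'pronated']
print(round(cohens_kappa(a, b), 2))   # -> 0.52
```

Kappa of 0.58 and 0.70, as reported above, are conventionally read as moderate and substantial agreement, respectively.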
A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT
2007-01-30
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
Attention during natural vision warps semantic representation across the human brain.
Çukur, Tolga; Nishimoto, Shinji; Huth, Alexander G; Gallant, Jack L
2013-06-01
Little is known about how attention changes the cortical representation of sensory information in humans. On the basis of neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional magnetic resonance imaging to measure how semantic representation changed during visual search for different object categories in natural movies. We found that many voxels across occipito-temporal and fronto-parietal cortex shifted their tuning toward the attended category. These tuning shifts expanded the representation of the attended category and of semantically related, but unattended, categories, and compressed the representation of categories that were semantically dissimilar to the target. Attentional warping of semantic representation occurred even when the attended category was not present in the movie; thus, the effect was not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.
NASA Astrophysics Data System (ADS)
Paulraj, D.; Swamynathan, S.; Madhaiyan, M.
2012-11-01
Web Service composition has become indispensable as a single web service cannot satisfy complex functional requirements. Composition of services has received much interest to support business-to-business (B2B) or enterprise application integration. An important component of service composition is the discovery of relevant services. In Semantic Web Services (SWS), service discovery is generally achieved by using the service profile of Ontology Web Language for Services (OWL-S). The profile of the service is a derived and concise description but not a functional part of the service. The information contained in the service profile is sufficient for atomic service discovery, but it is not sufficient for the discovery of composite semantic web services (CSWS). The purpose of this article is twofold: first, to show that the process model is a better choice than the service profile for service discovery; and second, to facilitate the composition of inter-organisational CSWS by proposing a new composition method that uses process ontology. The proposed service composition approach uses an algorithm that performs a fine-grained match at the level of the atomic process rather than at the level of the entire service in a composite semantic web service. Many works carried out in this area have proposed solutions only for the composition of atomic services; this article proposes a solution for the composition of composite semantic web services.
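The fine-grained matching at the atomic-process level might be sketched as follows; the service names, process names, and input/output sets are hypothetical, and real OWL-S matching reasons over ontology concepts rather than string sets:

```python
# Composite services decomposed into atomic processes, each with typed
# inputs and outputs. Discovery matches the request against each atomic
# process, not against the service's whole profile.
services = {
    "TravelService": [
        {"name": "BookFlight", "inputs": {"city", "date"}, "outputs": {"ticket"}},
        {"name": "BookHotel", "inputs": {"city", "date"}, "outputs": {"booking"}},
    ],
    "WeatherService": [
        {"name": "GetForecast", "inputs": {"city"}, "outputs": {"forecast"}},
    ],
}

def discover(request_inputs, request_outputs):
    """Return (service, process) pairs whose atomic process can serve the
    request: its inputs are available and it produces the wanted outputs."""
    hits = []
    for service, processes in services.items():
        for proc in processes:
            if proc["inputs"] <= request_inputs and request_outputs <= proc["outputs"]:
                hits.append((service, proc["name"]))
    return hits

print(discover({"city", "date"}, {"booking"}))   # -> [('TravelService', 'BookHotel')]
```

A profile-level matcher would only see `TravelService` as a whole; the process-level matcher pinpoints `BookHotel` inside it.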
Perrone-Bertolotti, Marcela; Lemonnier, Sophie; Baciu, Monica
2013-01-01
HIGHLIGHTS: The redundant bilateral visual presentation of verbal stimuli decreases asymmetry and increases the cooperation between the two hemispheres. The increased cooperation between the hemispheres is related to semantic information during lexical processing. The inter-hemispheric interaction is represented by both inhibition and cooperation. This study explores inter-hemispheric interaction (IHI) during a lexical decision task by using a behavioral approach, the bilateral presentation of stimuli within a divided visual field experiment. Previous studies have shown that compared to unilateral presentation, the bilateral redundant (BR) presentation decreases the inter-hemispheric asymmetry and facilitates the cooperation between hemispheres. However, it is still poorly understood which type of information facilitates this cooperation. In the present study, verbal stimuli were presented unilaterally (left or right visual hemi-field successively) and bilaterally (left and right visual hemi-field simultaneously). Moreover, during the bilateral presentation of stimuli, we manipulated the relationship between target and distractors in order to specify the type of information which modulates the IHI. Thus, three types of information were manipulated: perceptual, semantic, and decisional, respectively named pre-lexical, lexical and post-lexical processing. Our results revealed left hemisphere (LH) lateralization during the lexical decision task. In terms of inter-hemisphere interaction, the perceptual and decision-making information increased the inter-hemispheric asymmetry, suggesting the inhibition of one hemisphere upon the other. In contrast, semantic information decreased the inter-hemispheric asymmetry, suggesting cooperation between the hemispheres.
We discussed our results according to current models of IHI and concluded that cerebral hemispheres interact and communicate according to various excitatory and inhibitory mechanisms, all of which depend on specific processes and various levels of word processing.
Yue, Dong; Fan Rong, Cheng; Ning, Cai; Liang, Hu; Ai Lian, Liu; Ru Xin, Wang; Ya Hong, Luo
2018-07-01
Background: The evaluation of hip arthroplasty is a challenge in computed tomography (CT). The virtual monochromatic spectral (VMS) images with metal artifact reduction software (MARs) in spectral CT can reduce the artifacts and improve the image quality. Purpose: To evaluate the effects of VMS images and MARs for metal artifact reduction in patients with unilateral hip arthroplasty. Material and Methods: Thirty-five patients underwent dual-energy CT. Four sets of VMS images without MARs and four sets of VMS images with MARs were obtained. Artifact index (AI), CT number, and SD value were assessed at the periprosthetic region and the pelvic organs. The scores of two observers for different images and the inter-observer agreement were evaluated. Results: The AIs in 120 and 140 keV images were significantly lower than those in 80 and 100 keV images. The AIs of the periprosthetic region in VMS images with MARs were significantly lower than those in VMS images without MARs, while the AIs of pelvic organs were not significantly different. VMS images with MARs improved the accuracy of CT numbers for the periprosthetic region. The inter-observer agreements were good for all the images. VMS images with MARs at 120 and 140 keV had higher subjective scores and could improve the image quality, leading to reliable diagnosis of prosthesis-related problems. Conclusion: VMS images with MARs at 120 and 140 keV could significantly reduce the artifacts from hip arthroplasty and improve the image quality at the periprosthetic region but had no obvious advantage for pelvic organs.
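The artifact index (AI) is commonly computed from region-of-interest noise; a minimal sketch assuming the usual definition AI = sqrt(SD_roi^2 - SD_ref^2), with hypothetical HU values (the abstract does not state its exact formula):

```python
import math

def artifact_index(sd_roi, sd_reference):
    """Artifact index as commonly defined in CT metal-artifact studies:
    excess noise in the ROI over an artifact-free reference region.
    Clamped at zero in case the ROI is quieter than the reference."""
    return math.sqrt(max(sd_roi ** 2 - sd_reference ** 2, 0.0))

# Hypothetical SD values (in HU) near the prosthesis vs. a clean region.
print(round(artifact_index(85.0, 13.0), 1))   # -> 84.0
```

A lower AI at 120 and 140 keV with MARs, as reported above, means the periprosthetic noise approaches that of the reference region.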
A Semantic Web-Based Methodology for Describing Scientific Research Efforts
ERIC Educational Resources Information Center
Gandara, Aida
2013-01-01
Scientists produce research resources that are useful to future research and innovative efforts. In a typical scientific scenario, the results created by a collaborative team often include numerous artifacts, observations and relationships relevant to research findings, such as programs that generate data, parameters that impact outputs, workflows…
Is Young Children's Passive Syntax Semantically Constrained? Evidence from Syntactic Priming
ERIC Educational Resources Information Center
Messenger, Katherine; Branigan, Holly P.; McLean, Janet F.; Sorace, Antonella
2012-01-01
Previous research suggests that English-speaking children comprehend agent-patient verb passives earlier than experiencer-theme verb passives (Maratsos, Fox, Becker, & Chalkley, 1985). We report three experiments examining whether such effects reflect delayed acquisition of the passive syntax or instead are an artifact of the experimental task,…
Partonomies for interactive explorable 3D-models of anatomy.
Schubert, R; Höhne, K H
1998-01-01
We introduce a concept to model subtle part-whole semantics for use with interactive 3D models of human anatomy. Similar to experiences with modeling partonomies for physical artifacts like machines or buildings, we found a single part-whole relation to be insufficient to represent anatomical reality. This claim is illustrated with anatomical examples. According to the requirements these examples demand, a semantic classification of part-whole relations is introduced. Initial results in modeling anatomical partonomies for a 3D visualization environment proved this approach to be a promising way to represent anatomy and to enable powerful complex inferences.
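The idea of several typed part-whole relations can be sketched in a few lines; the relation names and anatomical examples below are illustrative, not the authors' actual classification:

```python
# A partonomy with typed part-whole relations instead of a single
# undifferentiated "part-of" link: (part, whole, relation type).
partonomy = [
    ("left ventricle", "heart", "component-of"),
    ("myocardium", "heart", "stuff-of"),          # tissue composing an organ
    ("heart", "cardiovascular system", "member-of-system"),
]

def parts_of(whole, relation=None):
    """All parts of a whole, optionally restricted to one relation type."""
    return [p for p, w, r in partonomy
            if w == whole and (relation is None or r == relation)]

print(parts_of("heart"))                 # -> ['left ventricle', 'myocardium']
print(parts_of("heart", "stuff-of"))     # -> ['myocardium']
```

Typing the relations lets an interactive viewer answer "what is the heart made of?" differently from "what are the heart's components?", which a single part-of link cannot do.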
A Novel Stimulus Artifact Removal Technique for High-Rate Electrical Stimulation
Heffer, Leon F; Fallon, James B
2008-01-01
Electrical stimulus artifact corrupting electrophysiological recordings often makes the subsequent analysis of the underlying neural response difficult. This is particularly evident when investigating short-latency neural activity in response to high-rate electrical stimulation. We developed and evaluated an off-line technique for the removal of stimulus artifact from electrophysiological recordings. Pulsatile electrical stimulation was presented at rates of up to 5000 pulses/s during extracellular recordings of guinea pig auditory nerve fibers. Stimulus artifact was removed by replacing the sample points at each stimulus artifact event with values interpolated along a straight line, computed from neighbouring sample points. This technique required only that artifact events be identifiable and that the artifact duration remain less than both the inter-stimulus interval and the time course of the action potential. We have demonstrated that this computationally efficient sample-and-interpolate technique removes the stimulus artifact with minimal distortion of the action potential waveform. We suggest that this technique may have potential applications in a range of electrophysiological recording systems. PMID:18339428
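The sample-and-interpolate technique lends itself to a short sketch; the function name, parameters, and toy trace below are illustrative, not the authors' implementation:

```python
import numpy as np

def remove_stimulus_artifact(signal, artifact_onsets, artifact_len):
    """Replace the samples in each artifact window with a straight line
    interpolated between the clean samples bordering the window."""
    cleaned = np.asarray(signal, dtype=float).copy()
    for onset in artifact_onsets:
        start = max(onset - 1, 0)                          # last clean sample before
        stop = min(onset + artifact_len, len(cleaned) - 1)  # first clean sample after
        cleaned[start:stop + 1] = np.linspace(cleaned[start], cleaned[stop],
                                              stop - start + 1)
    return cleaned

# Toy trace: flat baseline with an artifact spike at samples 5-7.
sig = np.array([0, 0, 0, 0, 0, 9, 9, 9, 0, 0], dtype=float)
print(remove_stimulus_artifact(sig, artifact_onsets=[5], artifact_len=3))
```

As the abstract notes, this only works when the artifact window is shorter than both the inter-stimulus interval and the action potential, so that the bordering samples really are clean.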
Olson, Ingrid R.
2012-01-01
Famous people and artifacts are referred to as “unique entities” (UEs) due to the unique nature of the knowledge we have about them. Past imaging and lesion experiments have implicated the anterior temporal lobes (ATLs) as having a special role in the processing of UEs. It has remained unclear which attributes of UEs were responsible for the observed effects in imaging experiments. In this study, we investigated which factors of UEs influence brain activity. In a training paradigm, we systematically varied the uniqueness of semantic associations, the presence/absence of a proper name, and the number of semantic associations to determine factors modulating activity in regions subserving the processing of UEs. We found that a conjunction of unique semantic information and proper names modulated activity within a section of the left ATL. Overall, the processing of UEs involved a wider left-hemispheric cortical network. Within these regions, brain activity was significantly affected by the unique semantic attributes, especially in the presence of a proper name, but we could not find evidence for an effect of the number of semantic associations. Findings are discussed in regard to current models of ATL function, the neurophysiology of semantics, and social cognitive processing. PMID:22021913
Inborn and experience-dependent models of categorical brain organization. A position paper
Gainotti, Guido
2015-01-01
The present review aims to summarize the debate in contemporary neuroscience between inborn and experience-dependent models of conceptual representations, which goes back to the description of category-specific semantic disorders for biological and artifact categories. Experience-dependent models suggest that categorical disorders are the by-product of the differential weighting of different sources of knowledge in the representation of biological and artifact categories. These models maintain that semantic disorders are not really category-specific, because they do not respect the boundaries between different categories. They also argue that the brain structures which are disrupted in a given type of category-specific semantic disorder should correspond to the areas of convergence of the sensory-motor information which play a major role in the construction of that category. Furthermore, they provide a simple interpretation of gender-related categorical effects and are supported by studies assessing the importance of prior experience in the cortical representation of objects. On the other hand, inborn models maintain that category-specific semantic disorders reflect the disruption of innate brain networks, which are shaped by natural selection to allow rapid identification of objects that are very relevant for survival. From the empirical point of view, these models are mainly supported by observations of blind subjects, which suggest that visual experience is not necessary for the emergence of category-specificity in the ventral stream of visual processing. The weight of the data supporting experience-dependent and inborn models is thoroughly discussed, stressing the fact that observations made in blind subjects are still the subject of intense debate. It is concluded that at the present state of knowledge it is not possible to choose between experience-dependent and inborn models of conceptual representations. PMID:25667570
Accelerating Cancer Systems Biology Research through Semantic Web Technology
Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S.
2012-01-01
Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute’s caBIG®, so users can not only interact with the DMR through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers’ intellectual property. PMID:23188758
Benedetti, L. R.; Holder, J. P.; Perkins, M.; ...
2016-02-26
We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. Furthermore, we have developed a device that can be added to the framing camera head to prevent these artifacts.
Functional fixedness in a technologically sparse culture.
German, Tim P; Barrett, H Clark
2005-01-01
Problem solving can be inefficient when the solution requires subjects to generate an atypical function for an object and the object's typical function has been primed. Subjects become "fixed" on the design function of the object, and problem solving suffers relative to control conditions in which the object's function is not demonstrated. In the current study, such functional fixedness was demonstrated in a sample of adolescents (mean age of 16 years) among the Shuar of Ecuadorian Amazonia, whose technologically sparse culture provides limited access to large numbers of artifacts with highly specialized functions. This result suggests that design function may universally be the core property of artifact concepts in human semantic memory.
Chang, Hing-Chiu; Hui, Edward S; Chiu, Pui-Wai; Liu, Xiaoxi; Chen, Nan-Kuei
2018-05-01
A three-dimensional (3D) multiplexed sensitivity encoding and reconstruction (3D-MUSER) algorithm is proposed to reduce aliasing artifacts and signal corruption caused by inter-shot 3D phase variations in 3D diffusion-weighted echo planar imaging (DW-EPI). 3D-MUSER extends the original framework of multiplexed sensitivity encoding (MUSE) to a hybrid k-space-based reconstruction, thereby enabling the correction of inter-shot 3D phase variations. A 3D single-shot EPI navigator echo was used to measure inter-shot 3D phase variations. The performance of 3D-MUSER was evaluated by analyses of point-spread function (PSF), signal-to-noise ratio (SNR), and artifact levels. The efficacy of phase correction using 3D-MUSER for different slab thicknesses and b-values was investigated. Simulations showed that 3D-MUSER could eliminate artifacts because of through-slab phase variation and reduce noise amplification because of SENSE reconstruction. All aliasing artifacts and signal corruption in 3D interleaved DW-EPI acquired with different slab thicknesses and b-values were reduced by our new algorithm. A near-whole brain single-slab 3D DTI with 1.3-mm isotropic voxel acquired at 1.5T was successfully demonstrated. 3D phase correction for 3D interleaved DW-EPI data is made possible by 3D-MUSER, thereby improving feasible slab thickness and maximum feasible b-value. Magn Reson Med 79:2702-2712, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Kielar, Aneta; Joanisse, Marc F
2011-01-01
Theories of morphological processing differ on the issue of how lexical and grammatical information are stored and accessed. A key point of contention is whether complex forms are decomposed during recognition (e.g., establish+ment), compared to forms that cannot be analyzed into constituent morphemes (e.g., apartment). In the present study, we examined these issues with respect to English derivational morphology by measuring ERP responses during a cross-modal priming lexical decision task. ERP priming effects for semantically and phonologically transparent derived words (government-govern) were compared to those of semantically opaque derived words (apartment-apart) as well as "quasi-regular" items that represent intermediate cases of morphological transparency (dresser-dress). Additional conditions independently manipulated semantic and phonological relatedness in non-derived words (semantics: couch-sofa; phonology: panel-pan). The degree of N400 ERP priming to morphological forms varied depending on the amount of semantic and phonological overlap between word types, rather than respecting a bivariate distinction between derived and opaque forms. Moreover, these effects could not be accounted for by semantic or phonological relatedness alone. The findings support the theory that morphological relatedness is graded rather than absolute and depends on the joint contribution of form and meaning overlap. Copyright © 2010 Elsevier Ltd. All rights reserved.
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose: To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods: Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results: It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion: For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
Ringe, Kristina Imeen; Luetkens, Julian A; Fimmers, Rolf; Hammerstingl, Renate Maria; Layer, Günter; Maurer, Martin H; Nähle, Claas Philip; Michalik, Sabine; Reimer, Peter; Schraml, Christina; Schreyer, Andreas G; Stumpp, Patrick; Vogl, Thomas J; Wacker, Frank K; Willinek, Winfried; Kukuk, Guido Mattias
2018-04-01
To assess the interrater agreement and reliability of experienced abdominal radiologists in the characterization and grading of arterial phase gadoxetate disodium-related respiratory motion artifact on liver MRI. This prospective multicenter study was initiated by the working group for abdominal imaging within the German Roentgen Society (DRG), and approved by the local IRB of each participating center. 11 board-certified radiologists independently reviewed 40 gadoxetate disodium-enhanced liver MRI datasets. Motion artifacts in the arterial phase were assessed on a 5-point scale. Interrater agreement and reliability were calculated using the intraclass correlation coefficient (ICC) and Kendall coefficient of concordance (W), with p < 0.05 deemed significant. The ICC for interrater agreement and reliability were 0.983 (CI 0.973 - 0.990) and 0.985 (CI 0.978 - 0.991), respectively (both p < 0.0001), indicating excellent agreement and reliability. Kendall's W for interrater agreement was 0.865. A severe motion artifact, defined as a mean motion score ≥ 4 in the arterial phase, was observed in 12 patients. In these specific cases, a motion score ≥ 4 was assigned by all readers in 75 % (n = 9/12 cases). Differentiation and grading of arterial phase respiratory motion artifact is possible with a high level of inter-/intrarater agreement and interrater reliability, which is crucial for assessing the incidence of this phenomenon in larger multicenter studies. · Inter- and intrarater agreement for motion artifact scoring is excellent among experienced readers. · Interrater reliability for motion artifact scoring is excellent among experienced readers. · Characterization of severe motion artifacts proved feasible in this multicenter study. · Ringe KI, Luetkens JA, Fimmers R et al. Characterization of Severe Arterial Phase Respiratory Motion Artifact on Gadoxetate Disodium-Enhanced MRI - Assessment of Interrater Agreement and Reliability.
Fortschr Röntgenstr 2017; 190: 341 - 347. © Georg Thieme Verlag KG Stuttgart · New York.
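The Kendall coefficient of concordance (W) reported in the record above can be computed from an items-by-raters score matrix in a few lines. This is a generic sketch of the statistic, not the study's analysis code; it ignores tie correction, which matters for 5-point-scale ratings like these.

```python
import numpy as np

def kendalls_w(ratings):
    """Kendall's coefficient of concordance W for an (items x raters) matrix.

    Each rater's scores are converted to ranks; W = 12*S / (m^2 * (n^3 - n)),
    where S is the sum of squared deviations of the per-item rank sums,
    m is the number of raters and n the number of items.
    Ties are broken arbitrarily (no tie correction) in this sketch.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, m = ratings.shape
    # Rank each rater's column (1 = lowest score).
    ranks = ratings.argsort(axis=0).argsort(axis=0) + 1
    rank_sums = ranks.sum(axis=1)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Perfect agreement between two raters over three cases gives W = 1.0.
w = kendalls_w([[1, 1], [2, 2], [3, 3]])  # -> 1.0
```

W ranges from 0 (no agreement) to 1 (perfect agreement), so the reported 0.865 indicates strong concordance among the 11 readers.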
Kuya, Keita; Shinohara, Yuki; Kato, Ayumi; Sakamoto, Makoto; Kurosaki, Masamichi; Ogawa, Toshihide
2017-03-01
The aim of this study is to assess the value of adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) for reduction of metal artifacts due to dental hardware in carotid CT angiography (CTA). Thirty-seven patients with dental hardware who underwent carotid CTA were included. CTA was performed with a GE Discovery CT750 HD scanner and reconstructed with filtered back projection (FBP), ASIR, and MBIR. We measured the standard deviation at the cervical segment of the internal carotid artery that was affected most by dental metal artifacts (SD1) and the standard deviation at the common carotid artery that was not affected by the artifact (SD2). We calculated the artifact index (AI) as follows: AI = [(SD1)^2 - (SD2)^2]^(1/2), and compared each AI for FBP, ASIR, and MBIR. Visual assessment of the internal carotid artery was also performed by two neuroradiologists using a five-point scale for each axial and reconstructed sagittal image. The inter-observer agreement was analyzed using weighted kappa analysis. MBIR significantly improved AI compared with FBP and ASIR (p < 0.001, each). We found no significant difference in AI between FBP and ASIR (p = 0.502). The visual score of MBIR was significantly better than those of FBP and ASIR (p < 0.001, each), whereas the scores of ASIR were the same as those of FBP. Kappa values indicated good inter-observer agreements in all reconstructed images (0.747-0.778). MBIR resulted in a significant reduction in artifact from dental hardware in carotid CTA.
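The artifact index defined in the abstract above is straightforward to reproduce. In the sketch below, only the formula AI = sqrt(SD1^2 - SD2^2) comes from the abstract; the ROI standard deviations are invented values for illustration.

```python
import numpy as np

def artifact_index(sd_artifact, sd_reference):
    """Artifact index AI = sqrt(SD1^2 - SD2^2).

    sd_artifact:  standard deviation in the artifact-affected ROI (SD1)
    sd_reference: standard deviation in the unaffected reference ROI (SD2)
    A higher AI means more artifact-induced noise beyond the baseline.
    """
    return float(np.sqrt(sd_artifact ** 2 - sd_reference ** 2))

# Hypothetical ROI measurements (illustrative, not study data).
ai = artifact_index(50.0, 30.0)  # -> 40.0
```

Subtracting the squared reference SD isolates the noise contributed by the metal artifact from the baseline image noise.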
Image-Based Techniques for Digitizing Environments and Artifacts
2003-01-01
Automatic classification of artifactual ICA-components for artifact removal in EEG signals.
Winkler, Irene; Haufe, Stefan; Tangermann, Michael
2011-08-02
Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g., for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data of the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Based on six features only, the optimized linear classifier performed on par with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components.
We propose a universal and efficient classifier of ICA components for the subject independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye- and muscle artifacts. Its performance and generalization ability is demonstrated on data of different EEG studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Soyoung
Purpose: To investigate the use of local noise power spectrum (NPS) to characterize image noise and wavelet analysis to isolate defective pixels and inter-subpanel flat-fielding artifacts for quantitative quality assurance (QA) of electronic portal imaging devices (EPIDs). Methods: A total of 93 image sets including custom-made bar-pattern images and open exposure images were collected from four iViewGT a-Si EPID systems over three years. Global quantitative metrics such as modulation transfer function (MTF), NPS, and detective quantum efficiency (DQE) were computed for each image set. Local NPS was also calculated for individual subpanels by sampling regions of interest within each subpanel of the EPID. The 1D NPS, obtained by radially averaging the 2D NPS, was fitted to a power-law function. The r-square value of the linear regression analysis was used as a singular metric to characterize the noise properties of individual subpanels of the EPID. The sensitivity of the local NPS was first compared with the global quantitative metrics using historical image sets. It was then compared with two commonly used commercial QA systems with images collected after applying two different EPID calibration methods (single-level gain and multilevel gain). To detect isolated defective pixels and inter-subpanel flat-fielding artifacts, Haar wavelet transform was applied on the images. Results: Global quantitative metrics including MTF, NPS, and DQE showed little change over the period of data collection. On the contrary, a strong correlation between the local NPS (r-square values) and the variation of the EPID noise condition was observed.
The local NPS analysis indicated image quality improvement, with the r-square values increasing from 0.80 ± 0.03 (before calibration) to 0.85 ± 0.03 (after single-level gain calibration) and to 0.96 ± 0.03 (after multilevel gain calibration), while the commercial QA systems failed to distinguish the image quality improvement between the two calibration methods. With wavelet analysis, defective pixels and inter-subpanel flat-fielding artifacts were clearly identified as spikes after thresholding the inversely transformed images. Conclusions: The proposed local NPS (r-square values) showed superior sensitivity to the noise level variations of individual subpanels compared with global quantitative metrics such as MTF, NPS, and DQE. Wavelet analysis was effective in detecting isolated defective pixels and inter-subpanel flat-fielding artifacts. The proposed methods are promising for the early detection of imaging artifacts of EPIDs.
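The single-number noise metric described above (the r-square of a power-law fit to the radially averaged 1D NPS) can be sketched as follows. The radial binning and the log-log least-squares fit are assumptions about the implementation, not the authors' code.

```python
import numpy as np

def nps_rsquare(nps2d):
    """R-square of a power-law fit to the radially averaged NPS.

    The 2D NPS is binned by integer radial frequency about the center;
    the 1D profile is fitted with log(NPS) = log(a) - b*log(f) by least
    squares, and the r-square of that linear fit is returned as a single
    noise-quality metric (higher = closer to a clean power-law spectrum).
    """
    n = nps2d.shape[0]
    y, x = np.indices(nps2d.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    counts = np.bincount(r.ravel())
    profile = np.bincount(r.ravel(), nps2d.ravel()) / np.maximum(counts, 1)
    f = np.arange(1, min(len(profile), n // 2))      # skip the DC bin
    logf, logp = np.log(f), np.log(profile[f])
    slope, intercept = np.polyfit(logf, logp, 1)
    resid = logp - (slope * logf + intercept)
    return 1.0 - resid.var() / logp.var()
```

A subpanel whose noise follows a clean power-law spectrum yields an r-square near 1; structured noise (e.g. flat-fielding residue) pulls the value down, which is what makes it usable as a QA flag.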
Compère, Laurie; Rari, Eirini; Gallarda, Thierry; Assens, Adèle; Nys, Marion; Coussinoux, Sandrine; Machefaux, Sébastien; Piolino, Pascale
2018-01-01
A recently tested hypothesis suggests that inter-individual differences in episodic autobiographical memory (EAM) are better explained by individual identification of typical features of a gender identity than by sex. This study aimed to test this hypothesis by investigating sex and gender related differences not only in EAM but also during retrieval of more abstract self-knowledge (i.e., semantic autobiographical memory, SAM, and conceptual self, CS), and considering past and future perspectives. No sex-related differences were identified, but regardless of the sex, feminine gender identity was associated with clear differences in emotional aspects that were expressed in both episodic and more abstract forms of AM, and in the past and future perspectives, while masculine gender identity was associated with limited effects. In conclusion, our results support the hypothesis that inter-individual differences in AM are better explained by gender identity than by sex, extending this assumption to both episodic and semantic forms of AM and future thinking. Copyright © 2017 Elsevier Inc. All rights reserved.
Cohen, Trevor; Schvaneveldt, Roger W; Rindflesch, Thomas C
2009-11-14
Corpus-derived distributional models of semantic distance between terms have proved useful in a number of applications. For both theoretical and practical reasons, it is desirable to extend these models to encode discrete concepts and the ways in which they are related to one another. In this paper, we present a novel vector space model that encodes semantic predications derived from MEDLINE by the SemRep system into a compact spatial representation. The associations captured by this method are of a different and complementary nature to those derived by traditional vector space models, and the encoding of predication types presents new possibilities for knowledge discovery and information retrieval.
A Robust Post-Processing Workflow for Datasets with Motion Artifacts in Diffusion Kurtosis Imaging
Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X.; Wan, Mingxi
2014-01-01
Purpose: The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). Materials and Methods: The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifacts rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using the local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejected artifacts with information of gradient directions and b values for the parameter estimation was investigated by using mean square error (MSE). The variance of noise was used as the criterion for MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and measurements in regions of interest on 36 DKI datasets, including 18 artifact-free (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). Results: The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient (p<0.05). This indicated that LPCC was more sensitive in detecting motion artifacts. MSEs of all derived parameters from the reserved data after the artifacts rejection were smaller than the variance of the noise. This suggested that the influence of rejected artifacts was less than the influence of noise on the precision of derived parameters. The proposed workflow improved the image quality and reduced the measurement biases significantly on motion-corrupted datasets (p<0.05).
Conclusion: The proposed post-processing workflow reliably improved the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets. The workflow provided an effective post-processing method for clinical applications of DKI in subjects with involuntary movements. PMID:24727862
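The LPCC-based artifact rejection described above can be sketched as follows. The patch size and rejection threshold below are hypothetical choices for illustration; the paper's actual parameters are not given in the abstract.

```python
import numpy as np

def local_pearson_cc(img, ref, win=8):
    """Mean local Pearson correlation between two 2D images.

    The images are tiled into win x win patches; Pearson's r is computed
    per patch and averaged. Local correlation is more sensitive to
    spatially confined motion artifacts than one global correlation,
    which is the idea behind LPCC. Window size is an assumed value.
    """
    scores = []
    for i in range(0, img.shape[0] - win + 1, win):
        for j in range(0, img.shape[1] - win + 1, win):
            a = img[i:i + win, j:j + win].ravel()
            b = ref[i:i + win, j:j + win].ravel()
            if a.std() > 0 and b.std() > 0:
                scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores)) if scores else 0.0

def reject_corrupted(volumes, ref, thresh=0.6):
    """Keep only volumes whose LPCC against an artifact-free reference
    exceeds a (hypothetical) threshold."""
    return [v for v in volumes if local_pearson_cc(v, ref) >= thresh]
```

Rejected volumes are simply dropped before tensor estimation; the abstract reports that the precision cost of discarding them is smaller than the noise floor.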
ERIC Educational Resources Information Center
Gruenenfelder, Thomas M.; Recchia, Gabriel; Rubin, Tim; Jones, Michael N.
2016-01-01
We compared the ability of three different contextual models of lexical semantic memory (BEAGLE, Latent Semantic Analysis, and the Topic model) and of a simple associative model (POC) to predict the properties of semantic networks derived from word association norms. None of the semantic models were able to accurately predict all of the network…
Model Driven Engineering with Ontology Technologies
NASA Astrophysics Data System (ADS)
Staab, Steffen; Walter, Tobias; Gröner, Gerd; Parreiras, Fernando Silva
Ontologies constitute formal models of some aspect of the world that may be used for drawing interesting logical conclusions even for large models. Software models capture relevant characteristics of a software artifact to be developed, yet, most often these software models have limited formal semantics, or the underlying (often graphical) software language varies from case to case in a way that makes it hard if not impossible to fix its semantics. In this contribution, we survey the use of ontology technologies for software modeling in order to carry over advantages from ontology technologies to the software modeling domain. It will turn out that ontology-based metamodels constitute a core means for exploiting expressive ontology reasoning in the software modeling domain while remaining flexible enough to accommodate varying needs of software modelers.
Fieselmann, Andreas; Dennerlein, Frank; Deuerling-Zheng, Yu; Boese, Jan; Fahrig, Rebecca; Hornegger, Joachim
2011-06-21
Filtered backprojection is the basis for many CT reconstruction tasks. It assumes constant attenuation values of the object during the acquisition of the projection data. Reconstruction artifacts can arise if this assumption is violated. For example, contrast flow in perfusion imaging with C-arm CT systems, which have acquisition times of several seconds per C-arm rotation, can cause this violation. In this paper, we derived and validated a novel spatio-temporal model to describe these kinds of artifacts. The model separates the temporal dynamics due to contrast flow from the scan and reconstruction parameters. We introduced derivative-weighted point spread functions to describe the spatial spread of the artifacts. The model allows prediction of reconstruction artifacts for given temporal dynamics of the attenuation values. Furthermore, it can be used to systematically investigate the influence of different reconstruction parameters on the artifacts. We have shown that with optimized redundancy weighting function parameters the spatial spread of the artifacts around a typical arterial vessel can be reduced by about 70%. Finally, an inversion of our model could be used as the basis for novel dynamic reconstruction algorithms that further minimize these artifacts.
Trails of meaning construction: Symbolic artifacts engage the social brain.
Tylén, Kristian; Philipsen, Johanne Stege; Roepstorff, Andreas; Fusaroli, Riccardo
2016-07-01
Symbolic artifacts present a challenge to theories of neurocognitive processing due to their hybrid nature: they are at the same time physical objects and vehicles of intangible social meanings. While their physical properties can be read off their perceptual appearance, the meaning of symbolic artifacts depends on the perceiver's interpretative attitude and embeddedness in cultural practices. In this study, participants built models from LEGO bricks to illustrate their understanding of abstract concepts. They were then scanned with fMRI while being presented with photographs of their own and others' models. When participants attended to the meaning of the models in contrast to their bare physical properties, we observed activations in mPFC and TPJ, areas often associated with social cognition, and IFG, possibly related to semantics. When contrasting own and others' models, we also found activations in precuneus, an area associated with autobiographical memory and agency, while looking at one's own collective models yielded interaction effects in rostral ACC, right IFG and left Insula. Interestingly, variability in the insula was predicted by individual differences in participants' feeling of relatedness to their fellow group members during LEGO construction activity. Our findings support a view of symbolic artifacts as neuro-cognitive trails of human social interactions. Copyright © 2016 Elsevier Inc. All rights reserved.
Topic Modeling of NASA Space System Problem Reports: Research in Practice
NASA Technical Reports Server (NTRS)
Layman, Lucas; Nikora, Allen P.; Meek, Joshua; Menzies, Tim
2016-01-01
Problem reports at NASA are similar to bug reports: they capture defects found during test, post-launch operational anomalies, and document the investigation and corrective action of the issue. These artifacts are a rich source of lessons learned for NASA, but are expensive to analyze since problem reports consist primarily of natural language text. We apply topic modeling to a corpus of NASA problem reports to extract trends in testing and operational failures. We collected 16,669 problem reports from six NASA space flight missions and applied Latent Dirichlet Allocation topic modeling to the document corpus. We analyze the most popular topics within and across missions, and how popular topics changed over the lifetime of a mission. We find that hardware material and flight software issues are common during the integration and testing phase, while ground station software and equipment issues are more common during the operations phase. We identify a number of challenges in topic modeling for trend analysis: 1) the process of selecting the topic modeling parameters lacks definitive guidance, 2) defining semantically meaningful topic labels requires nontrivial effort and domain expertise, 3) topic models derived from the combined corpus of the six missions were biased toward the larger missions, and 4) topics must be semantically distinct as well as cohesive to be useful. Nonetheless, topic modeling can identify problem themes within missions and across mission lifetimes, providing useful feedback to engineers and project managers.
Semantic annotation of consumer health questions.
Kilicoglu, Halil; Ben Abacha, Asma; Mrabet, Yassine; Shooshan, Sonya E; Rodriguez, Laritza; Masterton, Kate; Demner-Fushman, Dina
2018-02-06
Consumers increasingly use online resources for their health information needs. While current search engines can address these needs to some extent, they generally do not take into account that most health information needs are complex and can only fully be expressed in natural language. Consumer health question answering (QA) systems aim to fill this gap. A major challenge in developing consumer health QA systems is extracting relevant semantic content from the natural language questions (question understanding). To develop effective question understanding tools, question corpora semantically annotated for relevant question elements are needed. In this paper, we present a two-part consumer health question corpus annotated with several semantic categories: named entities, question triggers/types, question frames, and question topic. The first part (CHQA-email) consists of relatively long email requests received by the U.S. National Library of Medicine (NLM) customer service, while the second part (CHQA-web) consists of shorter questions posed to the MedlinePlus search engine as queries. Each question has been annotated by two annotators. The annotation methodology is largely the same between the two parts of the corpus; however, we also explain and justify the differences between them. Additionally, we provide information about corpus characteristics, inter-annotator agreement, and our attempts to measure annotation confidence in the absence of adjudication of annotations. The resulting corpus consists of 2614 questions (CHQA-email: 1740, CHQA-web: 874). Problems are the most frequent named entities, while treatment and general information questions are the most common question types. Inter-annotator agreement was generally modest: question types and topics yielded the highest agreement, while the agreement for more complex frame annotations was lower. Agreement in CHQA-web was consistently higher than that in CHQA-email.
Pairwise inter-annotator agreement proved most useful in estimating annotation confidence. To our knowledge, our corpus is the first focusing on annotation of uncurated consumer health questions. It is currently used to develop machine learning-based methods for question understanding. We make the corpus publicly available to stimulate further research on consumer health QA.
New methods for analyzing semantic graph based assessments in science education
NASA Astrophysics Data System (ADS)
Vikaros, Lance Steven
This research investigated how the scoring of semantic graphs (known to many as concept maps) could be improved and automated in order to address issues of inter-rater reliability and scalability. As part of the NSF-funded SENSE-IT project to introduce secondary school science students to sensor networks (NSF Grant No. 0833440), semantic graphs illustrating how temperature change affects water ecology were collected from 221 students across 16 schools. The graphing task did not constrain students' use of terms, as is often done in semantic graph based assessment due to coding and scoring concerns. The graphing software used provided real-time feedback to help students learn how to construct graphs, stay on topic and effectively communicate ideas. The collected graphs were scored by human raters using assessment methods expected to boost reliability, which included adaptations of traditional holistic and propositional scoring methods, use of expert raters, topical rubrics, and criterion graphs. High levels of inter-rater reliability were achieved, demonstrating that vocabulary constraints may not be necessary after all. To investigate a new approach to automating the scoring of graphs, thirty-two graph features characterizing graphs' structure, semantics, configuration and process of construction were then used to predict human raters' scoring of graphs, in order to identify feature patterns correlated with raters' evaluations of graphs' topical accuracy and complexity. Results led to the development of a regression model able to predict raters' scoring with 77% accuracy; cross-validation tests estimated 46% accuracy when the model is used to score new sets of graphs. Although such performance is comparable to that of other graph- and essay-based scoring systems, cross-context testing of the model and the methods used to develop it would be needed before it could be recommended for widespread use.
Still, the findings suggest techniques for improving the reliability and scalability of semantic graph based assessments without requiring constraint of how ideas are expressed.
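The prediction approach described above, regressing human raters' scores on numeric graph features and estimating generalization via cross-validation, can be sketched as follows. The feature names, data, and model choice (ordinary least squares) are illustrative assumptions, not the SENSE-IT dataset or the authors' exact model.

```python
# Sketch: predict rater scores from graph features, with cross-validation
# to estimate accuracy on unseen graphs. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_graphs = 40
# hypothetical feature columns: node count, edge count, on-topic term ratio
features = rng.random((n_graphs, 3))
# synthetic rater scores loosely driven by the features, plus noise
scores = features @ np.array([2.0, 1.0, 3.0]) + rng.normal(0, 0.1, n_graphs)

model = LinearRegression()
cv_r2 = cross_val_score(model, features, scores, cv=5, scoring="r2")
model.fit(features, scores)
print("mean cross-validated R^2:", cv_r2.mean())
```

The gap between in-sample fit and cross-validated performance mirrors the 77% vs. 46% accuracy figures reported in the abstract.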
NASA Astrophysics Data System (ADS)
Sunitha, A.; Babu, G. Suresh
2014-11-01
Recent decision-making efforts in the area of public healthcare systems have been strongly inspired and influenced by the entry of ontology. Ontology-driven systems result in the effective implementation of healthcare strategies for policy makers. The central source of knowledge is the ontology containing all the relevant domain concepts, such as locations, diseases and environments, and their domain-sensitive inter-relationships, which are the prime objective, concern and motivation behind this paper. The paper further focuses on the development of a semantic knowledge base for a public healthcare system. It describes the approach and methodologies used to bring out a novel conceptual theme that establishes a firm linkage between three different ontologies, related to diseases, places and environments, in one integrated platform. This platform correlates the real-time mechanisms prevailing within the semantic knowledge base and establishes their inter-relationships for the first time in India. This is hoped to formulate a strong foundation for the much-awaited basic need for a meaningful healthcare decision-making system in the country. Introduction through a wide range of best practices facilitates the adoption of this approach for better appreciation, understanding and long-term outcomes in the area. The methods and approach illustrated in the paper relate to health mapping methods, reusability of health applications, and interoperability issues based on mapping data attributes to ontology concepts to generate semantically integrated data driving an inference engine for user-interfaced semantic queries.
Automatic Classification of Artifactual ICA-Components for Artifact Removal in EEG Signals
2011-01-01
Background Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. Methods We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources, and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used different channel setups and new subjects. Results Based on six features only, the optimized linear classifier performed on a level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data from the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data from the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components.
Conclusions We propose a universal and efficient classifier of ICA components for the subject independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye- and muscle artifacts. Its performance and generalization ability is demonstrated on data of different EEG studies. PMID:21810266
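The two-stage strategy above, an ICA decomposition followed by a linear classifier over per-component features, can be sketched with off-the-shelf pieces. Note the substitutions: FastICA stands in for TDSEP, LogisticRegression for the LPM-selected linear classifier, and the signals, features, and "expert" labels are all synthetic.

```python
# Sketch: decompose multichannel signals with ICA, then flag components
# with a linear classifier trained on per-component features.
# FastICA and LogisticRegression are stand-ins for TDSEP and the LPM.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
# two oscillatory "brain" sources plus one high-amplitude "artifact" drift
sources = np.c_[np.sin(2 * np.pi * 10 * t),
                np.sin(2 * np.pi * 6 * t),
                5 * t]
mixed = sources @ rng.random((3, 3))       # simulated electrode mixture

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(mixed)      # estimated source time courses

# toy per-component features (variance, peak amplitude); the paper uses an
# optimized six-feature subset spanning frequency, space, and time
feats = np.c_[components.var(axis=0), np.abs(components).max(axis=0)]
labels = np.array([0, 0, 1])               # pretend expert ratings
clf = LogisticRegression().fit(feats, labels)
print("artifact flags:", clf.predict(feats))
```

Components flagged as artifactual would then be zeroed out before projecting back to the electrode space.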
Posatskiy, A O; Chau, T
2012-04-01
Mechanomyography (MMG) is an important kinesiological tool and potential communication pathway for individuals with disabilities. However, MMG is highly susceptible to contamination by motion artifact due to limb movement. A better understanding of the nature of this contamination and its effects on different sensing methods is required to inform robust MMG sensor design. Therefore, in this study, we recorded MMG from the extensor carpi ulnaris of six able-bodied participants using three different co-located condenser microphone and accelerometer pairings. Contractions at 30% MVC were recorded with and without a shaker-induced single-frequency forearm motion artifact delivered via a custom test rig. Using a signal-to-signal-plus-noise ratio and the adaptive Neyman curve-based statistic, we found that microphone-derived MMG spectra were significantly less influenced by motion artifact than corresponding accelerometer-derived spectra (p⩽0.05). However, non-vanishing motion artifact harmonics were present in both spectra, suggesting that simple bandpass filtering may not remove artifact influences permeating into typical MMG bands of interest. Our results suggest that condenser microphones are preferred for MMG recordings when the mitigation of motion artifact effects is important. Copyright © 2011. Published by Elsevier Ltd.
Artifact removal in the context of group ICA: a comparison of single-subject and group approaches
Du, Yuhui; Allen, Elena A.; He, Hao; Sui, Jing; Wu, Lei; Calhoun, Vince D.
2018-01-01
Independent component analysis (ICA) has been widely applied to identify intrinsic brain networks from fMRI data. Group ICA computes group-level components from all data and subsequently estimates individual-level components to recapture inter-subject variability. However, the best approach to handling artifacts, which may vary widely among subjects, is not yet clear. In this work, we study and compare two ICA approaches for artifact removal. One approach, recommended in recent work by the Human Connectome Project, first performs ICA on individual subject data to remove artifacts, and then applies a group ICA on the cleaned data from all subjects. We refer to this approach as Individual ICA based artifacts Removal Plus Group ICA (IRPG). A second proposed approach, called Group Information Guided ICA (GIG-ICA), performs ICA on group data, then removes the group-level artifact components, and finally performs subject-specific ICAs using the group-level non-artifact components as spatial references. We used simulations to evaluate the two approaches with respect to the effects of data quality, data quantity, variable numbers of sources among subjects, and spatially unique artifacts. Resting-state test-retest datasets were also employed to investigate the reliability of functional networks. Results from simulations demonstrate that GIG-ICA outperforms IRPG, even when single-subject artifact removal is perfect and when individual subjects have spatially unique artifacts. Experiments using test-retest data suggest that GIG-ICA provides more reliable functional networks. Based on its high estimation accuracy, ease of implementation, and the high reliability of the resulting functional networks, we find GIG-ICA to be a promising approach. PMID:26859308
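The last step of the GIG-ICA pipeline above, recovering subject-specific results from group-level non-artifact maps, can be illustrated with a much simpler surrogate: a least-squares regression of subject data onto the group spatial maps (a dual-regression-style step; the actual GIG-ICA optimization is more involved). All data here are synthetic.

```python
# Simplified sketch: use group-level non-artifact spatial maps as
# references to recover subject-specific time courses via least squares.
# This approximates, but is not, the GIG-ICA subject-level estimation.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_vox = 100, 50
group_maps = rng.normal(size=(3, n_vox))      # group non-artifact components
true_tc = rng.normal(size=(n_time, 3))        # ground-truth time courses
subject_data = true_tc @ group_maps + 0.1 * rng.normal(size=(n_time, n_vox))

# regress subject data on the group spatial maps (voxels x components)
tc, *_ = np.linalg.lstsq(group_maps.T, subject_data.T, rcond=None)
subject_tc = tc.T                              # (time, components)
print("recovered time-course matrix:", subject_tc.shape)
```

Because the group-level artifact components were discarded before this step, the subject-level estimates never re-introduce them, which is the core idea the abstract credits for GIG-ICA's robustness.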
Hodgetts, Carl J; Postans, Mark; Warne, Naomi; Varnava, Alice; Lawrence, Andrew D; Graham, Kim S
2017-09-01
Autobiographical memory (AM) is multifaceted, incorporating the vivid retrieval of contextual detail (episodic AM), together with semantic knowledge that infuses meaning and coherence into past events (semantic AM). While neuropsychological evidence highlights a role for the hippocampus and anterior temporal lobe (ATL) in episodic and semantic AM, respectively, it is unclear whether these constitute dissociable large-scale AM networks. We used high angular resolution diffusion-weighted imaging and constrained spherical deconvolution-based tractography to assess white matter microstructure in 27 healthy young adult participants who were asked to recall past experiences using word cues. Inter-individual variation in the microstructure of the fornix (the main hippocampal input/output pathway) related to the amount of episodic, but not semantic, detail in AMs, independent of memory age. Conversely, microstructure of the inferior longitudinal fasciculus, linking occipitotemporal regions with ATL, correlated with semantic, but not episodic, AMs. Further, these significant correlations remained when controlling for hippocampal and ATL grey matter volume, respectively. This striking correlational double dissociation supports the view that distinct, large-scale distributed brain circuits underpin context and concepts in AM. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Elimination of RF inhomogeneity effects in segmentation.
Agus, Onur; Ozkan, Mehmed; Aydin, Kubilay
2007-01-01
There are various methods proposed for the segmentation and analysis of MR images. However, the efficiency of these techniques is affected by various artifacts that occur in the imaging system. One of the most frequently encountered problems is the intensity variation across an image. Different methods are used to overcome this problem. In this paper we propose a method for the elimination of intensity artifacts in the segmentation of MR images. Inter-imager variations are also minimized to produce the same tissue segmentation for the same patient. A well-known multivariate classification algorithm, maximum likelihood, is employed to illustrate the enhancement in segmentation.
Children's and Adults' Abilities To Use Episodic and Semantic Information To Derive Inferences.
ERIC Educational Resources Information Center
Bourg, Tammy M.; And Others
A study investigated children's and adults' abilities to derive inferences requiring the integration of two episodic premises (episodic inferences) and inferences requiring the integration of one episodic premise with extra-stimulus, semantic knowledge. Subjects, 95 kindergarten, third grade, seventh grade, and college students, watched either an…
Crangle, Colleen E.; Perreau-Guimaraes, Marcos; Suppes, Patrick
2013-01-01
This paper presents a new method of analysis by which structural similarities between brain data and linguistic data can be assessed at the semantic level. It shows how to measure the strength of these structural similarities and so determine the relatively better fit of the brain data with one semantic model over another. The first model is derived from WordNet, a lexical database of English compiled by language experts. The second is given by the corpus-based statistical technique of latent semantic analysis (LSA), which detects relations between words that are latent or hidden in text. The brain data are drawn from experiments in which statements about the geography of Europe were presented auditorily to participants who were asked to determine their truth or falsity while electroencephalographic (EEG) recordings were made. The theoretical framework for the analysis of the brain and semantic data derives from axiomatizations of theories such as the theory of differences in utility preference. Using brain-data samples from individual trials time-locked to the presentation of each word, ordinal relations of similarity differences are computed for the brain data and for the linguistic data. In each case those relations that are invariant with respect to the brain and linguistic data, and are correlated with sufficient statistical strength, amount to structural similarities between the brain and linguistic data. Results show that many more statistically significant structural similarities can be found between the brain data and the WordNet-derived data than the LSA-derived data. The work reported here is placed within the context of other recent studies of semantics and the brain. The main contribution of this paper is the new method it presents for the study of semantics and the brain and the focus it permits on networks of relations detected in brain data and represented by a semantic model. PMID:23799009
Delta, theta, beta, and gamma brain oscillations index levels of auditory sentence processing.
Mai, Guangting; Minett, James W; Wang, William S-Y
2016-06-01
A growing number of studies indicate that multiple ranges of brain oscillations, especially the delta (δ, <4Hz), theta (θ, 4-8Hz), beta (β, 13-30Hz), and gamma (γ, 30-50Hz) bands, are engaged in speech and language processing. It is not clear, however, how these oscillations relate to functional processing at different linguistic hierarchical levels. Using scalp electroencephalography (EEG), the current study tested the hypothesis that phonological and the higher-level linguistic (semantic/syntactic) organizations during auditory sentence processing are indexed by distinct EEG signatures derived from the δ, θ, β, and γ oscillations. We analyzed specific EEG signatures while subjects listened to Mandarin speech stimuli in three different conditions in order to dissociate phonological and semantic/syntactic processing: (1) sentences comprising valid disyllabic words assembled in a valid syntactic structure (real-word condition); (2) utterances with morphologically valid syllables, but not constituting valid disyllabic words (pseudo-word condition); and (3) backward versions of the real-word and pseudo-word conditions. We tested four signatures: band power, EEG-acoustic entrainment (EAE), cross-frequency coupling (CFC), and inter-electrode renormalized partial directed coherence (rPDC). The results show significant effects of band power and EAE of δ and θ oscillations for phonological, rather than semantic/syntactic processing, indicating the importance of tracking δ- and θ-rate phonetic patterns during phonological analysis. We also found significant β-related effects, suggesting tracking of EEG to the acoustic stimulus (high-β EAE), memory processing (θ-low-β CFC), and auditory-motor interactions (20-Hz rPDC) during phonological analysis. For semantic/syntactic processing, we obtained a significant effect of γ power, suggesting lexical memory retrieval or processing grammatical word categories. 
Based on these findings, we confirm that scalp EEG signatures relevant to δ, θ, β, and γ oscillations can index phonological and semantic/syntactic organizations separately in auditory sentence processing, compatible with the view that phonological and higher-level linguistic processing engage distinct neural networks. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hussain, F.; Khairuddin, S.; Othman, H.
2017-01-01
An inter-laboratory comparison in relative humidity measurements among accredited laboratories has been coordinated by the National Metrology Institute of Malaysia. It was carried out to determine the performance of the participating laboratories. The objective of the comparison was to acknowledge the participating laboratories' competencies and to verify the levels of accuracy declared in their scopes of accreditation, in accordance with MS ISO/IEC 17025 accreditation. The measurement parameter involved was relative humidity over the range of 30-90 %rh at a nominal temperature of 50°C. Eight accredited laboratories participated in the inter-laboratory comparison. Two artifact units were circulated among the participants as transfer standards.
Semantic Elaboration: ERPs Reveal Rapid Transition from Novel to Known
ERIC Educational Resources Information Center
Bauer, Patricia J.; Jackson, Felicia L.
2015-01-01
Like language, semantic memory is productive: It extends itself through self-derivation of new information through logical processes such as analogy, deduction, and induction, for example. Though it is clear these productive processes occur, little is known about the time course over which newly self-derived information becomes incorporated into…
Enhancing acronym/abbreviation knowledge bases with semantic information.
Torii, Manabu; Liu, Hongfang
2007-10-11
In the biomedical domain, a terminology knowledge base that associates acronyms/abbreviations (denoted as SFs) with their definitions (denoted as LFs) is highly needed. For the construction of such a terminology knowledge base, we investigate the feasibility of building a system that automatically assigns semantic categories to LFs extracted from text. Given a collection of pairs (SF, LF) derived from text, we i) assess the coverage of LFs and pairs (SF, LF) in the UMLS and justify the need for a semantic category assignment system; and ii) automatically derive name phrases annotated with semantic categories and construct a system using machine learning. Utilizing ADAM, an existing collection of (SF, LF) pairs extracted from MEDLINE, our system achieved an f-measure of 87% when assigning eight UMLS-based semantic groups to LFs. The system has been incorporated into a web interface which integrates SF knowledge from multiple SF knowledge bases. Web site: http://gauss.dbb.georgetown.edu/liblab/SFThesurus.
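The category-assignment idea above, a supervised classifier mapping long forms to coarse semantic groups, can be sketched as a standard text-classification pipeline. The long forms, group labels, and the TF-IDF/naive-Bayes model here are invented stand-ins; the paper trains on UMLS-derived annotations with its own feature set.

```python
# Sketch: classify long forms (LFs) into coarse semantic groups with a
# bag-of-words text classifier. Examples and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

long_forms = [
    "magnetic resonance imaging",
    "computed tomography",
    "deoxyribonucleic acid",
    "ribonucleic acid",
]
groups = ["procedure", "procedure", "chemical", "chemical"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(long_forms, groups)
print(clf.predict(["messenger ribonucleic acid"]))
```

In practice the training pairs would come from LFs already covered by the UMLS, letting the classifier extend category labels to LFs the UMLS misses.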
Marelli, Marco; Baroni, Marco
2015-07-01
The present work proposes a computational model of morpheme combination at the meaning level. The model moves from the tenets of distributional semantics, and assumes that word meanings can be effectively represented by vectors recording their co-occurrence with other words in a large text corpus. Given this assumption, affixes are modeled as functions (matrices) mapping stems onto derived forms. Derived-form meanings can be thought of as the result of a combinatorial procedure that transforms the stem vector on the basis of the affix matrix (e.g., the meaning of nameless is obtained by multiplying the vector of name with the matrix of -less). We show that this architecture accounts for the remarkable human capacity of generating new words that denote novel meanings, correctly predicting semantic intuitions about novel derived forms. Moreover, the proposed compositional approach, once paired with a whole-word route, provides a new interpretative framework for semantic transparency, which is here partially explained in terms of ease of the combinatorial procedure and strength of the transformation brought about by the affix. Model-based predictions are in line with the modulation of semantic transparency on explicit intuitions about existing words, response times in lexical decision, and morphological priming. In conclusion, we introduce a computational model to account for morpheme combination at the meaning level. The model is data-driven, theoretically sound, and empirically supported, and it makes predictions that open new research avenues in the domain of semantic processing. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
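The core mechanism of the model above, an affix as a matrix that maps a stem's distributional vector onto a derived-form vector, reduces to a matrix-vector product. The dimensions and values below are arbitrary toy choices; real vectors would be corpus-derived and the affix matrix learned from stem/derived-form pairs.

```python
# Toy rendering of the affix-as-matrix model: the meaning of "nameless"
# as the -less matrix applied to the vector of "name". Values are invented.
import numpy as np

name_vec = np.array([0.9, 0.1, 0.4, 0.2])   # stand-in distributional vector

# The affix matrix would be estimated from corpus pairs like
# (hope, hopeless); here it is a fixed linear map for illustration.
less_matrix = np.array([[0.1, 0.0, 0.0, 0.0],
                        [0.0, 0.8, 0.2, 0.0],
                        [0.0, 0.2, 0.8, 0.0],
                        [0.0, 0.0, 0.0, 1.1]])

nameless_vec = less_matrix @ name_vec        # meaning of "nameless"

# cosine similarity between stem and derived form gives a graded notion
# of semantic transparency, one of the quantities the model exploits
cos = nameless_vec @ name_vec / (
    np.linalg.norm(nameless_vec) * np.linalg.norm(name_vec))
print("transparency (cosine):", round(float(cos), 3))
```

Under this scheme, a "strong" affix transformation yields low stem/derived-form similarity, which the paper links to intuitions of low semantic transparency.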
Inter and intrasite analyses of cultural materials from U20aw, Nye County, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hicks, P.A.; Pippin, L.C.; Henton, G.H.
1991-12-01
In the Spring of 1986 Desert Research Institute (DRI) conducted a Class III archaeological survey of Drill Hole U20aw on the Nevada Test Site, Nye County, Nevada. Seven archaeological sites were located during the course of this survey including two temporary camps, four lithic scatters, and a possible pinyon cache. This report presents the results of the analyses of the data derived from all sites investigated during the data recovery operations on Drill Hole U20aw. Detailed analyses were focused on the spatial distribution of artifacts and features within and between sites in the southern portion of the study area (26Ny4867, 26Ny4869, and 26Ny4871). These analyses indicate that 26Ny4871 served principally as a temporary camp, while the area around the canyonhead to the east (which includes 26Ny4867 and 26Ny4869) seems to have been used as a site for both temporary camps and special activity loci. Projectile point styles suggest that the area was occupied from the Early Archaic into the early Historic period. Analyses of the artifacts that were recovered indicate that obsidian was the preferred material for all classes of flaked stone tools. All stages of lithic reduction are represented on the sites, but core reduction and thinning of bifaces appear to have been the primary activities. Processing of floral foods is indicated by the presence of several ground stone artifacts. Pinyon nuts and other items appear to have been stored in the area of 26Ny4869 and to the north of the drill hole as evidenced by the presence of several rock features that may have served as caches.
Grammatical gender effects on cognition: implications for language learning and language use.
Vigliocco, Gabriella; Vinson, David P; Paganelli, Federica; Dworzynski, Katharina
2005-11-01
In 4 experiments, the authors addressed the mechanisms by which grammatical gender (in Italian and German) may come to affect meaning. In Experiments 1 (similarity judgments) and 2 (semantic substitution errors), the authors found Italian gender effects for animals but not for artifacts; Experiment 3 revealed no comparable effects in German. These results suggest that gender effects arise as a generalization from an established association between gender of nouns and sex of human referents, extending to nouns referring to sexuated entities. Across languages, such effects are found when the language allows for easy mapping between gender of nouns and sex of human referents (Italian) but not when the mapping is less transparent (German). A final experiment provided further constraints: These effects during processing arise at a lexical-semantic level rather than at a conceptual level. Copyright (c) 2005 APA, all rights reserved.
Category-specific semantic deficits: the role of familiarity and property type reexamined.
Bunn, E M; Tyler, L K; Moss, H E
1998-07-01
Category-specific deficits for living things have been explained variously as an artifact due to differences in the familiarity of concepts in different categories (E. Funnell & J. Sheridan, 1992) or as the result of an underlying impairment to sensory knowledge (E. K. Warrington & T. Shallice, 1984). Efforts to test these hypotheses empirically have been hindered by the shortcomings of currently available stimulus materials. A new set of stimuli are described that the authors developed to overcome the limitations of existing sets. The set consists of color photographs, matched across categories for familiarity and visual complexity. This set was used to test the semantic knowledge of a classic patient, J.B.R. (E. K. Warrington & T. Shallice, 1984). The results suggest that J.B.R.'s deficit for living things cannot be explained in terms of familiarity effects and that the most severely affected categories are those whose identification is most dependent on sensory information.
Buhk, J-H; Groth, M; Sehner, S; Fiehler, J; Schmidt, N O; Grzyska, U
2013-09-01
To evaluate a novel algorithm for correcting beam hardening artifacts caused by metal implants in computed tomography performed on a C-arm angiography system equipped with a flat panel (FP-CT). 16 datasets of cerebral FP-CT acquisitions after coil embolization of brain aneurysms in the context of acute subarachnoid hemorrhage have been reconstructed by applying a soft tissue kernel with and without a novel reconstruction filter for metal artifact correction. Image reading was performed in multiplanar reformations (MPR) in average mode on a dedicated radiological workplace in comparison to the preinterventional native multisection CT (MS-CT) scan serving as the anatomic gold standard. Two independent radiologists performed image scoring following a defined scale in direct comparison of the image data with and without artifact correction. For statistical analysis, a random intercept model was calculated. The inter-rater agreement was very high (ICC = 86.3 %). The soft tissue image quality and visualization of the CSF spaces at the level of the implants was substantially improved. The additional metal artifact correction algorithm did not induce impairment of the subjective image quality in any other brain regions. Adding metal artifact correction to FP-CT in an acute postinterventional setting helps to visualize the close vicinity of the aneurysm at a generally consistent image quality. © Georg Thieme Verlag KG Stuttgart · New York.
Development and validation of the Bush-Francis Catatonia Rating Scale - Brazilian version.
Nunes, Ana Letícia Santos; Filgueiras, Alberto; Nicolato, Rodrigo; Alvarenga, Jussara Mendonça; Silveira, Luciana Angélica Silva; Silva, Rafael Assis da; Cheniaux, Elie
2017-01-01
This article aims to describe the adaptation and translation process of the Bush-Francis Catatonia Rating Scale (BFCRS) and its reduced version, the Bush-Francis Catatonia Screening Instrument (BFCSI) for Brazilian Portuguese, as well as its validation. Semantic equivalence processes included four steps: translation, back translation, evaluation of semantic equivalence and a pilot-study. Validation consisted of simultaneous applications of the instrument in Portuguese by two examiners in 30 catatonic and 30 non-catatonic patients. Total scores averaged 20.07 for the complete scale and 7.80 for its reduced version among catatonic patients, compared with 0.47 and 0.20 among non-catatonic patients, respectively. Overall values of inter-rater reliability of the instruments were 0.97 for the BFCSI and 0.96 for the BFCRS. The scale's version in Portuguese proved to be valid and was able to distinguish between catatonic and non-catatonic patients. It was also reliable, with inter-evaluator reliability indexes as high as those of the original instrument.
The Semantic eScience Framework
NASA Astrophysics Data System (ADS)
McGuinness, Deborah; Fox, Peter; Hendler, James
2010-05-01
The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved in a modular approach to the semantic encodings (i.e. ontologies) performed in community settings, i.e. an ontology framework into which specific applications, all the way up to communities, can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for the future (certain) technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses and/or educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory. The VSTO utilizes leading-edge knowledge representation, query and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing and atmospheric structure) using extensions to existing modular ontologies, and used the VSTO data framework while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end-user questions such as "Why does this image look bad?" http://tw.rpi.edu/portal/SESF
The Semantic eScience Framework
NASA Astrophysics Data System (ADS)
Fox, P. A.; McGuinness, D. L.
2009-12-01
The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved in a modular approach to the semantic encodings (i.e. ontologies) performed in community settings, i.e. an ontology framework into which specific applications, all the way up to communities, can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for the future (certain) technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses and/or educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory. The VSTO utilizes leading-edge knowledge representation, query and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing and atmospheric structure) using extensions to existing modular ontologies, and used the VSTO data framework while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end-user questions such as "Why does this image look bad?"
NASA Astrophysics Data System (ADS)
Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.
2018-05-01
In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data on both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.
A Metaphorical Strategy: The Formation of the Semantics of Derived Adjectives
ERIC Educational Resources Information Center
Sadikova, Aida G.; Kajumova, Diana F.; Davletbaeva, Diana N.; Khasanova, Oxana V.; Karimova, Anna A.; Valiullina, Gulnaz F.
2016-01-01
The relevance of the problem presented stems from the fact that the reinterpretation of the meanings of the base words and the formation of the lexical meaning of a derived adjective occur according to the laws of associative thinking, and should therefore be explained through semantic-cognitive analysis. The goal of the article is the description and comparison…
ERIC Educational Resources Information Center
Jared, Debra; Jouravlev, Olessia; Joanisse, Marc F.
2017-01-01
Decomposition theories of morphological processing in visual word recognition posit an early morpho-orthographic parser that is blind to semantic information, whereas parallel distributed processing (PDP) theories assume that the transparency of orthographic-semantic relationships influences processing from the beginning. To test these…
Hagmann, Cornelia Franziska; Robertson, Nicola Jayne; Azzopardi, Denis
2006-12-01
This is a case report and a descriptive study demonstrating that artifacts are common during long-term recording of amplitude-integrated electroencephalograms and may lead to erroneous classification of the amplitude-integrated electroencephalogram trace. Artifacts occurred in 12% of 200 hours of recording time sampled from a representative sample of 20 infants with neonatal encephalopathy. Artifacts derived from electrical or movement interference occurred with similar frequency; both types of artifacts influenced the voltage and width of the amplitude-integrated electroencephalogram band. This is important knowledge especially if amplitude-integrated electroencephalogram is used as a selection tool for neuroprotection intervention studies.
ERIC Educational Resources Information Center
Mountain, Victoria Snow
This project includes an assortment of artifacts designed to inform high school students about the variety of geographical and cultural regions of Mexico. The artifacts, derived from seven different geographical/cultural regions of Mexico, include maps, posters, objects that symbolize the regional culture, and typical regional costumes, music, and…
The Role of Semantic Clustering in Optimal Memory Foraging.
Montez, Priscilla; Thompson, Graham; Kello, Christopher T
2015-11-01
Recent studies of semantic memory have investigated two theories of optimal search adopted from the animal foraging literature: Lévy flights and marginal value theorem. Each theory makes different simplifying assumptions and addresses different findings in search behaviors. In this study, an experiment is conducted to test whether clustering in semantic memory may play a role in evidence for both theories. Labeled magnets and a whiteboard were used to elicit spatial representations of semantic knowledge about animals. Category recall sequences from a separate experiment were used to trace search paths over the spatial representations of animal knowledge. Results showed that spatial distances between animal names arranged on the whiteboard were correlated with inter-response intervals (IRIs) during category recall, and distributions of both dependent measures approximated inverse power laws associated with Lévy flights. In addition, IRIs were relatively shorter when paths first entered animal clusters, and longer when they exited clusters, which is consistent with marginal value theorem. In conclusion, area-restricted searches over clustered semantic spaces may account for two different patterns of results interpreted as supporting two different theories of optimal memory foraging. Copyright © 2015 Cognitive Science Society, Inc.
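The entry/exit pattern in inter-response intervals (IRIs) can be sketched in a few lines; the response times and cluster labels below are invented for illustration (the study derived spatial distances from whiteboard arrangements and recall sequences from a separate experiment).

```python
# Sketch: compare inter-response intervals (IRIs) within semantic
# clusters vs. at cluster transitions, as in optimal-foraging analyses
# of category recall. Times (s) and cluster labels are invented.

def iri_by_transition(times, clusters):
    """Split successive IRIs into within- and between-cluster sets."""
    within, between = [], []
    for i in range(1, len(times)):
        iri = times[i] - times[i - 1]
        (within if clusters[i] == clusters[i - 1] else between).append(iri)
    return within, between

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical animal-name recall sequence.
times = [0.0, 1.2, 2.1, 6.5, 7.4, 8.1, 13.0, 13.9]
clusters = ["pets", "pets", "pets", "farm", "farm", "farm", "sea", "sea"]

within, between = iri_by_transition(times, clusters)
print(mean(within), mean(between))  # within-cluster IRIs are shorter
```

Marginal value theorem predicts exactly this asymmetry: long IRIs when leaving a depleted cluster (patch), short ones after entering a fresh one.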
Reference geometry-based detection of (4D-)CT motion artifacts: a feasibility study
NASA Astrophysics Data System (ADS)
Werner, René; Gauer, Tobias
2015-03-01
Respiration-correlated computed tomography (4D or 3D+t CT) can be considered as standard of care in radiation therapy treatment planning for lung and liver lesions. The decision about an application of motion management devices and the estimation of patient-specific motion effects on the dose distribution relies on precise motion assessment in the planning 4D CT data - which is impeded in case of CT motion artifacts. The development of image-based/post-processing approaches to reduce motion artifacts would benefit from precise detection and localization of the artifacts. Simple slice-by-slice comparison of intensity values and threshold-based analysis of related metrics suffer from - depending on the threshold - high false-positive or -negative rates. In this work, we propose exploiting prior knowledge about 'ideal' (= artifact-free) reference geometries to stabilize metric-based artifact detection by transferring (multi-)atlas-based concepts to this specific task. Two variants are introduced and evaluated: (S1) analysis and comparison of warped atlas data obtained by repeated non-linear atlas-to-patient registration with different levels of regularization; (S2) direct analysis of vector field properties (divergence, curl magnitude) of the atlas-to-patient transformation. Feasibility of approaches (S1) and (S2) is evaluated by motion-phantom data and intra-subject experiments (four patients) as well as - adopting a multi-atlas strategy - inter-subject investigations (twelve patients involved). It is demonstrated that especially sorting/double structure artifacts can be precisely detected and localized by (S1). In contrast, (S2) suffers from high false positive rates.
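Strategy (S2) inspects the divergence and curl magnitude of the atlas-to-patient transformation. A minimal sketch with a synthetic 2D displacement field (finite differences via NumPy; the actual method operates on 3D registration output) might look like:

```python
import numpy as np

# Sketch of (S2): divergence and curl magnitude of a 2D displacement
# field via finite differences. The field here is synthetic; in the
# paper it comes from non-linear atlas-to-patient registration.

def divergence_2d(ux, uy, spacing=1.0):
    dux_dx = np.gradient(ux, spacing, axis=1)  # d(ux)/dx
    duy_dy = np.gradient(uy, spacing, axis=0)  # d(uy)/dy
    return dux_dx + duy_dy

def curl_magnitude_2d(ux, uy, spacing=1.0):
    duy_dx = np.gradient(uy, spacing, axis=1)  # d(uy)/dx
    dux_dy = np.gradient(ux, spacing, axis=0)  # d(ux)/dy
    return np.abs(duy_dx - dux_dy)

y, x = np.mgrid[0:32, 0:32].astype(float)
ux, uy = 0.1 * x, 0.1 * y  # uniform expansion: divergence 0.2, curl 0

div = divergence_2d(ux, uy)
curl = curl_magnitude_2d(ux, uy)
print(float(div.mean()), float(curl.max()))  # ≈ 0.2 and 0.0
```

Localized spikes in these maps (rather than the smooth values of this toy field) are what would flag candidate artifact regions.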
Herbert, C; Kissler, J
2014-09-26
In sentences such as dogs cannot fly/bark, evaluation of the truth-value of the sentence is assumed to occur after the negation has been integrated into the sentence structure. Moreover, negation processing and truth-value processing are considered effortful processes, whereas processing of the semantic relatedness of the words within sentences is thought to occur automatically. In the present study, modulation of event-related brain potentials (N400 and late positive potential, LPP) was investigated during an implicit task (silent listening) and active truth-value evaluation, to test these theoretical assumptions and to determine whether truth-value evaluation is modulated by the way participants processed the negated information implicitly prior to truth-value verification. Participants first listened to negated sentences and then evaluated these sentences for their truth-value in an active evaluation task. During passive listening, the LPP was generally more pronounced for targets in false negative (FN) than true negative (TN) sentences, indicating enhanced attention allocation to semantically related but false targets. N400 modulation by truth-value (FN>TN) was observed in 11 out of 24 participants. However, during active evaluation, processing of semantically unrelated but true targets (TN) elicited larger N400 and LPP amplitudes as well as a pronounced frontal negativity. This pattern was particularly prominent in those 11 individuals whose N400 modulation during silent listening indicated that they were more sensitive to violations of the truth-value than to semantic priming effects. The results provide evidence for implicit truth-value processing during silent listening to negated sentences and for task dependence related to inter-individual differences in implicit negation processing. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography
NASA Astrophysics Data System (ADS)
Borg, Leise; Jørgensen, Jakob S.; Frikel, Jürgen; Sporring, Jon
2017-12-01
Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars, drastic streak artifacts arise in the filtered backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts, a mathematical model of variable-truncation data as a function of metal bar radius and distance to sample is derived and verified numerically and with experimental data. The model accurately describes the arising variable-truncation artifacts across simulated variations of the experimental setup. Three variable-truncation artifact-reduction methods are proposed, all aimed at addressing the sinogram discontinuities that are shown to be the source of the streaks. The ‘reduction to limited angle’ (RLA) method simply keeps only non-truncated projections; the ‘detector-directed smoothing’ (DDS) method smooths the discontinuities; while the ‘reflexive boundary condition’ (RBC) method enforces a zero derivative at the discontinuities. Experimental results using both simulated and real data show that the proposed methods effectively reduce variable-truncation artifacts. The RBC method is found to provide the best artifact reduction and preservation of image features under both visual and quantitative assessment. The analysis and artifact-reduction methods are designed in the context of FBP reconstruction, motivated by the computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray tomography experiments.
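As a hedged reading of the RBC idea (not the authors' exact implementation), enforcing a zero derivative at a truncation edge can be approximated in 1D by mirror-reflecting interior samples into the missing region:

```python
# Hedged 1D illustration of the 'reflexive boundary condition' (RBC):
# fill truncated samples by mirror reflection about the truncation
# edges, so the derivative at each edge is (approximately) zero.

def rbc_fill(row, valid_lo, valid_hi):
    """Fill samples outside [valid_lo, valid_hi) by reflection."""
    out = list(row)
    n = len(row)
    for i in range(valid_lo):            # left of the valid region
        out[i] = row[min(2 * valid_lo - i, n - 1)]
    for i in range(valid_hi, n):         # right of the valid region
        out[i] = row[max(2 * (valid_hi - 1) - i, 0)]
    return out

row = [0, 0, 5.0, 6.0, 7.0, 8.0, 0, 0]  # zeros mark truncated samples
filled = rbc_fill(row, 2, 6)
print(filled)  # [7.0, 6.0, 5.0, 6.0, 7.0, 8.0, 7.0, 6.0]
```

The mirrored samples make the central difference at each truncation edge vanish, which is the zero-derivative property the RBC method enforces on the sinogram before FBP.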
Do U Txt? Event-Related Potentials to Semantic Anomalies in Standard and Texted English
ERIC Educational Resources Information Center
Berger, Natalie I.; Coch, Donna
2010-01-01
Texted English is a hybrid, technology-based language derived from standard English modified to facilitate ease of communication via instant and text messaging. We compared semantic processing of texted and standard English sentences by recording event-related potentials in a classic semantic incongruity paradigm designed to elicit an N400 effect.…
Towards comprehensive syntactic and semantic annotations of the clinical narrative
Albright, Daniel; Lanfranchi, Arrick; Fredriksen, Anwen; Styler, William F; Warner, Colin; Hwang, Jena D; Choi, Jinho D; Dligach, Dmitriy; Nielsen, Rodney D; Martin, James; Ward, Wayne; Palmer, Martha; Savova, Guergana K
2013-01-01
Objective To create annotated clinical narratives with layers of syntactic and semantic labels to facilitate advances in clinical natural language processing (NLP). To develop NLP algorithms and open source components. Methods Manual annotation of a clinical narrative corpus of 127 606 tokens following the Treebank schema for syntactic information, PropBank schema for predicate-argument structures, and the Unified Medical Language System (UMLS) schema for semantic information. NLP components were developed. Results The final corpus consists of 13 091 sentences containing 1772 distinct predicate lemmas. Of the 766 newly created PropBank frames, 74 are verbs. There are 28 539 named entity (NE) annotations spread over 15 UMLS semantic groups, one UMLS semantic type, and the Person semantic category. The most frequent annotations belong to the UMLS semantic groups of Procedures (15.71%), Disorders (14.74%), Concepts and Ideas (15.10%), Anatomy (12.80%), Chemicals and Drugs (7.49%), and the UMLS semantic type of Sign or Symptom (12.46%). Inter-annotator agreement results: Treebank (0.926), PropBank (0.891–0.931), NE (0.697–0.750). The part-of-speech tagger, constituency parser, dependency parser, and semantic role labeler are built from the corpus and released open source. A significant limitation uncovered by this project is the need for the NLP community to develop a widely agreed-upon schema for the annotation of clinical concepts and their relations. Conclusions This project takes a foundational step towards bringing the field of clinical NLP up to par with NLP in the general domain. The corpus creation and NLP components provide a resource for research and application development that would have been previously impossible. PMID:23355458
Picking Up Artifacts: Storyboarding as a Gateway to Reuse
NASA Astrophysics Data System (ADS)
Wahid, Shahtab; Branham, Stacy M.; Cairco, Lauren; McCrickard, D. Scott; Harrison, Steve
Storyboarding offers designers the opportunity to illustrate a visual narrative of use. Because designers often refer to past ideas, we argue storyboards can be constructed by reusing shared artifacts. We present a study in which we explore how designers reuse artifacts consisting of images and rationale during storyboard construction. We find images can aid in accessing rationale and that connections among features aid in deciding what to reuse, creating new artifacts, and constructing. Based on requirements derived from our findings, we present a storyboarding tool, PIC-UP, to facilitate artifact sharing and reuse and evaluate its use in an exploratory study. We conclude with remarks on facilitating reuse and future work.
Scientific Knowledge Discovery in Complex Semantic Networks of Geophysical Systems
NASA Astrophysics Data System (ADS)
Fox, P.
2012-04-01
The vast majority of explorations of the Earth's systems are limited in their ability to effectively explore the most important (often most difficult) problems because they are forced to interconnect at the data-element, or syntactic, level rather than at a higher scientific, or semantic, level. Recent successes in the application of complex network theory and algorithms to climate data raise expectations that more general graph-based approaches offer the opportunity for new discoveries. In the past ~5 years in the natural sciences there has been substantial progress in providing both specialists and non-specialists the ability to describe, in machine-readable form, geophysical quantities and relations among them in meaningful and natural ways, effectively breaking the prior syntax barrier. The corresponding open-world semantics and reasoning provide higher-level interconnections: that is, semantics provided around the data structures, using semantically equipped tools, and semantically aware interfaces between science application components, allowing for discovery at the knowledge level. More recently, formal semantic approaches to continuous and aggregate physical processes are beginning to show promise and are soon likely to be ready to apply to geoscientific systems. To illustrate these opportunities, this presentation describes two application examples featuring domain vocabulary (ontology) and property relations (named and typed edges in the graphs): first, a climate knowledge discovery pilot encoding and exploring CMIP5 catalog information, with the eventual goal to encode and explore CMIP5 data; second, a multi-stakeholder knowledge network for integrated assessments in marine ecosystems, where the data are highly inter-disciplinary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castillo, S; Castillo, R; Castillo, E
2014-06-15
Purpose: Artifacts arising from the 4D CT acquisition and post-processing methods add systematic uncertainty to the treatment planning process. We propose an alternate cine 4D CT acquisition and post-processing method to consistently reduce artifacts, and explore patient parameters indicative of image quality. Methods: In an IRB-approved protocol, 18 patients with primary thoracic malignancies received a standard cine 4D CT acquisition followed by an oversampling 4D CT that doubled the number of images acquired. A second cohort of 10 patients received the clinical 4D CT plus 3 oversampling scans for intra-fraction reproducibility. The clinical acquisitions were processed by the standard phase sorting method. The oversampling acquisitions were processed using Dijkstra's algorithm to optimize an artifact metric over the available image data. Image quality was evaluated with a one-way mixed ANOVA model using a correlation-based artifact metric calculated from the final 4D CT image sets. Spearman correlations and a linear mixed model tested the association between breathing parameters, patient characteristics, and image quality. Results: The oversampling 4D CT scans reduced artifact presence significantly, by 27% and 28% for the first and second cohort, respectively. From cohort 2, the inter-replicate deviation for the oversampling method was within approximately 13% of the cross-scan average at the 0.05 significance level. Artifact presence for both clinical and oversampling methods was significantly correlated with breathing period (ρ=0.407, p-value<0.032 clinical; ρ=0.296, p-value<0.041 oversampling). Artifact presence in the oversampling method was significantly correlated with the amount of data acquired (ρ=-0.335, p-value<0.02), indicating decreased artifact presence with increased breathing cycles per scan location.
Conclusion: The 4D CT oversampling acquisition with optimized sorting reduced artifact presence significantly and reproducibly compared to the phase-sorted clinical acquisition.
Fan, Jung-Wei; Friedman, Carol
2011-01-01
Biomedical natural language processing (BioNLP) is a useful technique that unlocks valuable information stored in textual data for practice and/or research. Syntactic parsing is a critical component of BioNLP applications that rely on correctly determining the sentence and phrase structure of free text. In addition to dealing with the vast amount of domain-specific terms, a robust biomedical parser needs to model the semantic grammar to obtain viable syntactic structures. With either a rule-based or corpus-based approach, the grammar engineering process requires substantial time and knowledge from experts, and does not always yield a semantically transferable grammar. To reduce the human effort and to promote semantic transferability, we propose an automated method for deriving a probabilistic grammar based on a training corpus consisting of concept strings and semantic classes from the Unified Medical Language System (UMLS), a comprehensive terminology resource widely used by the community. The grammar is designed to specify noun phrases only due to the nominal nature of the majority of biomedical terminological concepts. Evaluated on manually parsed clinical notes, the derived grammar achieved a recall of 0.644, precision of 0.737, and average cross-bracketing of 0.61, which demonstrated better performance than a control grammar with the semantic information removed. Error analysis revealed shortcomings that could be addressed to improve performance. The results indicated the feasibility of an approach which automatically incorporates terminology semantics in the building of an operational grammar. Although the current performance of the unsupervised solution does not adequately replace manual engineering, we believe once the performance issues are addressed, it could serve as an aide in a semi-supervised solution. PMID:21549857
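The relative-frequency estimation underlying such a probabilistic grammar can be sketched as follows; the semantic classes and example strings are invented for illustration, whereas the paper trains on UMLS concept strings and their semantic classes.

```python
from collections import Counter

# Minimal sketch of estimating noun-phrase rule probabilities by
# relative frequency over semantic-class sequences. The classes and
# example strings are invented; the paper trains on UMLS concept
# strings and their semantic classes.

training = [
    ("Disorder",),                 # e.g. "pneumonia"
    ("BodyPart", "Disorder"),      # e.g. "lung cancer"
    ("BodyPart", "Disorder"),      # e.g. "liver failure"
    ("Chemical", "Procedure"),     # e.g. "insulin therapy"
]

counts = Counter(training)
total = sum(counts.values())
grammar = {rhs: n / total for rhs, n in counts.items()}

for rhs, p in sorted(grammar.items(), key=lambda kv: -kv[1]):
    print("NP ->", " ".join(rhs), f"[{p:.2f}]")
```

A parser can then score candidate noun-phrase bracketings by the probability of their class sequences, which is the sense in which the grammar is "probabilistic".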
Automated Inspection of Power Line Corridors to Measure Vegetation Undercut Using Uav-Based Images
NASA Astrophysics Data System (ADS)
Maurer, M.; Hofer, M.; Fraundorfer, F.; Bischof, H.
2017-08-01
Power line corridor inspection is a time-consuming task that is performed mostly manually. As the development of UAVs has made huge progress in recent years, and photogrammetric computer vision systems have become well established, it is time to further automate inspection tasks. In this paper we present an automated processing pipeline to inspect vegetation undercuts of power line corridors. For this, the area of inspection is reconstructed, geo-referenced, semantically segmented, and inter-class distance measurements are calculated. The presented pipeline performs an automated selection of the proper 3D reconstruction method for wiry objects (power lines) on the one hand and solid objects (the surroundings) on the other. The automated selection is realized by performing pixel-wise semantic segmentation of the input images using a Fully Convolutional Neural Network. Due to the geo-referenced semantic 3D reconstructions, a documentation of areas where maintenance work has to be performed is inherently included in the distance measurements and can be extracted easily. We evaluate the influence of the semantic segmentation on the 3D reconstruction and show that the automated semantic separation into wiry and dense objects in the 3D reconstruction routine improves the quality of the vegetation undercut inspection. We show the generalization of the semantic segmentation to datasets acquired using different acquisition routines and to varied seasons in time.
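The inter-class distance measurement at the core of the inspection can be sketched as a minimum pairwise distance between labeled 3D point sets; the points below are synthetic, and a production pipeline would use spatial indexing rather than brute force.

```python
# Sketch of an inter-class distance check for vegetation undercut:
# minimum pairwise distance between 3D points labeled as power line
# and as vegetation. Points are synthetic; a real pipeline would use
# a spatial index (e.g. a k-d tree) instead of brute force.

def min_inter_class_distance(pts_a, pts_b):
    return min(
        sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        for p in pts_a for q in pts_b
    )

power_line = [(0.0, 0.0, 10.0), (5.0, 0.0, 10.0)]
vegetation = [(0.0, 0.0, 4.0), (5.0, 1.0, 6.5)]

clearance = min_inter_class_distance(power_line, vegetation)
print(round(clearance, 2))  # 3.64
```

Comparing such clearances against a regulatory safety distance is what turns the geo-referenced reconstruction into an actionable maintenance report.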
The Role of Meaning in Past-Tense Inflection
Bandi-Rao, Shoba; Murphy, Gregory L.
2009-01-01
Although English verbs can be either regular (walk-walked) or irregular (sing-sang), “denominal verbs” that are derived from nouns, such as the use of the verb ring derived from the noun a ring, take the regular form even if they are homophonous with an existing irregular verb: The soldiers ringed the city rather than *The soldiers rang the city. Is this regularization due to a semantic difference from the usual verb, or is it due to the application of the default rule, namely VERB + -ed suffix? In Experiment 1, participants rated the semantic similarity of the extended senses of polysemous verbs and denominal verbs to their central senses. Experiment 2 examined the acceptability of the regular and irregular past-tenses of the different verbs. The results showed that all the denominal verbs were rated as more acceptable for the regular inflection than the same verbs used polysemously, even though the two were semantically equally similar to the central meaning. Thus, the derivation of the verb (nominal or verbal) determined the past-tense preference more than semantic variables, consistent with dual-route models of verb inflection. PMID:16839538
The SeaHorn Verification Framework
NASA Technical Reports Server (NTRS)
Gurfinkel, Arie; Kahsai, Temesghen; Komuravelli, Anvesh; Navas, Jorge A.
2015-01-01
In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design that separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state-of-the-art in software model checking and abstract interpretation for verification, and (d) uses Horn-clauses as an intermediate language to represent verification conditions which simplifies interfacing with multiple verification tools based on Horn-clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
Gumz, Antje; Neubauer, Karolin; Horstkotte, Julia Katharina; Geyer, Michael; Löwe, Bernd; Murray, Alexandra M.; Kästner, Denise
2017-01-01
Objective Knowing which specific verbal techniques “good” therapists use in their daily work is important for training and evaluation purposes. In order to systematize what is being practiced in the field, our aim was to empirically identify verbal techniques applied in psychodynamic sessions and to differentiate them according to their basic semantic features using a bottom-up, qualitative approach. Method Mixed-Method-Design: In a comprehensive qualitative study, types of techniques were identified at the level of utterances based on transcribed psychodynamic therapy sessions using Qualitative Content Analysis (4211 utterances). The definitions of the identified categories were successively refined and modified until saturation was achieved. In a subsequent quantitative study, inter-rater reliability was assessed both at the level of utterances (n = 8717) and at the session level (n = 38). The convergent validity of the categories was investigated by analyzing associations with the Interpretive and Supportive Technique Scale (ISTS). Results The inductive approach resulted in a classification system with 37 categories (Psychodynamic Interventions List, PIL). According to their semantic content, the categories can be allocated to three dimensions: form (24 categories), thematic content (9) and temporal focus (4). Most categories showed good or excellent inter-rater reliability and expected associations with the ISTS were predominantly confirmed. The rare use of the residual category “Other” suggests that the identified categories might comprehensively describe the breadth of applied techniques. Conclusions The atheoretical orientation and the clear focus on overt linguistic features should enable the PIL to be used without intensive training or prior theoretical knowledge. The PIL can be used to investigate the links between verbal techniques derived from practice and micro-outcomes (at the session level) as well as the overall therapeutic outcomes. 
This approach might enable us to determine to what extent the outcome of therapy is due to unintended or non-theoretically relevant techniques. PMID:28837582
Consistency Checking in Hypothesis Generation
1979-07-01
In a semantic network model of memory (Collins and Loftus, 1975), concepts are represented as nodes interconnected by relational pathways…
The Inter-Temporal Aspect of Well-Being and Societal Progress
ERIC Educational Resources Information Center
Sicherl, Pavle
2007-01-01
The perceptions on well-being and societal progress are influenced also by the quantitative indicators and measures used in the measurement, presentation and semantics of discussing these issues. The article presents a novel generic statistical measure S-time-distance, with clear interpretability that delivers a broader concept to look at data, to…
Lee, Jinseok; McManus, David D; Merchant, Sneh; Chon, Ki H
2012-06-01
We present a real-time method for the detection of motion and noise (MN) artifacts, which frequently interfere with accurate rhythm assessment when ECG signals are collected from Holter monitors. Our MN artifact detection approach involves two stages. The first stage uses the first-order intrinsic mode function (F-IMF) from empirical mode decomposition to isolate the artifacts' dynamics, as they are largely concentrated in the higher frequencies. The second stage applies three statistical measures to the F-IMF time series to look for the randomness and variability that are hallmark signatures of MN artifacts: the Shannon entropy, mean, and variance. We then use the receiver operating characteristic curve on Holter data from 15 healthy subjects to derive threshold values for these statistical measures that separate clean segments from those containing MN artifacts. With threshold values derived from the 15 training data sets, we tested our algorithms on 30 additional healthy subjects. Our results show that our algorithms are able to detect the presence of MN artifacts with a sensitivity and specificity of 96.63% and 94.73%, respectively. In addition, when we applied our previously developed algorithm for atrial fibrillation (AF) detection to segments labeled as free from MN artifacts, the specificity increased from 73.66% to 85.04% without loss of sensitivity (74.48%-74.62%) on six subjects diagnosed with AF. Finally, the computation time was less than 0.2 s using MATLAB code, indicating that real-time application of the algorithms is possible for Holter monitoring.
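The second-stage statistics are straightforward to compute. Below is a minimal Python sketch of that stage, assuming an F-IMF segment is already available as a 1-D array; the EMD step, the histogram binning used for the entropy, and the decision thresholds are assumptions not specified in the abstract.

```python
import numpy as np

def mn_artifact_measures(f_imf, n_bins=16):
    """Shannon entropy, mean (absolute), and variance of an F-IMF segment.
    High entropy and variance are the randomness/variability signatures
    of motion-and-noise artifacts; decision thresholds would be derived
    from an ROC analysis on labeled training data."""
    x = np.asarray(f_imf, dtype=float)
    hist, _ = np.histogram(x, bins=n_bins)   # amplitude histogram
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))        # in bits
    return entropy, np.abs(x).mean(), x.var()

rng = np.random.default_rng(0)
quiet = 0.05 * np.sin(np.linspace(0, 8 * np.pi, 1000))  # low-amplitude residue
noisy = rng.normal(0.0, 1.0, 1000)                      # MN-artifact-like segment
```

A segment would then be flagged as artifactual when the measures exceed their ROC-derived thresholds.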
Spatio-Temporal Change Modeling of Lulc: a Semantic Kriging Approach
NASA Astrophysics Data System (ADS)
Bhattacharjee, S.; Ghosh, S. K.
2015-07-01
Spatio-temporal land-use/land-cover (LULC) change modeling is important for forecasting the future LULC distribution, which may facilitate natural resource management, urban planning, etc. The spatio-temporal change in LULC trend often exhibits non-linear behavior due to various dynamic factors, such as human intervention (e.g., urbanization) and environmental factors. Hence, proper forecasting of LULC distribution should involve the study and trend modeling of historical data. The existing literature reports that meteorological attributes (e.g., NDVI, LST, MSI) are semantically related to the terrain. Being influenced by terrestrial dynamics, the temporal changes of these attributes depend on the LULC properties. Hence, incorporating meteorological knowledge into the temporal prediction process may help in developing an accurate forecasting model. This work attempts to study the change in inter-annual LULC pattern and the distribution of different meteorological attributes of a region in Kolkata (a metropolitan city in India) during the years 2000-2010 and to forecast the future spread of LULC using a semantic kriging (SemK) approach. A new variant of time-series SemK, namely Rev-SemKts, is proposed to capture the multivariate semantic associations between different attributes. From empirical analysis, it may be observed that the augmentation of semantic knowledge in spatio-temporal modeling of meteorological attributes facilitates more precise forecasting of the LULC pattern.
What puts the how in where? Tool use and the divided visual streams hypothesis.
Frey, Scott H
2007-04-01
An influential theory suggests that the dorsal (occipito-parietal) visual stream computes representations of objects for purposes of guiding actions (determining 'how') independently of ventral (occipito-temporal) stream processes supporting object recognition and semantic processing (determining 'what'). Yet, the ability of the dorsal stream alone to account for one of the most common forms of human action, tool use, is limited. While experience-dependent modifications to existing dorsal stream representations may explain simple tool use behaviors (e.g., using sticks to extend reach) found among a variety of species, skillful use of manipulable artifacts (e.g., cups, hammers, pencils) requires in addition access to semantic representations of objects' functions and uses. Functional neuroimaging suggests that this latter information is represented in a left-lateralized network of temporal, frontal and parietal areas. I submit that the well-established dominance of the human left hemisphere in the representation of familiar skills stems from the ability for this acquired knowledge to influence the organization of actions within the dorsal pathway.
Smarter Earth Science Data System
NASA Technical Reports Server (NTRS)
Huang, Thomas
2013-01-01
The explosive growth in Earth observational data in the recent decade demands a better method of interoperability across heterogeneous systems. The Earth science data system community has mastered the art in storing large volume of observational data, but it is still unclear how this traditional method scale over time as we are entering the age of Big Data. Indexed search solutions such as Apache Solr (Smiley and Pugh, 2011) provides fast, scalable search via keyword or phases without any reasoning or inference. The modern search solutions such as Googles Knowledge Graph (Singhal, 2012) and Microsoft Bing, all utilize semantic reasoning to improve its accuracy in searches. The Earth science user community is demanding for an intelligent solution to help them finding the right data for their researches. The Ontological System for Context Artifacts and Resources (OSCAR) (Huang et al., 2012), was created in response to the DARPA Adaptive Vehicle Make (AVM) programs need for an intelligent context models management system to empower its terrain simulation subsystem. The core component of OSCAR is the Environmental Context Ontology (ECO) is built using the Semantic Web for Earth and Environmental Terminology (SWEET) (Raskin and Pan, 2005). This paper presents the current data archival methodology within a NASA Earth science data centers and discuss using semantic web to improve the way we capture and serve data to our users.
Semantic Pattern Analysis for Verbal Fluency Based Assessment of Neurological Disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R; Ainsworth, Keela C; Brown, Tyler C
In this paper, we present preliminary results of semantic pattern analysis of verbal fluency tests used for assessing cognitive, psychological and neuropsychological disorders. We posit that recent advances in semantic reasoning and artificial intelligence can be combined to create a standardized computer-aided diagnosis tool to automatically evaluate and interpret verbal fluency tests. Towards that goal, we derive novel semantic similarity metrics (phonetic, phonemic and conceptual) and present the predictive capability of these metrics on a de-identified dataset of participants with and without neurological disorders.
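The abstract does not specify how its phonetic metric is computed. As an illustration only, a hypothetical phonetic-similarity score between two verbal-fluency responses can be built from Soundex codes (a classic phonetic encoding) compared with a string-similarity ratio:

```python
import difflib

def soundex(word):
    """Classic four-character Soundex code; a standard phonetic encoding,
    used here as a stand-in for the paper's unspecified phonetic metric."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out = word[0].upper()
    last = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != last:   # keep new codes, collapse repeats
            out += code
        if ch not in "hw":          # h and w do not break runs of a code
            last = code
    return (out + "000")[:4]        # pad/truncate to letter + 3 digits

def phonetic_similarity(w1, w2):
    # similarity of the two Soundex encodings, in [0, 1]
    return difflib.SequenceMatcher(None, soundex(w1), soundex(w2)).ratio()
```

Words that sound alike map to identical codes (e.g., "Robert" and "Rupert" both encode as R163), giving a similarity of 1.0.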
A level set method for cupping artifact correction in cone-beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Shipeng; Li, Haibo; Ge, Qi
2015-08-15
Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts.
Bauer, Patricia J; Blue, Shala N; Xu, Aoxiang; Esposito, Alena G
2016-07-01
We investigated 7- to 10-year-old children's productive extension of semantic memory through self-generation of new factual knowledge derived through integration of separate yet related facts learned through instruction or through reading. In Experiment 1, an experimenter read the to-be-integrated facts. Children successfully learned and integrated the information and used it to further extend their semantic knowledge, as evidenced by high levels of correct responses in open-ended and forced-choice testing. In Experiment 2, on half of the trials, the to-be-integrated facts were read by an experimenter (as in Experiment 1) and on half of the trials, children read the facts themselves. Self-generation performance was high in both conditions (experimenter- and self-read); in both conditions, self-generation of new semantic knowledge was related to an independent measure of children's reading comprehension. In Experiment 3, the way children deployed cognitive resources during reading was predictive of their subsequent recall of newly learned information derived through integration. These findings indicate self-generation of new semantic knowledge through integration in school-age children as well as relations between this productive means of extension of semantic memory and cognitive processes engaged during reading. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A Formal Theory for Modular ERDF Ontologies
NASA Astrophysics Data System (ADS)
Analyti, Anastasia; Antoniou, Grigoris; Damásio, Carlos Viegas
The success of the Semantic Web is impossible without some form of modularity, encapsulation, and access control. In an earlier paper, we extended RDF graphs with weak and strong negation, as well as derivation rules, and defined the ERDF #n-stable model semantics of the extended RDF framework (ERDF), extending RDF(S) semantics. In this paper, we propose a framework for modular ERDF ontologies, called the modular ERDF framework, which enables collaborative reasoning over a set of ERDF ontologies while also providing support for hidden knowledge. In particular, the modular ERDF stable model semantics of modular ERDF ontologies is defined, extending the ERDF #n-stable model semantics. Our proposed framework supports local semantics and different points of view, local closed-world and open-world assumptions, and scoped negation-as-failure. Several complexity results are provided.
Folk Etymology (In English and Elsewhere).
ERIC Educational Resources Information Center
Poruciuc, Adrian
Folk etymology is defined as a change in word or phrase form resulting from an incorrect popular idea of its origin or meaning. Irregular phonetic-semantic shifts are produced by inter-language borrowing or by intra-language passage from one period to another. These shifts are more common in periods when there are no, or few, normative factors…
Semantics-Based Interoperability Framework for the Geosciences
NASA Astrophysics Data System (ADS)
Sinha, A.; Malik, Z.; Raskin, R.; Barnes, C.; Fox, P.; McGuinness, D.; Lin, K.
2008-12-01
Interoperability between heterogeneous data, tools and services is required to transform data to knowledge. To meet geoscience-oriented societal challenges such as the forcing of climate change induced by volcanic eruptions, we suggest the need to develop semantic interoperability for data, services, and processes. Because such scientific endeavors require the integration of multiple databases associated with global enterprises, implicit semantics-based integration is impossible. Instead, explicit semantics are needed to facilitate interoperability and integration. Although different types of integration models are available (syntactic or semantic), we suggest that semantic interoperability is likely to be the most successful pathway. Clearly, the geoscience community would benefit from utilization of existing XML-based data models, such as GeoSciML, WaterML, etc., to rapidly advance semantic interoperability and integration. We recognize that such integration will require a "meanings-based search, reasoning and information brokering", which will be facilitated through inter-ontology relationships (ontologies defined for each discipline). We suggest that markup languages (MLs) and ontologies can be seen as "data integration facilitators" working at different abstraction levels. Therefore, we propose to use an ontology-based data registration and discovery approach to complement markup languages through semantic data enrichment. Ontologies allow the use of formal and descriptive logic statements, which permits expressive query capabilities for data integration through reasoning. We have developed domain ontologies (EPONT) to capture the concepts behind data. EPONT ontologies are associated with existing ontologies such as SUMO, DOLCE and SWEET. Although significant efforts have gone into developing data (object) ontologies, we advance the idea of developing semantic frameworks for additional ontologies that deal with processes and services.
This evolutionary step will facilitate the integrative capabilities of scientists as we examine the relationships between data and external factors such as processes that may influence our understanding of "why" certain events happen. We emphasize the need to go from analysis of data to concepts related to scientific principles of thermodynamics, kinetics, heat flow, mass transfer, etc. Towards meeting these objectives, we report on a pair of related service engines: DIA (Discovery, integration and analysis), and SEDRE (Semantically-Enabled Data Registration Engine) that utilize ontologies for semantic interoperability and integration.
Workspaces in the Semantic Web
NASA Technical Reports Server (NTRS)
Wolfe, Shawn R.; Keller, RIchard M.
2005-01-01
Due to the recency and relatively limited adoption of Semantic Web technologies, practical issues related to technology scaling have received less attention than foundational issues. Nonetheless, these issues must be addressed if the Semantic Web is to realize its full potential. In particular, we concentrate on the lack of scoping methods that reduce the size of semantic information spaces so they are more efficient to work with and more relevant to an agent's needs. We provide some intuition to motivate the need for such reduced information spaces, called workspaces, give a formal definition, and suggest possible methods of deriving them.
Learning Semantic Tags from Big Data for Clinical Text Representation.
Li, Yanpeng; Liu, Hongfang
2015-01-01
One of the biggest challenges in clinical text mining is representing medical terminologies and n-gram terms in sparse medical reports using either supervised or unsupervised methods. Addressing this issue, we propose a novel method for word and n-gram representation at the semantic level. We first represent each word by its distance to a set of reference features calculated by a reference distance estimator (RDE) learned from labeled and unlabeled data, and then generate new features using simple techniques of discretization, random sampling and merging. The new features are a set of binary rules that can be interpreted as semantic tags derived from words and n-grams. We show that the new features significantly outperform classical bag-of-words and n-grams in the task of heart disease risk factor extraction in the i2b2 2014 challenge. It is promising to see that semantic tags can replace the original text entirely with even better prediction performance, as well as derive new rules beyond the lexical level.
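The step from distances to binary semantic tags can be sketched as follows. This is a hedged illustration only: it substitutes cosine distance to arbitrary reference vectors for the learned RDE distances and uses equal-width binning as one possible discretization; the random-sampling and merging steps are omitted.

```python
import numpy as np

def semantic_tags(word_vectors, reference_vectors, n_bins=4):
    """Represent each word by its distances to a set of reference features,
    then discretize each distance into one of n_bins bins and one-hot
    encode the bin index, yielding binary indicator features ("tags").
    The RDE learning step itself is not reproduced here."""
    # cosine distance of every word to every reference feature
    w = word_vectors / np.linalg.norm(word_vectors, axis=1, keepdims=True)
    r = reference_vectors / np.linalg.norm(reference_vectors, axis=1, keepdims=True)
    dist = 1.0 - w @ r.T                        # shape: (n_words, n_refs)
    # equal-width bins over the observed distance range
    edges = np.linspace(dist.min(), dist.max(), n_bins + 1)[1:-1]
    bins = np.digitize(dist, edges)             # bin index in 0..n_bins-1
    tags = np.eye(n_bins, dtype=int)[bins]      # (n_words, n_refs, n_bins)
    return tags.reshape(len(word_vectors), -1)  # flatten to binary feature row
```

Each word then contributes exactly one active tag per reference feature, so the resulting rows are sparse binary vectors usable by any standard classifier.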
Building a drug ontology based on RxNorm and other sources.
Hanna, Josh; Joseph, Eric; Brochhausen, Mathias; Hogan, William R
2013-12-18
We built the Drug Ontology (DrOn) because we required correct and consistent drug information in a format for use in semantic web applications, and no existing resource met this requirement or could be altered to meet it. One of the obstacles we faced when creating DrOn was the difficulty in reusing drug information from existing sources. The primary external source we have used at this stage in DrOn's development is RxNorm, a standard drug terminology curated by the National Library of Medicine (NLM). To build DrOn, we (1) mined data from historical releases of RxNorm and (2) mapped many RxNorm entities to Chemical Entities of Biological Interest (ChEBI) classes, pulling relevant information from ChEBI while doing so. We built DrOn in a modular fashion to facilitate simpler extension and development of the ontology and to allow reasoning and construction to scale. Classes derived from each source are serialized in separate modules. For example, the classes in DrOn that are programmatically derived from RxNorm are stored in a separate module and subsumed by classes in a manually-curated, realist, upper-level module of DrOn with terms such as 'clinical drug role', 'tablet', 'capsule', etc. DrOn is a modular, extensible ontology of drug products, their ingredients, and their biological activity that avoids many of the fundamental flaws found in other, similar artifacts and meets the requirements of our comparative-effectiveness research use-case.
Detecting causality from online psychiatric texts using inter-sentential language patterns
2012-01-01
Background Online psychiatric texts are natural language texts expressing depressive problems, published by Internet users via community-based web services such as web forums, message boards and blogs. Understanding the cause-effect relations embedded in these psychiatric texts can provide insight into the authors’ problems, thus increasing the effectiveness of online psychiatric services. Methods Previous studies have proposed the use of word pairs extracted from a set of sentence pairs to identify cause-effect relations between sentences. A word pair is made up of two words, with one coming from the cause text span and the other from the effect text span. Analysis of the relationship between these words can be used to capture individual word associations between cause and effect sentences. For instance, (broke up, life) and (boyfriend, meaningless) are two word pairs extracted from the sentence pair: “I broke up with my boyfriend. Life is now meaningless to me”. The major limitation of word pairs is that individual words in sentences usually cannot reflect the exact meaning of the cause and effect events, and thus may produce semantically incomplete word pairs, as the previous examples show. Therefore, this study proposes the use of inter-sentential language patterns such as <broke up, boyfriend>,
A development framework for semantically interoperable health information systems.
Lopez, Diego M; Blobel, Bernd G M E
2009-02-01
Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and their intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system, to evaluate and harmonize state-of-the-art architecture development approaches and standards for health information systems, as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and the HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported by formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.
The semantic pathfinder: using an authoring metaphor for generic multimedia indexing.
Snoek, Cees G M; Worring, Marcel; Geusebroek, Jan-Mark; Koelma, Dennis C; Seinstra, Frank J; Smeulders, Arnold W M
2006-10-01
This paper presents the semantic pathfinder architecture for generic indexing of multimedia archives. The semantic pathfinder extracts semantic concepts from video by exploring different paths through three consecutive analysis steps, which we derive from the observation that produced video is the result of an authoring-driven process. We exploit this authoring metaphor for machine-driven understanding. The pathfinder starts with the content analysis step. In this analysis step, we follow a data-driven approach of indexing semantics. The style analysis step is the second analysis step. Here, we tackle the indexing problem by viewing a video from the perspective of production. Finally, in the context analysis step, we view semantics in context. The virtue of the semantic pathfinder is its ability to learn the best path of analysis steps on a per-concept basis. To show the generality of this novel indexing approach, we develop detectors for a lexicon of 32 concepts and we evaluate the semantic pathfinder against the 2004 NIST TRECVID video retrieval benchmark, using a news archive of 64 hours. Top ranking performance in the semantic concept detection task indicates the merit of the semantic pathfinder for generic indexing of multimedia archives.
Deep-learning derived features for lung nodule classification with limited datasets
NASA Astrophysics Data System (ADS)
Thammasorn, P.; Wu, W.; Pierce, L. A.; Pipavath, S. N.; Lampe, P. D.; Houghton, A. M.; Haynor, D. R.; Chaovalitwongse, W. A.; Kinahan, P. E.
2018-02-01
Only a few percent of the indeterminate nodules found in lung CT images are cancer. However, enabling earlier diagnosis is important to avoid invasive procedures or long-term surveillance for benign nodules. We are evaluating a classification framework using radiomics features derived with a machine learning approach from a small data set of indeterminate CT lung nodule images. We used a retrospective analysis of 194 cases with pulmonary nodules in CT images, with or without contrast enhancement, from lung cancer screening clinics. The nodules were contoured by a radiologist and texture features of the lesion were calculated. In addition, semantic features describing shape were categorized. We also explored a Multiband network, a feature derivation path that uses a modified convolutional neural network (CNN) with a Triplet Network. This was trained to create discriminative feature representations useful for variable-sized nodule classification. The diagnostic accuracy was evaluated for multiple machine learning algorithms using texture, shape, and CNN features. In the CT contrast-enhanced group, the texture or semantic shape features yielded an overall diagnostic accuracy of 80%. Use of a standard deep learning network in the framework for feature derivation yielded features that substantially underperformed compared to texture and/or semantic features. However, the proposed Multiband approach of feature derivation produced results similar in diagnostic accuracy to the texture and semantic features. While the Multiband feature derivation approach did not outperform the texture and/or semantic features, its equivalent performance indicates promise for future improvements to increase diagnostic accuracy. Importantly, the Multiband approach adapts readily to different-sized lesions without interpolation, and performed well with a relatively small amount of training data.
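A Triplet Network such as the one mentioned above is trained with a triplet objective. As a generic illustration (not the authors' exact architecture, distance, or margin settings), the standard hinge-form triplet loss looks like:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-form triplet loss: encourages the anchor-positive distance
    to be at least `margin` smaller than the anchor-negative distance,
    which pushes same-class embeddings together and different-class
    embeddings apart."""
    d_pos = np.sum((np.asarray(anchor) - np.asarray(positive)) ** 2, axis=-1)
    d_neg = np.sum((np.asarray(anchor) - np.asarray(negative)) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)
```

When the negative is already much farther from the anchor than the positive, the loss is zero and the triplet contributes no gradient; hard triplets, where the negative is too close, produce a positive loss.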
Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring
Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu
2013-01-01
Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer from a significant number of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when multiple data are corrupted, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that the integration of cISID into fitting-based methods significantly improves the retrospective detection performance at post-processing analysis. The performance of the cISID criterion, if used alone, was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of the corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, could provide a robust environment for routine DTI studies. PMID:24204551
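The exact cISID formula is not given in the abstract; the basic idea of an inter-slice intensity-discontinuity score, before the correction that gives the criterion its name, might be sketched as:

```python
import numpy as np

def inter_slice_discontinuity(volume):
    """Score each interior slice by how far its mean intensity departs
    from the average of its two neighbours; motion-corrupted slices stand
    out as large discontinuities. (Illustrative only: the published cISID
    criterion applies an additional correction not reproduced here.)"""
    means = volume.reshape(volume.shape[0], -1).mean(axis=1)  # per-slice mean
    neighbour_avg = (means[:-2] + means[2:]) / 2.0
    return np.abs(means[1:-1] - neighbour_avg)  # one score per interior slice

vol = np.ones((5, 4, 4))
vol[2] *= 3.0   # simulate an intensity-corrupted middle slice
scores = inter_slice_discontinuity(vol)
```

In this toy volume the corrupted middle slice receives the largest discontinuity score, which is the property a threshold-based real-time monitor would exploit.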
What is in a contour map? A region-based logical formalization of contour semantics
Usery, E. Lynn; Hahmann, Torsten
2015-01-01
This paper analyses and formalizes contour semantics in a first-order logic ontology that forms the basis for enabling computational common sense reasoning about contour information. The elicited contour semantics comprises four key concepts – contour regions, contour lines, contour values, and contour sets – and their subclasses and associated relations, which are grounded in an existing qualitative spatial ontology. All concepts and relations are illustrated and motivated by physical-geographic features identifiable on topographic contour maps. The encoding of the semantics of contour concepts in first-order logic and a derived conceptual model as basis for an OWL ontology lay the foundation for fully automated, semantically-aware qualitative and quantitative reasoning about contours.
Comparative analysis of semantic localization accuracies between adult and pediatric DICOM CT images
NASA Astrophysics Data System (ADS)
Robertson, Duncan; Pathak, Sayan D.; Criminisi, Antonio; White, Steve; Haynor, David; Chen, Oliver; Siddiqui, Khan
2012-02-01
Existing literature describes a variety of techniques for semantic annotation of DICOM CT images, i.e. the automatic detection and localization of anatomical structures. Semantic annotation facilitates enhanced image navigation, linkage of DICOM image content and non-image clinical data, content-based image retrieval, and image registration. A key challenge for semantic annotation algorithms is inter-patient variability. However, while the algorithms described in published literature have been shown to cope adequately with the variability in test sets comprising adult CT scans, the problem presented by the even greater variability in pediatric anatomy has received very little attention. Most existing semantic annotation algorithms can only be extended to work on scans of both adult and pediatric patients by adapting parameters heuristically in light of patient size. In contrast, our approach, which uses random regression forests ('RRF'), learns an implicit model of scale variation automatically using training data. In consequence, anatomical structures can be localized accurately in both adult and pediatric CT studies without the need for parameter adaptation or additional information about patient scale. We show how the RRF algorithm is able to learn scale invariance from a combined training set containing a mixture of pediatric and adult scans. Resulting localization accuracy for both adult and pediatric data remains comparable with that obtained using RRFs trained and tested using only adult data.
EliXR-TIME: A Temporal Knowledge Representation for Clinical Research Eligibility Criteria.
Boland, Mary Regina; Tu, Samson W; Carini, Simona; Sim, Ida; Weng, Chunhua
2012-01-01
Effective clinical text processing requires accurate extraction and representation of temporal expressions. Although multiple temporal information extraction models have been developed, a similar need for extracting temporal expressions in eligibility criteria (e.g., for eligibility determination) remains. We identified the temporal knowledge representation requirements of eligibility criteria by reviewing 100 temporal criteria. We developed EliXR-TIME, a frame-based representation designed to support semantic annotation of temporal expressions in eligibility criteria by reusing applicable classes from well-known clinical temporal knowledge representations. We used EliXR-TIME to analyze a training set of 50 new temporal eligibility criteria. We evaluated EliXR-TIME using an additional random sample of 20 eligibility criteria with temporal expressions that have no overlap with the training data, yielding 92.7% (76/82) inter-coder agreement on sentence chunking and 72% (72/100) agreement on semantic annotation. We conclude that this knowledge representation can facilitate semantic annotation of the temporal expressions in eligibility criteria.
Brunetti, Enzo; Maldonado, Pedro E; Aboitiz, Francisco
2013-01-01
During discourse monitoring, detecting the relevance of incoming lexical information can be critical for incorporating it into updated mental representations in memory. Because in these situations the relevance of lexical information is defined by abstract rules maintained in memory, a central question is how an abstract level of knowledge kept in mind mediates the detection of lower-level semantic information. In the present study, we propose that neuronal oscillations participate in the detection of relevant lexical information, based on "kept in mind" rules derived from more abstract semantic information. We tested our hypothesis using an experimental paradigm that restricted the detection of relevance to inferences based on explicit information, thus controlling for ambiguities derived from implicit aspects. We used a categorization task in which semantic relevance was defined in advance by the congruency between a category kept in mind (abstract knowledge) and the lexical-semantic information presented. Our results show that during the detection of relevant lexical information, phase synchronization of neuronal oscillations selectively increases in the delta and theta frequency bands during the interval of semantic analysis. These increments occurred irrespective of the semantic category maintained in memory, had a temporal profile specific to each subject, and were mainly induced, as they had no effect on the evoked mean global field power. The recruitment of an increased number of electrode pairs was also a robust observation during the detection of semantically contingent words. These results are consistent with the notion that the detection of relevant lexical information based on a particular semantic rule could be mediated by increasing global phase synchronization of neuronal oscillations, which may contribute to the recruitment of an extended number of cortical regions.
Measuring effectiveness of semantic cues in degraded English sentences in non-native listeners.
Shi, Lu-Feng
2014-01-01
This study employed Boothroyd and Nittrouer's (1988) k to directly quantify how effectively native versus non-native listeners use semantic cues. Listeners were presented speech-perception-in-noise sentences processed at three levels of concurrent multi-talker babble and reverberation. For each condition, 50 sentences with multiple semantic cues and 50 with minimal semantic cues were randomly presented. Listeners verbally reported and wrote down the target words. The metric, k, was derived from percent-correct scores for sentences with and without semantic cues. Ten native and 33 non-native listeners participated. The presence of semantic cues increased recognition benefit by over 250% for natives, but access to semantics remained limited for non-native listeners (90-135%). The value of k was comparable across conditions for native listeners but level-dependent for non-natives. For non-natives, k was significantly different from 1 in all conditions, suggesting that semantic cues, though reduced in importance in difficult conditions, were still helpful. Non-natives as a group were not as effective as natives in using semantics to facilitate English sentence recognition. Poor listening conditions were particularly adverse to the use of semantics in non-natives, who may rely on clear acoustic-phonetic cues before benefitting from semantic cues when recognizing connected speech.
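The k metric referenced above relates scores with and without context through p_high = 1 − (1 − p_low)^k, so k = 1 means no benefit from semantic cues and larger k means more benefit. A minimal sketch with invented scores:

```python
# Boothroyd & Nittrouer's (1988) k factor. The example proportions below
# are hypothetical, chosen only to illustrate the native/non-native contrast.
import math

def k_factor(p_high, p_low):
    """Solve p_high = 1 - (1 - p_low)**k for k.

    p_high: proportion correct with rich semantic context
    p_low:  proportion correct with minimal context
    """
    return math.log(1.0 - p_high) / math.log(1.0 - p_low)

native = k_factor(0.90, 0.50)      # hypothetical native-listener scores
nonnative = k_factor(0.60, 0.40)   # hypothetical non-native scores
print(round(native, 2), round(nonnative, 2))  # → 3.32 1.79
```

Note the metric is undefined at p_high = 1.0 or p_low in {0, 1}, so ceiling and floor scores must be handled before computing k.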
Semi-automatic semantic annotation of PubMed Queries: a study on quality, efficiency, satisfaction
Névéol, Aurélie; Islamaj-Doğan, Rezarta; Lu, Zhiyong
2010-01-01
Information processing algorithms require significant amounts of annotated data for training and testing. The availability of such data is often hindered by the complexity and high cost of production. In this paper, we investigate the benefits of a state-of-the-art tool to help with the semantic annotation of a large set of biomedical information queries. Seven annotators were recruited to annotate a set of 10,000 PubMed® queries with 16 biomedical and bibliographic categories. About half of the queries were annotated from scratch, while the other half were automatically pre-annotated and manually corrected. The impact of the automatic pre-annotations was assessed on several aspects of the task: time, number of actions, annotator satisfaction, inter-annotator agreement, quality and number of the resulting annotations. The analysis of annotation results showed that the number of required hand annotations is 28.9% less when using pre-annotated results from automatic tools. As a result, the overall annotation time was substantially lower when pre-annotations were used, while inter-annotator agreement was significantly higher. In addition, there was no statistically significant difference in the semantic distribution or number of annotations produced when pre-annotations were used. The annotated query corpus is freely available to the research community. This study shows that automatic pre-annotations are found helpful by most annotators. Our experience suggests using an automatic tool to assist large-scale manual annotation projects. This helps speed-up the annotation time and improve annotation consistency while maintaining high quality of the final annotations. PMID:21094696
Age-Related Brain Activation Changes during Rule Repetition in Word-Matching.
Methqal, Ikram; Pinsard, Basile; Amiri, Mahnoush; Wilson, Maximiliano A; Monchi, Oury; Provost, Jean-Sebastien; Joanette, Yves
2017-01-01
Objective: The purpose of this study was to explore the age-related brain activation changes during a word-matching semantic-category-based task, which required either repeating or changing a semantic rule to be applied. In order to do so, a word-semantic rule-based task was adapted from the Wisconsin Card Sorting Test, involving the repeated feedback-driven selection of given pairs of words based on semantic-category criteria. Method: Forty healthy adults (20 younger and 20 older) performed a word-matching task while undergoing an fMRI scan, in which they were required to pair a target word with another word from a group of three words. The required pairing was based on three word-pair semantic rules corresponding to different levels of semantic control demands: functional relatedness, moderately typical relatedness (both considered low control demands), and atypical relatedness (high control demands). The sorting period consisted of continuous execution of the same sorting rule, and trial-by-trial feedback was given. Results: Behavioral performance revealed increases in response times and decreases in correct responses according to the level of semantic control demands (functional vs. typical vs. atypical) for both age groups (younger and older), reflecting graded differences in the repeated application of a given semantic rule. Neuroimaging findings showed two main results: (1) greater task-related activation changes for the repeated application of atypical rules relative to typical and functional rules, and (2) changes (older > younger) in the inferior prefrontal regions for functional rules and more extensive, bilateral activations for typical and atypical rules. In the comparison between semantic rules, task-related activation differences were observed only for functional > typical (e.g., inferior parietal and temporal regions bilaterally) and atypical > typical (e.g., prefrontal, inferior parietal, posterior temporal, and subcortical regions). Conclusion: These results suggest that healthy cognitive aging relies on adaptive changes in inferior prefrontal resources involved in the repetitive execution of semantic rules, reflecting graded differences in support of task demands.
ERIC Educational Resources Information Center
Roberts, Felicia; Margutti, Piera; Takano, Shoji
2011-01-01
The fact that people with minimal linguistic skill can manage in unfamiliar or reduced linguistic environments suggests that there are universal mechanisms of meaning construction that operate at a level well beyond the particular structure or semantics of any one language. The authors examine this possibility in the domain of discourse by…
Random local temporal structure of category fluency responses.
Meyer, David J; Messer, Jason; Singh, Tanya; Thomas, Peter J; Woyczynski, Wojbor A; Kaye, Jeffrey; Lerner, Alan J
2012-04-01
The Category Fluency Test (CFT) provides a sensitive measurement of cognitive capabilities in humans related to retrieval from semantic memory. In particular, it is widely used to assess the progress of cognitive impairment in patients with dementia. Previous research shows that, to a first approximation, the intensity of tested individuals' responses within a standard 60-s test period decays exponentially with time, with faster decay rates for more cognitively impaired patients. Such a decay rate can then be viewed as a global (macro) diagnostic parameter of each test. In the present paper we focus on the statistical properties of the properly de-trended time intervals between consecutive responses (inter-call times) in the Category Fluency Test. In a sense, those properties reflect the local (micro) structure of the response generation process. We find that a good approximation for the distribution of the de-trended inter-call times is provided by the Weibull distribution, a probability distribution that arises naturally in this context as the distribution of a minimum of independent random quantities and is a standard tool in industrial reliability theory. This insight leads us to a new interpretation of the concept of "navigating a semantic space" via patient responses.
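The de-trend-then-fit idea can be sketched as follows: divide out the exponential slowing, then read the Weibull shape parameter off the slope of a log-log plot of the empirical survival function. This is an illustrative simulation, not the authors' estimation procedure.

```python
# Simulate exponentially slowing responses, de-trend the inter-call times,
# and recover a Weibull shape parameter from the empirical survival function.
import numpy as np

rng = np.random.default_rng(1)

# Inter-call times drawn from Weibull(shape=1.3), stretched by an exponential
# trend to mimic the decaying response intensity over the 60-s test.
n, shape, decay = 2000, 1.3, 0.02
base = rng.weibull(shape, n)
trend = np.exp(decay * np.arange(n))
observed = base * trend

# De-trend by dividing out the (here, known) exponential trend.
detrended = observed / trend

# Weibull plot: ln(-ln S(t)) vs ln(t) is linear with slope equal to the shape.
t = np.sort(detrended)
surv = 1.0 - (np.arange(1, n + 1) - 0.5) / n
slope, _ = np.polyfit(np.log(t), np.log(-np.log(surv)), 1)
print(f"estimated Weibull shape: {slope:.2f}")  # close to 1.3
```

In practice the trend would itself be fitted (the "macro" decay-rate parameter), and the Weibull fit applied to the residual "micro" structure.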
A polygon soup representation for free viewpoint video
NASA Astrophysics Data System (ADS)
Colleu, T.; Pateux, S.; Morin, L.; Labit, C.
2010-02-01
This paper presents a polygon soup representation for multiview data. Starting from a sequence of multi-view video plus depth (MVD) data, the proposed representation addresses, in a unified manner, several issues: compactness, compression, and intermediate view synthesis. The representation is built in two steps. First, a set of 3D quads is extracted using a quadtree decomposition of the depth maps. Second, a selective elimination of the quads is performed in order to reduce inter-view redundancies and thus provide a compact representation. Moreover, the proposed methodology for extracting the representation reduces ghosting artifacts. Finally, an adapted compression technique is proposed that limits coding artifacts. The results presented on two real sequences show that the proposed representation provides a good trade-off between rendering quality and data compactness.
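The first step, extracting quads via a quadtree decomposition of a depth map, can be caricatured in a few lines. The homogeneity test used here (near-constant depth per block) is a simplifying assumption; the actual method handles depth discontinuities and quad geometry more carefully.

```python
# Toy quadtree decomposition of a depth map into near-constant-depth quads.
import numpy as np

def quadtree(depth, x, y, size, tol, out):
    block = depth[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= tol:
        out.append((x, y, size, float(block.mean())))  # one 3D quad
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(depth, x + dx, y + dy, h, tol, out)

depth = np.zeros((8, 8))
depth[4:, 4:] = 5.0          # a foreground object in one quadrant
quads = []
quadtree(depth, 0, 0, 8, tol=0.1, out=quads)
print(len(quads))            # → 4: three flat quadrants plus the object
```

Flat regions collapse into a few large quads while depth edges force small ones, which is where the compactness of the representation comes from.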
Koppehele-Gossel, Judith; Schnuerch, Robert; Gibbons, Henning
2018-06-06
This study replicates and extends the findings of Koppehele-Gossel, Schnuerch, and Gibbons (2016) of a posterior semantic asymmetry (PSA) in event-related brain potentials (ERPs), which closely tracks the time course and degree of semantic activation from single visual words. This negativity peaked 300 ms after word onset, was derived by subtracting right- from left-side activity, and was larger in a semantic task compared to two non-semantic control tasks. The validity of the PSA in reflecting the effort to activate word meaning was again attested by a negative correlation between the meaning-specific PSA increase and verbal intelligence, even after controlling for nonverbal intelligence. Extending prior work, current source density (CSD) transformation was used. CSD results were consistent with a left temporo-parietal cortical origin of the PSA. Moreover, no PSA was found for pictorial material, suggesting that the component reflects early semantic processing specific to verbal stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
A graph-based semantic similarity measure for the gene ontology.
Alvarez, Marco A; Yan, Changhui
2011-12-01
Existing methods for calculating semantic similarities between pairs of Gene Ontology (GO) terms and gene products often rely on external databases such as the Gene Ontology Annotation (GOA) database, which annotates gene products using GO terms. This dependency leads to some limitations in real applications. Here, we present a semantic similarity algorithm (SSA) that relies exclusively on the GO. When calculating the semantic similarity between a pair of input GO terms, SSA takes into account the shortest path between them, the depth of their nearest common ancestor, and a novel similarity score calculated between the definitions of the involved GO terms. In our work, we use SSA to calculate semantic similarities between pairs of proteins by combining pairwise semantic similarities between the GO terms that annotate the involved proteins. The reliability of SSA was evaluated by comparing the resulting semantic similarities between proteins with the functional similarities between proteins derived from expert annotations or sequence similarity. Comparisons with existing state-of-the-art methods showed that SSA is highly competitive with the other methods. SSA provides a reliable measure of semantic similarity that is independent of external databases of functional-annotation observations.
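Two of SSA's ingredients, the shortest path between terms and the depth of their nearest common ancestor, can be computed directly from the ontology graph. The toy DAG and the Wu-Palmer-style combination below are illustrative stand-ins; SSA's actual scoring differs and additionally incorporates definition-text similarity.

```python
# Graph-only ingredients of a GO-term similarity: path length and depth of
# the nearest common ancestor, on an invented toy "is_a" DAG.
from collections import deque

parents = {                      # child -> parents
    "root": [],
    "process": ["root"], "function": ["root"],
    "metabolism": ["process"], "transport": ["process"],
    "glycolysis": ["metabolism"],
}

def ancestors_with_depth(term):
    # BFS upward: term/ancestor -> number of hops from `term`
    seen = {term: 0}
    queue = deque([term])
    while queue:
        node = queue.popleft()
        for p in parents[node]:
            if p not in seen:
                seen[p] = seen[node] + 1
                queue.append(p)
    return seen

def depth(term):                 # hops separating `term` from the root
    return ancestors_with_depth(term)["root"]

def similarity(a, b):
    ups_a, ups_b = ancestors_with_depth(a), ancestors_with_depth(b)
    nca = max(set(ups_a) & set(ups_b), key=depth)   # deepest shared ancestor
    path = ups_a[nca] + ups_b[nca]                  # shortest path via the DAG
    return 2 * depth(nca) / (2 * depth(nca) + path)

print(round(similarity("glycolysis", "transport"), 2))  # → 0.4
```

The score rises with a deeper common ancestor and falls with a longer connecting path, which is the intuition behind combining the two graph features.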
Integrating Experiential and Distributional Data to Learn Semantic Representations
ERIC Educational Resources Information Center
Andrews, Mark; Vigliocco, Gabriella; Vinson, David
2009-01-01
The authors identify 2 major types of statistical data from which semantic representations can be learned. These are denoted as "experiential data" and "distributional data". Experiential data are derived by way of experience with the physical world and comprise the sensory-motor data obtained through sense receptors. Distributional data, by…
Ji, Xiaonan; Ritter, Alan; Yen, Po-Yin
2017-05-01
Systematic Reviews (SRs) are utilized to summarize evidence from high quality studies and are considered the preferred source of evidence-based practice (EBP). However, conducting SRs can be time and labor intensive due to the high cost of article screening. In previous studies, we demonstrated utilizing established (lexical) article relationships to facilitate the identification of relevant articles in an efficient and effective manner. Here we propose to enhance article relationships with background semantic knowledge derived from Unified Medical Language System (UMLS) concepts and ontologies. We developed a pipelined semantic concepts representation process to represent articles from an SR into an optimized and enriched semantic space of UMLS concepts. Throughout the process, we leveraged concepts and concept relations encoded in biomedical ontologies (SNOMED-CT and MeSH) within the UMLS framework to prompt concept features of each article. Article relationships (similarities) were established and represented as a semantic article network, which was readily applied to assist with the article screening process. We incorporated the concept of active learning to simulate an interactive article recommendation process, and evaluated the performance on 15 completed SRs. We used work saved over sampling at 95% recall (WSS95) as the performance measure. We compared the WSS95 performance of our ontology-based semantic approach to existing lexical feature approaches and corpus-based semantic approaches, and found that we had better WSS95 in most SRs. We also had the highest average WSS95 of 43.81% and the highest total WSS95 of 657.18%. We demonstrated using ontology-based semantics to facilitate the identification of relevant articles for SRs. Effective concepts and concept relations derived from UMLS ontologies can be utilized to establish article semantic relationships. 
Our approach achieved promising performance and can be readily applied to any SR topic in the biomedical domain. Copyright © 2017 Elsevier Inc. All rights reserved.
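WSS95, the metric used above, rewards a ranking that reaches 95% recall of the relevant articles after screening only a fraction of the candidates; a random ordering scores zero. A minimal sketch with hypothetical numbers:

```python
# Work saved over sampling at a given recall level (WSS). The counts below
# are hypothetical, not taken from the study's 15 systematic reviews.
def wss(n_total, n_screened_at_recall, recall=0.95):
    """Fraction of articles NOT screened, minus the expected miss rate
    (1 - recall) that random sampling would incur at the same recall."""
    return (n_total - n_screened_at_recall) / n_total - (1.0 - recall)

# Hypothetical SR: 2000 candidate articles; the semantic ranking reaches
# 95% recall of the relevant ones after screening the top 1000.
print(f"{wss(2000, 1000):.2%}")  # → 45.00%
```

A reported average WSS95 of 43.81% therefore means that, on average, roughly 44% of the screening workload was saved relative to random sampling.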
Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke
2011-10-24
The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. 
Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies.
Data Science for Public Policy: Of the people, for the people, by the people 2.0?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R.; Shankar, Mallikarjun
In this paper, we explore the role of data science in the public policy lifecycle. We posit policy documents (bills, acts, regulations, and directives) as forms of social objects and present a methodology for understanding the interactions between prior context in professional and personal social networks and the release of a given public policy document. We employ natural language processing tools along with recent advances in semantic reasoning to formulate document-level proximity metrics, which we use to predict the relevance (and impact) of policy artifacts. These metrics serve as a measure of excitation between people and public policy initiatives.
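A document-level proximity metric of the general kind described can be as simple as a bag-of-words cosine between two policy texts; the authors' metrics additionally draw on semantic reasoning, which this hedged sketch does not reproduce.

```python
# Minimal lexical proximity between two (invented) policy text snippets.
from collections import Counter
import math

def cosine(doc_a, doc_b):
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

bill = "clean energy tax credit for solar deployment"
memo = "tax credit proposals for solar energy deployment"
print(round(cosine(bill, memo), 2))  # → 0.86
```

In a real pipeline the token counts would be replaced by semantically enriched features (entities, concepts) before computing proximity.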
A filter-mediated communication model for design collaboration in building construction.
Lee, Jaewook; Jeong, Yongwook; Oh, Minho; Hong, Seung Wan
2014-01-01
Multidisciplinary collaboration is an important aspect of modern engineering activities, arising from the growing complexity of artifacts whose design and construction require knowledge and skills that exceed the capacities of any one professional. However, current collaboration in the architecture, engineering, and construction industries often fails due to a lack of shared understanding between different participants and the limitations of their supporting tools. To achieve a high level of shared understanding, this study proposes a filter-mediated communication model. In the proposed model, participants retain their own data in the form most appropriate for their needs, with domain-specific filters that transform neutral representations into semantically rich ones, as needed by the participants. Conversely, the filters can translate semantically rich, domain-specific data into a neutral representation that can be accessed by other domain-specific filters. To validate the feasibility of the proposed model, we computationally implement the filter mechanism and apply it to a hypothetical test case. The results confirm that the filter mechanism can let participants know ahead of time what the implications of their proposed actions will be, as seen from other participants' points of view.
Gelman, Susan A.
2013-01-01
Psychological essentialism is an intuitive folk belief positing that certain categories have a non-obvious inner “essence” that gives rise to observable features. Although this belief most commonly characterizes natural kind categories, I argue that psychological essentialism can also be extended in important ways to artifact concepts. Specifically, concepts of individual artifacts include the non-obvious feature of object history, which is evident when making judgments regarding authenticity and ownership. Classic examples include famous works of art (e.g., the Mona Lisa is authentic because of its provenance), but ordinary artifacts likewise receive value from their history (e.g., a worn and tattered blanket may have special value if it was one's childhood possession). Moreover, in some cases, object history may be thought to have causal effects on individual artifacts, much as an animal essence has causal effects. I review empirical support for these claims and consider the implications for both artifact concepts and essentialism. This perspective suggests that artifact concepts cannot be contained in a theoretical framework that focuses exclusively on similarity or even function. Furthermore, although there are significant differences between essentialism of natural kinds and essentialism of artifact individuals, the commonalities suggest that psychological essentialism may not derive from folk biology but instead may reflect more domain-general perspectives on the world. PMID:23976903
Real-time eye motion correction in phase-resolved OCT angiography with tracking SLO
Braaf, Boy; Vienola, Kari V.; Sheehy, Christy K.; Yang, Qiang; Vermeer, Koenraad A.; Tiruveedhula, Pavan; Arathorn, David W.; Roorda, Austin; de Boer, Johannes F.
2012-01-01
In phase-resolved OCT angiography blood flow is detected from phase changes in between A-scans that are obtained from the same location. In ophthalmology, this technique is vulnerable to eye motion. We address this problem by combining inter-B-scan phase-resolved OCT angiography with real-time eye tracking. A tracking scanning laser ophthalmoscope (TSLO) at 840 nm provided eye tracking functionality and was combined with a phase-stabilized optical frequency domain imaging (OFDI) system at 1040 nm. Real-time eye tracking corrected eye drift and prevented discontinuity artifacts from (micro)saccadic eye motion in OCT angiograms. This improved the OCT spot stability on the retina and consequently reduced the phase-noise, thereby enabling the detection of slower blood flows by extending the inter-B-scan time interval. In addition, eye tracking enabled the easy compounding of multiple data sets from the fovea of a healthy volunteer to create high-quality eye motion artifact-free angiograms. High-quality images are presented of two distinct layers of vasculature in the retina and the dense vasculature of the choroid. Additionally we present, for the first time, a phase-resolved OCT angiogram of the mesh-like network of the choriocapillaris containing typical pore openings. PMID:23304647
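The core computation, axial velocity from the phase change between repeated A-scans, is compact; extending the inter-B-scan interval maps a given velocity onto a larger, easier-to-detect phase change, which is why eye tracking (by permitting longer intervals) enables slower flows to be resolved. Parameter values below are illustrative assumptions rather than the system's exact specifications.

```python
# Phase-resolved flow: axial velocity from the phase change between two
# complex A-scans acquired at the same location, dt apart.
import numpy as np

wavelength = 1040e-9   # center wavelength [m], matching the OFDI source
n_tissue = 1.38        # assumed refractive index of tissue

def axial_velocity(a1, a2, dt):
    dphi = np.angle(a2 * np.conj(a1))          # phase change [rad]
    return wavelength * dphi / (4 * np.pi * n_tissue * dt)

# The same measured phase shift corresponds to a 10x slower flow when the
# inter-scan interval is 10x longer.
a1, a2 = np.exp(1j * 0.0), np.exp(1j * 0.3)
v_short = axial_velocity(a1, a2, dt=1e-4)
v_long = axial_velocity(a1, a2, dt=1e-3)
print(v_short / v_long)
```

The catch is that longer intervals also accumulate more motion-induced phase noise, which is exactly the problem the real-time eye tracking mitigates.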
Gudino, Natalia; Duan, Qi; de Zwart, Jacco A; Murphy-Boesch, Joe; Dodd, Stephen J; Merkle, Hellmut; van Gelderen, Peter; Duyn, Jeff H
2015-01-01
Purpose: We tested the feasibility of implementing parallel transmission (pTX) for high field MRI using a radiofrequency (RF) amplifier design to be located on or in the immediate vicinity of a RF transmit coil. Method: We designed a current-source switch-mode amplifier based on miniaturized, non-magnetic electronics. Optical RF carrier and envelope signals to control the amplifier were derived, through a custom-built interface, from the RF source accessible in the scanner control. Amplifier performance was tested by benchtop measurements as well as with imaging at 7 T (300 MHz) and 11.7 T (500 MHz). The ability to perform pTX was evaluated by measuring inter-channel coupling and phase adjustment in a 2-channel setup. Results: The amplifier delivered in excess of 44 W RF power and caused minimal interference with MRI. The interface derived accurate optical control signals with carrier frequencies ranging from 64 to 750 MHz. Decoupling better than 14 dB was obtained between 2 coil loops separated by only 1 cm. Application to MRI was demonstrated by acquiring artifact-free images at 7 T and 11.7 T. Conclusion: An optically controlled miniaturized RF amplifier for on-coil implementation at high field is demonstrated that should facilitate implementation of high-density pTX arrays. PMID:26256671
Application of Ontologies for Big Earth Data
NASA Astrophysics Data System (ADS)
Huang, T.; Chang, G.; Armstrong, E. M.; Boening, C.
2014-12-01
Connected data is smarter data! An Earth Science research infrastructure must do more than support temporal and geospatial discovery of satellite data. As the Earth Science data archives continue to expand across NASA data centers, the research communities are demanding smarter data services. A successful research infrastructure must be able to present researchers with the complete picture, that is, datasets with linked citations, related interdisciplinary data, imagery, current events, social media discussions, and scientific data tools relevant to the particular dataset. The popular Semantic Web for Earth and Environmental Terminology (SWEET) is a collection of ontologies and concepts designed to improve discovery and application of Earth Science data. The SWEET collection was initially developed to capture the relationships between keywords in the NASA Global Change Master Directory (GCMD). Over the years this popular collection has expanded to cover over 200 ontologies and 6000 concepts, enabling scalable classification of Earth system science and Space science concepts. This presentation discusses semantic web technologies as enabling technologies for data-intensive science. We will discuss the application of the SWEET ontologies as a critical component in knowledge-driven research infrastructure in several recent projects, including the DARPA Ontological System for Context Artifact and Resources (OSCAR), the 2013 NASA ACCESS Virtual Quality Screening Service (VQSS), and the 2013 NASA Sea Level Change Portal (SLCP). The presentation will also discuss the benefits of using semantic web technologies in developing research infrastructure for Big Earth Science Data in an attempt to "accommodate all domains and provide the necessary glue for information to be cross-linked, correlated, and discovered in a semantically rich manner."
[1] Savas Parastatidis: A platform for all that we know: creating a knowledge-driven research infrastructure. The Fourth Paradigm 2009: 165-172
Possible artifacts in inferring seismic properties from X-ray data
NASA Astrophysics Data System (ADS)
Bosak, A.; Krisch, M.; Chumakov, A.; Abrikosov, I. A.; Dubrovinsky, L.
2016-11-01
We consider the experimental and computational artifacts relevant for the extraction of aggregate elastic properties of polycrystalline materials with particular emphasis on the derivation of seismic velocities. We use the case of iron as an example, and show that the improper use of definitions and neglecting the crystalline anisotropy can result in unexpectedly large errors up to a few percent.
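The size of the effect is easy to reproduce for a cubic crystal: the Voigt and Reuss averages of the shear modulus bracket the aggregate value, and aggregate velocities computed from them differ by several percent, so an improper choice of average (or neglect of anisotropy) shifts inferred seismic velocities accordingly. The constants below are approximate textbook values for bcc iron at ambient conditions, not the paper's high-pressure data.

```python
# Voigt vs Reuss averaging of a cubic crystal's elastic constants, and the
# resulting spread in aggregate seismic velocities. Approximate values for
# bcc iron at ambient conditions (not the paper's dataset).
import math

c11, c12, c44 = 230e9, 135e9, 117e9   # elastic constants [Pa]
rho = 7874.0                           # density [kg/m^3]

k_bulk = (c11 + 2 * c12) / 3           # bulk modulus: same in both bounds (cubic)
g_voigt = (c11 - c12 + 3 * c44) / 5
g_reuss = 5 * (c11 - c12) * c44 / (4 * c44 + 3 * (c11 - c12))

def velocities(g):
    vp = math.sqrt((k_bulk + 4 * g / 3) / rho)   # compressional
    vs = math.sqrt(g / rho)                      # shear
    return vp, vs

vp_v, vs_v = velocities(g_voigt)
vp_r, vs_r = velocities(g_reuss)
print(f"vs spread between bounds: {100 * (vs_v - vs_r) / vs_r:.1f}%")
```

For this strongly anisotropic example the shear-velocity spread is near 10%; for weakly anisotropic aggregates it shrinks to the few-percent level discussed above.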
Early Decomposition in Visual Word Recognition: Dissociating Morphology, Form, and Meaning
ERIC Educational Resources Information Center
Marslen-Wilson, William D.; Bozic, Mirjana; Randall, Billi
2008-01-01
The role of morphological, semantic, and form-based factors in the early stages of visual word recognition was investigated across different SOAs in a masked priming paradigm, focusing on English derivational morphology. In a first set of experiments, stimulus pairs co-varying in morphological decomposability and in semantic and orthographic…
Semantic and Pragmatic Abilities Can Be Spared in Italian Children with SLI
ERIC Educational Resources Information Center
Arosio, Fabrizio; Foppolo, Francesca; Pagliarini, Elena; Perugini, Maria; Guasti, Maria Teresa
2017-01-01
Specific language impairment (SLI) is a heterogeneous disorder affecting various aspects of language. While most studies have investigated impairments in the domain of syntax and morphosyntax, little is known about compositional semantics and the process of deriving pragmatic meanings in SLI. We selected a group of sixteen monolingual…
SoFoCles: feature filtering for microarray classification based on gene ontology.
Papachristoudis, Georgios; Diplaris, Sotiris; Mitkas, Pericles A
2010-02-01
Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the "curse of dimensionality" by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.
Semantic retrieval during overt picture description: Left anterior temporal or the parietal lobe?
Geranmayeh, Fatemeh; Leech, Robert; Wise, Richard J S
2015-09-01
Retrieval of semantic representations is a central process during overt speech production. There is an increasing consensus that an amodal semantic 'hub' must exist that draws together modality-specific representations of concepts. Based on the distribution of atrophy and the behavioral deficit of patients with the semantic variant of fronto-temporal lobar degeneration, it has been proposed that this hub is localized within both anterior temporal lobes (ATL), and is functionally connected with verbal 'output' systems via the left ATL. An alternative view, dating from Geschwind's proposal in 1965, is that the angular gyrus (AG) is central to object-based semantic representations. In this fMRI study we examined the connectivity of the left ATL and parietal lobe (PL) with whole brain networks known to be activated during overt picture description. We decomposed each of these two brain volumes into 15 regions of interest (ROIs), using independent component analysis. A dual regression analysis was used to establish the connectivity of each ROI with whole brain-networks. An ROI within the left anterior superior temporal sulcus (antSTS) was functionally connected to other parts of the left ATL, including anterior ventromedial left temporal cortex (partially attenuated by signal loss due to susceptibility artifact), a large left dorsolateral prefrontal region (including 'classic' Broca's area), extensive bilateral sensory-motor cortices, and the length of both superior temporal gyri. The time-course of this functionally connected network was associated with picture description but not with non-semantic baseline tasks. This system has the distribution expected for the production of overt speech with appropriate semantic content, and the auditory monitoring of the overt speech output. In contrast, the only left PL ROI that showed connectivity with brain systems most strongly activated by the picture-description task, was in the superior parietal lobe (supPL). 
This region showed connectivity with predominantly posterior cortical regions required for the visual processing of the pictorial stimuli, with additional connectivity to the dorsal left AG and a small component of the left inferior frontal gyrus. None of the other PL ROIs that included part of the left AG were activated by speech alone. The best interpretation of these results is that the left antSTS connects the proposed semantic hub (specifically localized to ventral anterior temporal cortex based on clinical neuropsychological studies) to posterior frontal regions and sensory-motor cortices responsible for the overt production of speech. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Rule-based support system for multiple UMLS semantic type assignments
Geller, James; He, Zhe; Perl, Yehoshua; Morrey, C. Paul; Xu, Julia
2012-01-01
Background When new concepts are inserted into the UMLS, they are assigned one or several semantic types from the UMLS Semantic Network by the UMLS editors. However, not every combination of semantic types is permissible. It was observed that many concepts with rare combinations of semantic types have erroneous semantic type assignments or prohibited combinations of semantic types. The correction of such errors is resource-intensive. Objective We design a computational system to inform UMLS editors as to whether a specific combination of two, three, four, or five semantic types is permissible, prohibited, or questionable. Methods We identify a set of inclusion and exclusion instructions in the UMLS Semantic Network documentation and derive corresponding rule-categories, as well as rule-categories from the UMLS concept content. We then design an algorithm, adviseEditor, based on these rule-categories. The algorithm specifies how an editor should proceed when considering a tuple (pair, triple, quadruple, quintuple) of semantic types to be assigned to a concept. Results Eight rule-categories were identified. A Web-based system was developed to implement the adviseEditor algorithm, which returns, for an input combination of semantic types, whether it is permitted, prohibited, or (in a few cases) in need of more research. The numbers of semantic type pairs assigned to each rule-category are reported, and illustrative examples of each rule-category are given. Cases of semantic type assignments that contradict rules are listed, including recently introduced ones. Conclusion The adviseEditor system implements explicit and implicit knowledge available in the UMLS in a system that informs UMLS editors about the permissibility of a desired combination of semantic types. Using adviseEditor might help accelerate the work of the UMLS editors and prevent erroneous semantic type assignments. PMID:23041716
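The core adviseEditor behavior of mapping a combination of semantic types to one of three verdicts can be sketched as a simple lookup. The rule content below is invented for illustration and does not reproduce the actual UMLS rule-categories:

```python
# Illustrative rule tables; real rule-categories come from the UMLS
# Semantic Network documentation and concept content.
PERMITTED = {frozenset({"Disease or Syndrome", "Neoplastic Process"})}
PROHIBITED = {frozenset({"Organism", "Pharmacologic Substance"})}

def advise(semantic_types):
    """Return a verdict for a tuple of semantic types, mirroring the
    three possible outcomes of the adviseEditor algorithm."""
    key = frozenset(semantic_types)   # order of types does not matter
    if key in PERMITTED:
        return "permitted"
    if key in PROHIBITED:
        return "prohibited"
    return "requires more research"

print(advise(("Disease or Syndrome", "Neoplastic Process")))  # permitted
```

Using `frozenset` keys makes the lookup insensitive to the order in which the editor lists the semantic types.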
Abnormal dynamics of language in schizophrenia.
Stephane, Massoud; Kuskowski, Michael; Gundel, Jeanette
2014-05-30
Language can be conceptualized as a dynamic system that includes multiple interactive levels (sub-lexical, lexical, sentence, and discourse) and components (phonology, semantics, and syntax). In schizophrenia, abnormalities are observed at all language elements (levels and components), but the dynamics between these elements remain unclear. We hypothesize that the dynamics between language elements in schizophrenia are abnormal and explore how they are altered. We first investigated language elements with comparable procedures in patients and healthy controls. Second, using measures of reaction time, we performed multiple linear regression analyses to evaluate the inter-relationships among language elements and the effect of group on these relationships. Patients significantly differed from controls with respect to sub-lexical/lexical, lexical/sentence, and sentence/discourse regression coefficients. The intercepts of the regression slopes increased in the same order (from lower to higher levels) in patients but not in controls. Regression coefficients between syntax and both sentence-level and discourse-level semantics did not differentiate patients from controls. This study indicates that the dynamics between language elements are abnormal in schizophrenia. In patients, top-down flow of linguistic information might be reduced, and the relationship between phonology and semantics, but not between syntax and semantics, appears to be altered. Published by Elsevier Ireland Ltd.
Training propositional reasoning.
Klauer, K C; Meiser, T; Naumer, B
2000-08-01
Two experiments compared the effects of four training conditions on propositional reasoning. A syntactic training demonstrated formal derivations, in an abstract semantic training the standard truth-table definitions of logical connectives were explained, and a domain-specific semantic training provided thematic contexts for the premises of the reasoning task. In a control training, an inductive reasoning task was practised. In line with the account by mental models, both kinds of semantic training were significantly more effective than the control and the syntactic training, whereas there were no significant differences between the control and the syntactic training, nor between the two kinds of semantic training. Experiment 2 replicated this pattern of effects using a different set of syntactic and domain-specific training conditions.
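The abstract semantic training rests on the standard truth-table definitions of the logical connectives, which can be written out directly (a routine classical-logic sketch, not code from the study):

```python
from itertools import product

# Standard truth-table definitions of the propositional connectives,
# as explained in the abstract semantic training condition.
connectives = {
    "and":     lambda p, q: p and q,
    "or":      lambda p, q: p or q,
    "implies": lambda p, q: (not p) or q,   # material conditional
    "iff":     lambda p, q: p == q,
}

def truth_table(name):
    """All four rows (p, q, value) of a connective's truth table."""
    f = connectives[name]
    return [(p, q, f(p, q)) for p, q in product([True, False], repeat=2)]

print(truth_table("implies"))
# [(True, True, True), (True, False, False), (False, True, True), (False, False, True)]
```

The single false row of the conditional (true antecedent, false consequent) is exactly the case that semantic training makes explicit for participants.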
Ontology Matching with Semantic Verification.
Jean-Mary, Yves R; Shironoshita, E Patrick; Kabuka, Mansur R
2009-09-01
ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies.
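The iterative interplay of lexical and structural matchers can be sketched as a fixed-point computation. The update rule, weighting, and names below are a heavy simplification of ASMOV, for illustration only:

```python
def iterate_similarity(lexical, parent_of, alpha=0.6, n_iter=5):
    """Iteratively blend lexical similarity with structural similarity
    (here: the similarity of the parent pair), a much-simplified version
    of ASMOV's loop over lexical and structural matchers."""
    sim = dict(lexical)
    for _ in range(n_iter):
        new = {}
        for (a, b), lex in lexical.items():
            parents = (parent_of.get(a), parent_of.get(b))
            # Root concepts (no parents) fall back to their lexical score.
            structural = sim.get(parents, lex)
            new[(a, b)] = alpha * lex + (1 - alpha) * structural
        sim = new
    return sim

lex = {("Thing", "Entity"): 0.9, ("Car", "Auto"): 0.8}
parents = {"Car": "Thing", "Auto": "Entity"}
print(iterate_similarity(lex, parents)[("Car", "Auto")])  # ~0.84
```

A final alignment would then pick, for each concept, the best-scoring counterpart, followed by the semantic verification pass the abstract describes.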
Semantic bifurcated importance field visualization
NASA Astrophysics Data System (ADS)
Lindahl, Eric; Petrov, Plamen
2007-04-01
While there are many good ways to map sensed reality to two-dimensional displays, mapping non-physical and possibilistic information can be challenging. The advent of faster-than-real-time systems allows predictive, possibilistic exploration of the factors that can affect the decision maker. Visualizing a compressed picture of past and possible factors can assist the decision maker by summarizing information in a cognition-based model, thereby reducing clutter and perhaps decision times. Our proposed semantic bifurcated importance field visualization (SBIFV) uses saccadic eye-motion models to partition the display: possibilistic versus sensed data vertically, and spatial versus semantic data horizontally. Saccadic eye movement precedes and prepares decision makers for nearly every directed action. Cognitive models of saccadic eye movement show that people prefer lateral to vertical saccades. Studies have suggested that saccades may be coupled to momentary problem-solving strategies. Also, the central 1.5 degrees of the visual field has roughly 100 times the resolution of the peripheral field, so concentrating factors can reduce unnecessary saccades. By packing information according to saccadic models, we can relate important decision factors, reduce factor dimensionality, and present the dense summary dimensions of semantics and importance. Inter- and intra-saccadic ballistics of the SBIFV provide important clues on how semantic packing assists decision making. Future directions for SBIFV are to make the visualization reactive and conformal to saccades, specializing targets to ballistics, such as dynamically filtering and highlighting verbal targets for left saccades and spatial targets for right saccades.
Enhanced automatic artifact detection based on independent component analysis and Renyi's entropy.
Mammone, Nadia; Morabito, Francesco Carlo
2008-09-01
Artifacts are disturbances that may occur during signal acquisition and affect subsequent processing. The aim of this paper is to propose a technique for automatically detecting artifacts in electroencephalographic (EEG) recordings. In particular, a technique based on Independent Component Analysis (ICA) to extract artifactual signals and on Renyi's entropy to automatically detect them is presented. This technique is compared to the widely known approach based on ICA and the joint use of kurtosis and Shannon's entropy. The novel processing technique is shown to detect on average 92.6% of the artifactual signals, against an average of 68.7% for the previous technique, on the available database studied. Moreover, Renyi's entropy is shown to be able to detect muscle and very-low-frequency activity as well as to discriminate them from other kinds of artifacts. In order to achieve efficient rejection of the artifacts while minimizing information loss, future efforts will be devoted to improving blind artifact separation from EEG so as to ensure very efficient isolation of the artifactual activity from signals deriving from other brain tasks.
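The detection step can be sketched as computing Renyi's entropy for each ICA component's amplitude distribution and flagging outliers. The order-2 entropy and the k-sigma threshold below are illustrative choices, not the paper's exact criterion:

```python
import math

def renyi_entropy(probs, alpha=2.0):
    """Renyi entropy of order alpha (alpha != 1) of a discrete distribution,
    e.g. a normalized amplitude histogram of one ICA component."""
    return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

def flag_components(entropies, k=1.64):
    """Flag components whose entropy deviates from the mean by more than
    k standard deviations (an illustrative automatic threshold rule)."""
    n = len(entropies)
    mean = sum(entropies) / n
    sd = (sum((h - mean) ** 2 for h in entropies) / n) ** 0.5
    return [i for i, h in enumerate(entropies)
            if sd > 0 and abs(h - mean) > k * sd]

print(flag_components([1.0, 1.0, 1.0, 5.0]))  # [3]
```

For a uniform distribution over m outcomes the Renyi entropy equals log m, its maximum; artifact components typically show markedly lower or higher entropy than neural components, which is what the outlier rule exploits.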
Cimpian, Andrei; Cadena, Cristina
2010-10-01
Artifacts pose a potential learning problem for children because the mapping between their features and their functions is often not transparent. In solving this problem, children are likely to rely on a number of information sources (e.g., others' actions, affordances). We argue that children's sensitivity to nuances in the language used to describe artifacts is an important, but so far unacknowledged, piece of this puzzle. Specifically, we hypothesize that children are sensitive to whether an unfamiliar artifact's features are highlighted using generic (e.g., "Dunkels are sticky") or non-generic (e.g., "This dunkel is sticky") language. Across two studies, older-but not younger-preschoolers who heard such features introduced via generic statements inferred that they are a functional part of the artifact's design more often than children who heard the same features introduced via non-generic statements. The ability to pick up on this linguistic cue may expand considerably the amount of conceptual information about artifacts that children derive from conversations with adults. Copyright 2010 Elsevier B.V. All rights reserved.
A post-reconstruction method to correct cupping artifacts in cone beam breast computed tomography
Altunbas, M. C.; Shaw, C. C.; Chen, L.; Lai, C.; Liu, X.; Han, T.; Wang, T.
2007-01-01
In cone beam breast computed tomography (CT), scattered radiation leads to nonuniform biasing of CT numbers known as a cupping artifact. Besides being visual distractions, cupping artifacts appear as background nonuniformities, which impair efficient gray scale windowing and pose a problem in threshold-based volume visualization/segmentation. To overcome this problem, we have developed a background nonuniformity correction method specifically designed for cone beam breast CT. With this technique, the cupping artifact is modeled as an additive background signal profile in the reconstructed breast images. Due to the largely circularly symmetric shape of a typical breast, the additive background signal profile was also assumed to be circularly symmetric. The radial variation of the background signals was estimated by measuring the spatial variation of adipose tissue signals in front-view breast images. To extract adipose tissue signals in an automated manner, a signal sampling scheme in polar coordinates and a background trend fitting algorithm were implemented. The background fits were compared with a target adipose tissue signal value (constant throughout the breast volume) to obtain an additive correction value for each tissue voxel. To test the accuracy, we applied the technique to cone beam CT images of mastectomy specimens. After correction, the images demonstrated significantly improved signal uniformity in both front- and side-view slices. The reduction of both intra-slice and inter-slice variations in adipose tissue CT numbers supported our observations. PMID:17822018
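The correction scheme can be sketched in two steps: estimate a circularly symmetric background profile from adipose-tissue samples, then apply an additive shift per voxel. Radial-bin averaging here stands in for the paper's polar sampling and trend-fitting step; all names are illustrative:

```python
def radial_background(samples, n_bins, r_max):
    """Estimate the circularly symmetric background profile by averaging
    adipose-tissue signals in radial bins.
    samples: (radius, signal) pairs from a front-view breast image."""
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for r, s in samples:
        b = min(int(r / r_max * n_bins), n_bins - 1)
        sums[b] += s
        counts[b] += 1
    return [sums[b] / counts[b] if counts[b] else None for b in range(n_bins)]

def correct_value(value, radius, profile, target, r_max):
    """Additive correction: shift a voxel so the local adipose background
    matches the target adipose signal value."""
    n_bins = len(profile)
    b = min(int(radius / r_max * n_bins), n_bins - 1)
    return value + (target - profile[b])

samples = [(0.1, 100.0), (0.2, 102.0), (0.9, 80.0)]
profile = radial_background(samples, 2, 1.0)
print(profile)  # [101.0, 80.0]
```

A voxel near the edge (radius 0.9) whose local adipose background reads 80.0 against a target of 100.0 would be shifted up by 20.0, flattening the cup.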
Inter-slice Leakage Artifact Reduction Technique for Simultaneous Multi-Slice Acquisitions
Cauley, Stephen F.; Polimeni, Jonathan R.; Bhat, Himanshu; Wang, Dingxin; Wald, Lawrence L.; Setsompop, Kawin
2015-01-01
Purpose Controlled aliasing techniques for simultaneously acquired EPI slices have been shown to significantly increase the temporal efficiency for both diffusion-weighted imaging (DWI) and fMRI studies. The “slice-GRAPPA” (SG) method has been widely used to reconstruct such data. We investigate robust optimization techniques for SG to ensure image reconstruction accuracy through a reduction of leakage artifacts. Methods Split slice-GRAPPA (SP-SG) is proposed as an alternative kernel optimization method. The performance of SP-SG is compared to standard SG using data collected on a spherical phantom and in-vivo on two subjects at 3T. Slice accelerated and non-accelerated data were collected for a spin-echo diffusion weighted acquisition. Signal leakage metrics and time-series SNR were used to quantify the performance of the kernel fitting approaches. Results The SP-SG optimization strategy significantly reduces leakage artifacts for both phantom and in-vivo acquisitions. In addition, a significant boost in time-series SNR for in-vivo diffusion weighted acquisitions with in-plane 2× and slice 3× accelerations was observed with the SP-SG approach. Conclusion By minimizing the influence of leakage artifacts during the training of slice-GRAPPA kernels, we have significantly improved reconstruction accuracy. Our robust kernel fitting strategy should enable better reconstruction accuracy and higher slice-acceleration across many applications. PMID:23963964
Mani, Merry; Jacob, Mathews; Kelley, Douglas; Magnotta, Vincent
2017-01-01
Purpose To introduce a novel method for the recovery of multi-shot diffusion weighted (MS-DW) images from echo-planar imaging (EPI) acquisitions. Methods Current EPI-based MS-DW reconstruction methods rely on the explicit estimation of the motion-induced phase maps to recover artifact-free images. In the new formulation, the k-space data of the artifact-free DWI is recovered using a structured low-rank matrix completion scheme, which does not require explicit estimation of the phase maps. The structured matrix is obtained as the lifting of the multi-shot data. The smooth phase-modulations between shots manifest as null-space vectors of this matrix, which implies that the structured matrix is low-rank. The missing entries of the structured matrix are filled in using a nuclear-norm minimization algorithm subject to the data-consistency. The formulation enables the natural introduction of smoothness regularization, thus enabling implicit motion-compensated recovery of the MS-DW data. Results Our experiments on in-vivo data show effective removal of artifacts arising from inter-shot motion using the proposed method. The method is shown to achieve better reconstruction than the conventional phase-based methods. Conclusion We demonstrate the utility of the proposed method to effectively recover artifact-free images from Cartesian fully/under-sampled and partial Fourier acquired data without the use of explicit phase estimates. PMID:27550212
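A simplified stand-in for the recovery step: iterative low-rank completion with data consistency on the acquired entries. The sketch below uses hard (truncated-SVD) rank-1 projection rather than the paper's nuclear-norm minimization on the lifted structured matrix, so it only illustrates the completion-plus-consistency loop:

```python
import numpy as np

def rank1_complete(observed, mask, n_iter=100):
    """Fill missing entries of a matrix assumed (near-)rank-1: alternate a
    truncated-SVD projection with re-imposing the acquired data. A toy
    analogue of low-rank structured matrix completion, not the paper's
    nuclear-norm algorithm."""
    X = np.where(mask, observed, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[1:] = 0.0                    # keep only the dominant component
        X = (U * s) @ Vt               # low-rank estimate
        X[mask] = observed[mask]       # data consistency on acquired entries
    return X
```

On an exactly rank-1 matrix with one missing entry, the loop converges to the unique rank-1 completion while leaving every acquired entry untouched, which is the essential behavior the MS-DW formulation relies on.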
Enomoto, Yukiko; Yamauchi, Keita; Asano, Takahiko; Otani, Katharina; Iwama, Toru
2018-01-01
Background and purpose C-arm cone-beam computed tomography (CBCT) has the drawback that image quality is degraded by artifacts caused by implanted metal objects. We evaluated whether metal artifact reduction (MAR) prototype software can improve the subjective image quality of CBCT images of patients with intracranial aneurysms treated with coils or clips. Materials and methods Forty-four patients with intracranial aneurysms implanted with coils (40 patients) or clips (four patients) underwent one CBCT scan, from which uncorrected and MAR-corrected CBCT image datasets were reconstructed. Three blinded readers evaluated the image quality of the image sets using a four-point scale (1: Excellent, 2: Good, 3: Poor, 4: Bad). The median scores of the three readers for uncorrected and MAR-corrected images were compared with the paired Wilcoxon signed-rank test, and inter-reader agreement of change scores was assessed by weighted kappa statistics. The readers also recorded new clinical findings, such as intracranial hemorrhage, air, or surrounding anatomical structures, on MAR-corrected images. Results The image quality of MAR-corrected CBCT images was significantly improved compared with the uncorrected CBCT images (p < 0.001). Additional clinical findings were seen on CBCT images of 70.4% of patients after MAR correction. Conclusion MAR software improved the image quality of CBCT images degraded by metal artifacts.
Conditional discriminations, symmetry, and semantic priming.
Vaidya, Manish; Hudgins, Caleb D; Ortu, Daniele
2015-09-01
Psychologists interested in the study of symbolic behavior have found that people are faster at reporting that two words are related to one another than at reporting that two words are not related - an effect called semantic priming. This phenomenon has largely been documented in the context of natural languages using real words as stimuli. The current study asked whether laboratory-generated stimulus-stimulus relations established between arbitrary geometrical shapes would also show the semantic priming effect. Participants learned six conditional relations using a one-to-many training structure (A1-B1, A1-C1, A1-D1, A2-B2, A2-C2, A2-D2) and demonstrated, via accurate performance on tests of derived symmetry, that the trained stimulus functions had become reversible. In a lexical decision task, subjects also demonstrated a priming effect: they displayed faster reaction times to target stimuli when the prime and target came from the same trained or derived conditional relations than when the prime and target came from different trained or derived conditional relations. These data suggest that laboratory-generated equivalence relations may serve as useful analogues of symbolic behavior. However, the fact that conditional relations training and symmetry alone were sufficient to produce the effect suggests that semantic priming-like effects may be the byproduct of simpler stimulus-stimulus relations. Copyright © 2015 Elsevier B.V. All rights reserved.
Green, Adam E; Kraemer, David J M; Fugelsang, Jonathan A; Gray, Jeremy R; Dunbar, Kevin N
2010-01-01
Solving problems often requires seeing new connections between concepts or events that seemed unrelated at first. Innovative solutions of this kind depend on analogical reasoning, a relational reasoning process that involves mapping similarities between concepts. Brain-based evidence has implicated the frontal pole of the brain as important for analogical mapping. Separately, cognitive research has identified semantic distance as a key characteristic of the kind of analogical mapping that can support innovation (i.e., identifying similarities across greater semantic distance reveals connections that support more innovative solutions and models). However, the neural substrates of semantically distant analogical mapping are not well understood. Here, we used functional magnetic resonance imaging (fMRI) to measure brain activity during an analogical reasoning task, in which we parametrically varied the semantic distance between the items in the analogies. Semantic distance was derived quantitatively from latent semantic analysis. Across 23 participants, activity in an a priori region of interest (ROI) in left frontopolar cortex covaried parametrically with increasing semantic distance, even after removing effects of task difficulty. This ROI was centered on a functional peak that we previously associated with analogical mapping. To our knowledge, these data represent a first empirical characterization of how the brain mediates semantically distant analogical mapping.
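Deriving semantic distance quantitatively from latent semantic analysis typically amounts to comparing LSA term vectors; one common convention (the study's exact metric may differ) is 1 minus cosine similarity:

```python
import math

def semantic_distance(u, v):
    """Semantic distance between two LSA-derived term vectors,
    defined here as 1 - cosine similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

# Identical directions -> distance 0; orthogonal directions -> distance 1.
print(semantic_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Computing this distance for each analogy's items gives the continuous parametric regressor that was correlated with frontopolar activity.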
Semantic deficits in Spanish-English bilingual children with language impairment.
Sheng, Li; Peña, Elizabeth D; Bedore, Lisa M; Fiestas, Christine E
2012-02-01
To examine the nature and extent of semantic deficits in bilingual children with language impairment (LI). Thirty-seven Spanish-English bilingual children with LI (ranging from age 7;0 [years;months] to 9;10) and 37 typically developing (TD) age-matched peers generated 3 associations to 12 pairs of translation equivalents in English and Spanish. Responses were coded as paradigmatic (e.g., dinner-lunch, cena-desayuno [dinner-breakfast]), syntagmatic (e.g., delicious-pizza, delicioso-frijoles [delicious-beans]), and errors (e.g., wearing-where, vestirse-mal [to get dressed-bad]). A semantic depth score was derived in each language and conceptually by combining children's performance in both languages. The LI group achieved significantly lower semantic depth scores than the TD group after controlling for group differences in vocabulary size. Children showed higher conceptual scores than single-language scores. Both groups showed decreases in semantic depth scores across multiple elicitations. Analyses of individual performances indicated that semantic deficits (1 SD below the TD mean semantic depth score) were manifested in 65% of the children with LI and in 14% of the TD children. School-age bilingual children with and without LI demonstrated spreading activation of semantic networks. Consistent with the literature on monolingual children with LI, sparsely linked semantic networks characterize a considerable proportion of bilingual children with LI.
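The finding that conceptual scores exceed single-language scores follows from crediting the better performance in either language per concept; the scoring rule below is a simplification for illustration, not the study's exact measure:

```python
def scores(responses_en, responses_es):
    """Single-language and conceptual semantic-depth scoring. Inputs map a
    concept to the number of valid (paradigmatic or syntagmatic)
    associations produced in that language; conceptual scoring takes the
    better performance in either language per concept."""
    en = sum(responses_en.values())
    es = sum(responses_es.values())
    conceptual = sum(max(responses_en.get(c, 0), responses_es.get(c, 0))
                     for c in set(responses_en) | set(responses_es))
    return en, es, conceptual

# A child who knows "dinner" well in English and "dog" only in Spanish:
print(scores({"dinner": 2, "dog": 0}, {"dinner": 1, "dog": 1}))  # (2, 2, 3)
```

Because the conceptual score pools knowledge distributed across the two languages, it is always at least as high as either single-language score.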
The Local Geometry of Multiattribute Tradeoff Preferences
McGeachie, Michael; Doyle, Jon
2011-01-01
Existing representations for multiattribute ceteris paribus preference statements have provided useful treatments and clear semantics for qualitative comparisons, but have not provided similarly clear representations or semantics for comparisons involving quantitative tradeoffs. We use directional derivatives and other concepts from elementary differential geometry to interpret conditional multiattribute ceteris paribus preference comparisons that state bounds on quantitative tradeoff ratios. This semantics extends the familiar economic notion of marginal rate of substitution to multiple continuous or discrete attributes. The same geometric concepts also provide means for interpreting statements about the relative importance of different attributes. PMID:21528018
Semantic Relations for Problem-Oriented Medical Records
Uzuner, Ozlem; Mailoa, Jonathan; Ryan, Russell; Sibanda, Tawanda
2010-01-01
Summary Objective We describe semantic relation (SR) classification on medical discharge summaries. We focus on relations targeted to the creation of problem-oriented records. Thus, we define relations that involve the medical problems of patients. Methods and Materials We represent patients’ medical problems with their diseases and symptoms. We study the relations of patients’ problems with each other and with concepts that are identified as tests and treatments. We present an SR classifier that studies a corpus of patient records one sentence at a time. For all pairs of concepts that appear in a sentence, this SR classifier determines the relations between them. In doing so, the SR classifier takes advantage of surface, lexical, and syntactic features and uses these features as input to a support vector machine. We apply our SR classifier to two sets of medical discharge summaries, one obtained from the Beth Israel-Deaconess Medical Center (BIDMC), Boston, MA and the other from Partners Healthcare, Boston, MA. Results On the BIDMC corpus, our SR classifier achieves micro-averaged F-measures that range from 74% to 95% on the various relation types. On the Partners corpus, the micro-averaged F-measures on the various relation types range from 68% to 91%. Our experiments show that lexical features (in particular, tokens that occur between candidate concepts, which we refer to as inter-concept tokens) are very informative for relation classification in medical discharge summaries. Using only the inter-concept tokens in the corpus, our SR classifier can recognize 84% of the relations in the BIDMC corpus and 72% of the relations in the Partners corpus. Conclusion These results are promising for semantic indexing of medical records. They imply that we can take advantage of lexical patterns in discharge summaries for relation classification at a sentence level. PMID:20646918
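The inter-concept-token feature the authors highlight is easy to make concrete: given token spans for two candidate concepts in a sentence, take the tokens between them. The sentence and spans below are invented examples:

```python
def inter_concept_tokens(tokens, span_a, span_b):
    """Tokens occurring between two candidate concepts in a sentence.
    Spans are (start, end) token indices, end exclusive; order of the
    two spans does not matter."""
    (s1, e1), (s2, e2) = sorted([span_a, span_b])
    return tokens[e1:s2]

sent = "The chest pain was relieved by nitroglycerin".split()
# concept 1: "chest pain" = tokens (1, 3); concept 2: "nitroglycerin" = (6, 7)
print(inter_concept_tokens(sent, (1, 3), (6, 7)))  # ['was', 'relieved', 'by']
```

Tokens like "relieved by" between a problem and a treatment are exactly the lexical patterns that let the SVM classify the relation at the sentence level.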
Kalénine, Solène; Mirman, Daniel; Middleton, Erica L.; Buxbaum, Laurel J.
2012-01-01
The current research aimed at specifying the activation time course of different types of semantic information during object conceptual processing and the effect of context on this time course. We distinguished between thematic and functional knowledge and the specificity of functional similarity. Two experiments were conducted with healthy older adults using eye tracking in a word-to-picture matching task. The time course of gaze fixations was used to assess activation of distractor objects during the identification of manipulable artifact targets (e.g., broom). Distractors were (a) thematically related (e.g., dustpan), (b) related by a specific function (e.g., vacuum cleaner), or (c) related by a general function (e.g., sponge). Growth curve analyses were used to assess competition effects when target words were presented in isolation (Experiment 1) and embedded in contextual sentences of different generality levels (Experiment 2). In the absence of context, there was earlier and shorter lasting activation of thematically related as compared to functionally related objects. The time course difference was more pronounced for general functions than specific functions. When contexts were provided, functional similarities that were congruent with context generality level increased in salience with earlier activation of those objects. Context had little impact on thematic activation time course. These data demonstrate that processing a single manipulable artifact concept implicitly activates thematic and functional knowledge with different time courses and that context speeds activation of context-congruent functional similarity. PMID:22449134
A brain electrical signature of left-lateralized semantic activation from single words.
Koppehele-Gossel, Judith; Schnuerch, Robert; Gibbons, Henning
2016-01-01
Lesion and imaging studies consistently indicate a left-lateralization of semantic language processing in human temporo-parietal cortex. Surprisingly, electrocortical measures, which allow a direct assessment of brain activity and the tracking of cognitive functions with millisecond precision, have not yet been used to capture this hemispheric lateralization, at least with respect to posterior portions of this effect. Using event-related potentials, we employed a simple single-word reading paradigm to compare neural activity during three tasks requiring different degrees of semantic processing. As expected, we were able to derive a simple temporo-parietal left-right asymmetry index peaking around 300 ms into word processing that neatly tracks the degree of semantic activation. The validity of this measure in specifically capturing verbal semantic activation was further supported by a significant relation to verbal intelligence. We thus posit that it represents a promising tool to monitor verbal semantic processing in the brain with little technological effort and in a minimal experimental setup. Copyright © 2016 Elsevier Inc. All rights reserved.
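One common way to define such a left-right asymmetry index is a normalized amplitude difference; the (L - R)/(|L| + |R|) form below is a conventional choice and may differ from the paper's exact definition:

```python
def asymmetry_index(left_amp, right_amp):
    """Normalized left-right difference of temporo-parietal ERP amplitudes
    (e.g. mean amplitude around 300 ms). Positive values indicate a
    left-dominant response; the range is [-1, 1]."""
    return (left_amp - right_amp) / (abs(left_amp) + abs(right_amp))

print(asymmetry_index(3.0, 1.0))  # 0.5
```

An index that grows with task-induced semantic demand, as the abstract reports, would then directly track left-lateralized semantic activation.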
Neural Basis of Semantic and Syntactic Interference in Sentence Comprehension
Glaser, Yi G.; Martin, Randi C.; Van Dyke, Julie A.; Hamilton, A. Cris; Tan, Yingying
2013-01-01
According to the cue-based parsing approach (Lewis, Vasishth, & Van Dyke, 2006), sentence comprehension difficulty derives from interference from material that partially matches syntactic and semantic retrieval cues. In a 2 (low vs. high semantic interference) × 2 (low vs. high syntactic interference) fMRI study, greater activation was observed in left BA 44/45 for high versus low syntactic interference conditions following sentences and in BA 45/47 for high versus low semantic interference following comprehension questions. A conjunction analysis showed BA45 associated with both types of interference, while BA47 was associated with only semantic interference. Greater activation was also observed in the left STG in the high interference conditions. Importantly, the results for the LIFG could not be attributed to greater working memory capacity demands for high interference conditions. The results favor a fractionation of LIFG wherein BA45 is associated with post-retrieval selection and BA47 with controlled retrieval of semantic information. PMID:23933471
Body MR Imaging: Artifacts, k-Space, and Solutions
Seethamraju, Ravi T.; Patel, Pritesh; Hahn, Peter F.; Kirsch, John E.; Guimaraes, Alexander R.
2015-01-01
Body magnetic resonance (MR) imaging is challenging because of the complex interaction of multiple factors, including motion arising from respiration and bowel peristalsis, susceptibility effects secondary to bowel gas, and the need to cover a large field of view. The combination of these factors makes body MR imaging more prone to artifacts, compared with imaging of other anatomic regions. Understanding the basic MR physics underlying artifacts is crucial to recognizing the trade-offs involved in mitigating artifacts and improving image quality. Artifacts can be classified into three main groups: (a) artifacts related to magnetic field imperfections, including the static magnetic field, the radiofrequency (RF) field, and gradient fields; (b) artifacts related to motion; and (c) artifacts arising from methods used to sample the MR signal. Static magnetic field homogeneity is essential for many MR techniques, such as fat saturation and balanced steady-state free precession. Susceptibility effects become more pronounced at higher field strengths and can be ameliorated by using spin-echo sequences when possible, increasing the receiver bandwidth, and aligning the phase-encoding gradient with the strongest susceptibility gradients, among other strategies. Nonuniformities in the RF transmit field, including dielectric effects, can be minimized by applying dielectric pads or imaging at lower field strength. Motion artifacts can be overcome through respiratory synchronization, alternative k-space sampling schemes, and parallel imaging. Aliasing and truncation artifacts derive from limitations in digital sampling of the MR signal and can be rectified by adjusting the sampling parameters. Understanding the causes of artifacts and their possible solutions will enable practitioners of body MR imaging to meet the challenges of novel pulse sequence design, parallel imaging, and increasing field strength. ©RSNA, 2015 PMID:26207581
NASA Astrophysics Data System (ADS)
Jechel, Christopher Alexander
In radiotherapy planning, computed tomography (CT) images are used to quantify the electron density of tissues and provide spatial anatomical information. Treatment planning systems use these data to calculate the expected spatial distribution of absorbed dose in a patient. CT imaging is complicated by the presence of metal implants which cause increased image noise, produce artifacts throughout the image and can exceed the available range of CT number values within the implant, perturbing electron density estimates in the image. Furthermore, current dose calculation algorithms do not accurately model radiation transport at metal-tissue interfaces. Combined, these issues adversely affect the accuracy of dose calculations in the vicinity of metal implants. As the number of patients with orthopedic and dental implants grows, so does the need to deliver safe and effective radiotherapy treatments in the presence of implants. The Medical Physics group at the Cancer Centre of Southeastern Ontario and Queen's University has developed a Cobalt-60 CT system that is relatively insensitive to metal artifacts due to the high energy, nearly monoenergetic Cobalt-60 photon beam. Kilovoltage CT (kVCT) images, including images corrected using a commercial metal artifact reduction tool, were compared to Cobalt-60 CT images throughout the treatment planning process, from initial imaging through to dose calculation. An effective metal artifact reduction algorithm was also implemented for the Cobalt-60 CT system. Electron density maps derived from the same kVCT and Cobalt-60 CT images indicated the impact of image artifacts on estimates of photon attenuation for treatment planning applications. Measurements showed that truncation of CT number data in kVCT images produced significant mischaracterization of the electron density of metals. 
Dose measurements downstream of metal inserts in a water phantom were compared to dose data calculated using CT images from kVCT and Cobalt-60 systems with and without artifact correction. The superior accuracy of electron density data derived from Cobalt-60 images compared to kVCT images produced calculated dose with far better agreement with measured results. These results indicated that dose calculation errors from metal image artifacts are primarily due to misrepresentation of electron density within metals rather than artifacts surrounding the implants.
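The electron-density mischaracterization described above enters planning through the CT-number-to-density calibration. The following is a minimal sketch of such a bilinear lookup; the slopes and breakpoint are illustrative placeholders, not values from this thesis (real systems derive them from a calibration phantom scan).

```python
def hu_to_red(hu, soft_slope=0.001, bone_slope=0.0005, breakpoint=100.0):
    """Bilinear CT-number (HU) to relative-electron-density (RED) conversion.

    Illustrative calibration: air (-1000 HU) maps to 0, water (0 HU) to 1,
    with a shallower slope above the soft-tissue/bone breakpoint.
    """
    if hu <= breakpoint:
        return max(0.0, 1.0 + soft_slope * hu)
    return 1.0 + soft_slope * breakpoint + bone_slope * (hu - breakpoint)
```

When a scanner clamps CT numbers at the top of its range inside a metal implant, this lookup saturates as well, capping the RED assigned to the metal; that truncation is exactly the dose-error mechanism the measurements above point to.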
ERIC Educational Resources Information Center
Cree, George S.; McNorgan, Chris; McRae, Ken
2006-01-01
The authors present data from 2 feature verification experiments designed to determine whether distinctive features have a privileged status in the computation of word meaning. They use an attractor-based connectionist model of semantic memory to derive predictions for the experiments. Contrary to central predictions of the conceptual structure…
The Effects of Semantic Transparency and Base Frequency on the Recognition of English Complex Words
ERIC Educational Resources Information Center
Xu, Joe; Taft, Marcus
2015-01-01
A visual lexical decision task was used to examine the interaction between base frequency (i.e., the cumulative frequencies of morphologically related forms) and semantic transparency for a list of derived words. Linear mixed effects models revealed that high base frequency facilitates the recognition of the complex word (i.e., a "base…
ERIC Educational Resources Information Center
Bauer, Patricia J.; Blue, Shala N.; Xu, Aoxiang; Esposito, Alena G.
2016-01-01
We investigated 7- to 10-year-old children's productive extension of semantic memory through self-generation of new factual knowledge derived through integration of separate yet related facts learned through instruction or through reading. In Experiment 1, an experimenter read the to-be-integrated facts. Children successfully learned and…
Guerra, Ernesto; Knoeferle, Pia
2018-01-01
Existing evidence has shown a processing advantage (or facilitation) when representations derived from a non-linguistic context (spatial proximity depicted by gambling cards moving together) match the semantic content of an ensuing sentence. A match, inspired by conceptual metaphors such as 'similarity is closeness', would, for instance, involve cards moving closer together while the sentence relates similarity between abstract concepts such as war and battle. However, other studies have reported a disadvantage (or interference) for congruence between the semantic content of a sentence and representations of spatial distance derived from this sort of non-linguistic context. In the present article, we investigate the cognitive mechanisms underlying the interaction between representations of spatial distance and sentence processing. In two eye-tracking experiments, we tested the predictions of a mechanism that considers the competition, activation, and decay of visually and linguistically derived representations as key aspects in determining the qualitative pattern and time course of that interaction. Critical trials presented two playing cards, each showing a written abstract noun; the cards turned around, obscuring the nouns, and moved either farther apart or closer together. Participants then read a sentence expressing either semantic similarity or difference between these two nouns. When instructed to attend to the nouns on the cards (Experiment 1), participants' total reading times revealed interference between spatial distance (e.g., closeness) and semantic relations (similarity) as soon as the sentence explicitly conveyed similarity. But when instructed to attend to the cards (Experiment 2), cards approaching (vs. moving apart) elicited first interference (when similarity was implicit) and then facilitation (when similarity was made explicit) during sentence reading.
We discuss these findings in the context of a competition mechanism of interference and facilitation effects.
Yang, Ying; Wang, Jing; Bailer, Cyntia; Cherkassky, Vladimir; Just, Marcel Adam
2017-12-01
This study extended cross-language semantic decoding (based on a concept's fMRI signature) to the decoding of sentences across three different languages (English, Portuguese and Mandarin). A classifier was trained on either the mapping between words and activation patterns in one language or the mappings in two languages (using an equivalent amount of training data), and then tested on its ability to decode the semantic content of a third language. The model trained on two languages was reliably more accurate than a classifier trained on one language for all three pairs of languages. This two-language advantage was selective to abstract concept domains such as social interactions and mental activity. Representational Similarity Analyses (RSA) of the inter-sentence neural similarities resulted in similar clustering of sentences in all the three languages, indicating a shared neural concept space among languages. These findings identify semantic domains that are common across these three languages versus those that are more language or culture-specific. Copyright © 2017 Elsevier Inc. All rights reserved.
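The RSA step described above can be sketched in a few lines. The data here are synthetic stand-ins, not the study's fMRI signatures: two representational similarity matrices are compared by rank-correlating their off-diagonal entries.

```python
import numpy as np
from scipy.stats import spearmanr

def rsa_correlation(rsm_a, rsm_b):
    """Spearman correlation of the upper triangles of two similarity matrices."""
    iu = np.triu_indices_from(rsm_a, k=1)     # off-diagonal pairs only
    rho, _ = spearmanr(rsm_a[iu], rsm_b[iu])
    return rho

rng = np.random.default_rng(0)
patterns = rng.standard_normal((6, 20))       # 6 "sentences" x 20 voxels (toy)
rsm = np.corrcoef(patterns)                   # inter-sentence similarity
print(rsa_correlation(rsm, rsm))              # identical structure -> 1.0
```

A shared concept space across languages, as reported above, would show up as high `rsa_correlation` values between the per-language matrices.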
Gao, Bo-Cai; Liu, Ming
2013-01-01
Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
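The gain-curve idea above can be sketched as follows. All data are synthetic, and a moving-average smoother stands in for the paper's cubic-spline fit: a multiplicative artifact common to every pixel is estimated from the scene-mean spectrum and divided out of each retrieved spectrum.

```python
import numpy as np

def smooth(y, window=15):
    """Simple moving-average stand-in for cubic-spline smoothing."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

wav = np.linspace(0.0, 1.0, 200)                        # wavelength axis (arbitrary units)
artifact = 1.0 + 0.05 * np.sin(2 * np.pi * 20 * wav)    # shared calibration wiggle
true = np.stack([0.2 + 0.3 * wav, 0.5 - 0.1 * wav, 0.3 + 0.0 * wav])
retrieved = true * artifact                             # spectra with residual artifact

mean_spec = retrieved.mean(axis=0)
gain = mean_spec / smooth(mean_spec)                    # estimate of the common artifact
corrected = retrieved / gain                            # one gain applied scene-wide
```

Because the gain is derived once and applied to every pixel, the method is fast, which matches the "common gain curve" design the abstract describes.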
Automating Traceability for Generated Software Artifacts
NASA Technical Reports Server (NTRS)
Richardson, Julian; Green, Jeffrey
2004-01-01
Program synthesis automatically derives programs from specifications of their behavior. One advantage of program synthesis, as opposed to manual coding, is that there is a direct link between the specification and the derived program. This link is, however, not very fine-grained: it can be best characterized as Program is-derived-from Specification. When the generated program needs to be understood or modified, more fine-grained linking is useful. In this paper, we present a novel technique for automatically deriving traceability relations between parts of a specification and parts of the synthesized program. The technique is very lightweight and works, with varying degrees of success, for any process in which one artifact is automatically derived from another. We illustrate the generality of the technique by applying it to two kinds of automatic generation: synthesis of Kalman filter programs from specifications using the AutoFilter program synthesis system, and generation of assembly language programs from C source code using the GCC C compiler. We evaluate the effectiveness of the technique in the latter application.
Flexible and Scalable Data Fusion using Proactive Schemaless Information Services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widener, Patrick
2014-05-01
Exascale data environments are fast approaching, driven by diverse structured and unstructured data such as system and application telemetry streams, open-source information capture, and on-demand simulation output. Storage costs having plummeted, the question is now one of converting vast stores of data to actionable information. Complicating this problem are the low degrees of awareness across domain boundaries about what potentially useful data may exist, and write-once-read-never issues (data generation/collection rates outpacing data analysis and integration rates). Increasingly, technologists and researchers need to correlate previously unrelated data sources and artifacts to produce fused data views for domain-specific purposes. New tools and approaches for creating such views from vast amounts of data are vitally important to maintaining research and operational momentum. We propose to research and develop tools and services to assist in the creation, refinement, discovery and reuse of fused data views over large, diverse collections of heterogeneously structured data. We innovate in the following ways. First, we enable and encourage end-users to introduce customized index methods selected for local benefit rather than for global interaction (flexible multi-indexing). We envision rich combinations of such views on application data: views that span backing stores with different semantics, that introduce analytic methods of indexing, and that define multiple views on individual data items. We specifically decline to build a big fused database of everything providing a centralized index over all data, or to export a rigid schema to all comers as in federated query approaches. Second, we proactively advertise these application-specific views so that they may be programmatically reused and extended (data proactivity).
Through this mechanism, both changes in state (new data in existing view collected) and changes in structure (new or derived view exists) are made known. Lastly, we embrace found data heterogeneity by coupling multi-indexing to backing stores with appropriate semantics (as opposed to a single store or schema).
An Approach to Information Management for AIR7000 with Metadata and Ontologies
2009-10-01
metadata. We then propose an approach based on Semantic Technologies including the Resource Description Framework (RDF) and Upper Ontologies, for the...mandating specific metadata schemas can result in interoperability problems. For example, many standards within the ADO mandate the use of XML for metadata...such problems, we propose an architecture in which different metadata schemes can interoperate. By using RDF (Resource Description Framework) as a
An Italian battery for the assessment of semantic memory disorders.
Catricalà, Eleonora; Della Rosa, Pasquale A; Ginex, Valeria; Mussetti, Zoe; Plebani, Valentina; Cappa, Stefano F
2013-06-01
We report the construction and standardization of a new comprehensive battery of tests for the assessment of semantic memory disorders. The battery is constructed on a common set of 48 stimuli, belonging to both living and non-living categories, rigidly controlled for several confounding variables, and is based on an empirically derived corpus of semantic features. It includes six tasks, in order to assess semantic memory through different modalities of input and output: two naming tasks, one with colored pictures and the other in response to an oral description, a word-picture matching task, a picture sorting task, a free generation of features task and a sentence verification task. Normative data on 106 Italian subjects pooled across homogenous subgroups for age, sex and education are reported. The new battery allows an in-depth investigation of category-specific disorders and of progressive semantic memory deficits at features level, overcoming some of the limitations of existing tests.
[Knowing without remembering: the contribution of developmental amnesia].
Lebrun-Givois, C; Guillery-Girard, B; Thomas-Anterion, C; Laurent, B
2008-05-01
The organization of episodic and semantic memory is currently debated, especially the role of the hippocampus in the functioning of these two systems. Since the theories derived from observation of the famous patient HM, which highlighted the involvement of this structure in both systems, numerous studies have questioned the implication of the hippocampus in the learning of new semantic knowledge. Among these studies are Vargha-Khadem's cases of developmental amnesia. In spite of their clear hippocampal atrophy and a massive impairment of episodic memory, these children were able to acquire new semantic knowledge de novo. In the present paper, we describe a new case of developmental amnesia characteristic of this syndrome. In conclusion, the published data as a whole call into question the implication of the hippocampus in all semantic learning and suggest the existence of a neocortical network, slower and requiring more exposures to semantic stimuli than the hippocampal one, which can compensate for a massive hippocampal impairment.
A Collection of Features for Semantic Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliassi-Rad, T; Fodor, I K; Gallagher, B
2007-05-02
Semantic graphs are commonly used to represent data from one or more data sources. Such graphs extend traditional graphs by imposing types on both nodes and links. This type information defines permissible links among specified nodes and can be represented as a graph commonly referred to as an ontology or schema graph. Figure 1 depicts an ontology graph for data from the National Association of Securities Dealers. Each node type and link type may also have a list of attributes. To capture the increased complexity of semantic graphs, concepts derived for standard graphs have to be extended. This document briefly explains features commonly used to characterize graphs, and their extensions to semantic graphs. This document is divided into two sections. Section 2 contains the feature descriptions for static graphs. Section 3 extends the features for semantic graphs that vary over time.
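One of the simplest extensions the report describes, ordinary degree becoming a per-link-type degree vector, can be sketched directly. The tiny graph below is illustrative only, not the NASD ontology:

```python
from collections import defaultdict

# Typed edges: (source node, destination node, link type).
edges = [
    ("alice", "acme", "works_at"),
    ("alice", "bob", "knows"),
    ("bob", "acme", "works_at"),
    ("alice", "carol", "knows"),
]

def typed_degrees(edges):
    """Per-node degree broken down by link type (undirected count)."""
    deg = defaultdict(lambda: defaultdict(int))
    for src, dst, link_type in edges:
        deg[src][link_type] += 1
        deg[dst][link_type] += 1
    return {node: dict(types) for node, types in deg.items()}

print(typed_degrees(edges)["alice"])   # {'works_at': 1, 'knows': 2}
```

Other standard-graph features (clustering, path lengths) extend the same way: compute them per node type or per link type rather than over the untyped graph.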
Basho, Surina; Palmer, Erica D.; Rubio, Miguel A.; Wulfeck, Beverly; Müller, Ralph-Axel
2007-01-01
Verbal fluency is a widely used neuropsychological paradigm. In fMRI implementations, conventional unpaced (self-paced) versions are suboptimal due to uncontrolled timing of responses, and overt responses carry the risk of motion artifact. We investigated the behavioral and neurofunctional effects of response pacing and overt speech in semantic category-driven word generation. Twelve right-handed adults (8 female) ages 21–37 were scanned in four conditions each: Paced-Overt, Paced-Covert, Unpaced-Overt, and Unpaced-Covert. There was no significant difference in the number of exemplars generated between overt versions of the paced and unpaced conditions. Imaging results for category-driven word generation overall showed left-hemispheric activation in inferior frontal cortex, premotor cortex, cingulate gyrus, thalamus, and basal ganglia. Direct comparison of generation modes revealed significantly greater activation for the paced compared to unpaced conditions in right superior temporal, bilateral middle frontal, and bilateral anterior cingulate cortex, including regions associated with sustained attention, motor planning, and response inhibition. Covert (compared to overt) conditions showed significantly greater effects in right parietal and anterior cingulate, as well as left middle temporal and superior frontal regions. We conclude that paced overt paradigms are useful adaptations of conventional semantic fluency in fMRI, given their superiority with regard to control over and monitoring of behavioral responses. However, response pacing is associated with additional non-linguistic effects related to response inhibition, motor preparation, and sustained attention. PMID:17292926
Reilly, Jamie; Harnish, Stacy; Garcia, Amanda; Hung, Jinyi; Rodriguez, Amy D.; Crosson, Bruce
2014-01-01
Embodied cognition offers an approach to word meaning firmly grounded in action and perception. A strong prediction of embodied cognition is that sensorimotor simulation is a necessary component of lexical-semantic representation. One semantic distinction where motor imagery is likely to play a key role involves the representation of manufactured artifacts. Many questions remain with respect to the scope of embodied cognition. One dominant unresolved issue is the extent to which motor enactment is necessary for representing and generating words with high motor salience. We investigated lesion correlates of manipulable relative to non-manipulable name generation (e.g., name a school supply; name a mountain range) in patients with nonfluent aphasia (N=14). Lesion volumes within motor (BA4) and premotor (BA6) cortices were not predictive of category discrepancies. Lesion symptom mapping linked impairment for manipulable objects to polymodal convergence zones and to projections of the left primary visual cortex specialized for motion perception (MT/V5+). Lesions to motor and premotor cortex were not predictive of manipulability impairment. This lesion correlation is incompatible with an embodied perspective premised on the necessity of motor cortex for the enactment and subsequent production of motor-related words. These findings instead support a graded or ‘soft’ approach to embodied cognition premised on an ancillary role of modality-specific cortical regions in enriching modality-neutral representations. We discuss a dynamic, hybrid approach to the neurobiology of semantic memory integrating both embodied and disembodied components. PMID:24839997
Accounting Artifacts in High-Throughput Toxicity Assays.
Hsieh, Jui-Hua
2016-01-01
Compound activity identification is the primary goal in high-throughput screening (HTS) assays. However, assay artifacts including both systematic (e.g., compound auto-fluorescence) and nonsystematic (e.g., noise) complicate activity interpretation. In addition, other than the traditional potency parameter, half-maximal effect concentration (EC50), additional activity parameters (e.g., point-of-departure, POD) could be derived from HTS data for activity profiling. A data analysis pipeline has been developed to handle the artifacts and to provide compound activity characterization with either binary or continuous metrics. This chapter outlines the steps in the pipeline using Tox21 glucocorticoid receptor (GR) β-lactamase assays, including the formats to identify either agonists or antagonists, as well as the counter-screen assays for identifying artifacts as examples. The steps can be applied to other lower-throughput assays with concentration-response data.
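The two activity parameters named above can be illustrated with a small concentration-response sketch. The data are synthetic, not Tox21 measurements: EC50 comes from a Hill-equation fit, and a simple point-of-departure (POD) is taken as the lowest tested concentration whose response exceeds a noise band.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ec50, n):
    """Hill concentration-response curve."""
    return top / (1.0 + (ec50 / c) ** n)

conc = np.logspace(-3, 2, 8)                    # tested concentrations (uM)
resp = hill(conc, top=100.0, ec50=1.0, n=1.2)   # noiseless example responses

(top_f, ec50_f, n_f), _ = curve_fit(
    hill, conc, resp, p0=[80.0, 0.5, 1.0], bounds=(0, np.inf)
)
noise_band = 10.0                               # e.g. 3 * baseline SD
pod = conc[np.argmax(resp > noise_band)]        # lowest conc over the band
```

With noisy HTS data, the pipeline described in the chapter additionally flags systematic artifacts (e.g., auto-fluorescence) via counter-screen assays before parameters like these are reported.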
Multi-Class Motor Imagery EEG Decoding for Brain-Computer Interfaces
Wang, Deng; Miao, Duoqian; Blohm, Gunnar
2012-01-01
Recent studies show that scalp electroencephalography (EEG) as a non-invasive interface has great potential for brain-computer interfaces (BCIs). However, one factor that has limited practical applications for EEG-based BCI so far is the difficulty to decode brain signals in a reliable and efficient way. This paper proposes a new robust processing framework for decoding of multi-class motor imagery (MI) that is based on five main processing steps. (i) Raw EEG segmentation without the need of visual artifact inspection. (ii) Considering that EEG recordings are often contaminated not just by electrooculography (EOG) but also other types of artifacts, we propose to first implement an automatic artifact correction method that combines regression analysis with independent component analysis for recovering the original source signals. (iii) The significant difference between frequency components based on event-related (de-) synchronization and sample entropy is then used to find non-contiguous discriminating rhythms. After spectral filtering using the discriminating rhythms, a channel selection algorithm is used to select only relevant channels. (iv) Feature vectors are extracted based on the inter-class diversity and time-varying dynamic characteristics of the signals. (v) Finally, a support vector machine is employed for four-class classification. We tested our proposed algorithm on experimental data that was obtained from dataset 2a of BCI competition IV (2008). The overall four-class kappa values (between 0.41 and 0.80) were comparable to other models but without requiring any artifact-contaminated trial removal. The performance showed that multi-class MI tasks can be reliably discriminated using artifact-contaminated EEG recordings from a few channels. This may be a promising avenue for online robust EEG-based BCI applications. PMID:23087607
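The regression half of step (ii) can be sketched on synthetic signals (the ICA stage is omitted here): estimate the ocular propagation coefficient by least squares and subtract the scaled EOG from the contaminated EEG channel.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
brain = rng.standard_normal(n)                  # stand-in cortical signal
eog = np.sin(np.linspace(0, 40 * np.pi, n))     # stand-in ocular signal
eeg = brain + 0.8 * eog                         # contaminated EEG recording

# Least-squares estimate of how strongly the EOG leaks into the EEG channel.
b, *_ = np.linalg.lstsq(eog[:, None], eeg, rcond=None)
cleaned = eeg - b[0] * eog                      # regress the artifact out
```

Pure regression like this removes only what correlates with the reference channel; the paper combines it with independent component analysis precisely to catch artifacts that have no clean reference.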
ERIC Educational Resources Information Center
Amenta, Simona; Marelli, Marco; Crepaldi, Davide
2015-01-01
In this eye-tracking study, we investigated how semantics inform morphological analysis at the early stages of visual word identification in sentence reading. We exploited a feature of several derived Italian words, that is, that they can be read in a "morphologically transparent" way or in a "morphologically opaque" way…
ERIC Educational Resources Information Center
Deutsch, Avital
2016-01-01
In the present study we investigated to what extent the morphological facilitation effect induced by the derivational root morpheme in Hebrew is independent of semantic meaning and grammatical information of the part of speech involved. Using the picture-word interference paradigm with auditorily presented distractors, Experiment 1 compared the…
Johns, Brendan T; Taler, Vanessa; Pisoni, David B; Farlow, Martin R; Hake, Ann Marie; Kareken, David A; Unverzagt, Frederick W; Jones, Michael N
2018-06-01
Mild cognitive impairment (MCI) is characterised by subjective and objective memory impairment in the absence of dementia. MCI is a strong predictor for the development of Alzheimer's disease, and may represent an early stage in the disease course in many cases. A standard task used in the diagnosis of MCI is verbal fluency, where participants produce as many items from a specific category (e.g., animals) as possible. Verbal fluency performance is typically analysed by counting the number of items produced. However, analysis of the semantic path of the items produced can provide valuable additional information. We introduce a cognitive model that uses multiple types of lexical information in conjunction with a standard memory search process. The model used a semantic representation derived from a standard semantic space model in conjunction with a memory searching mechanism derived from the Luce choice rule (Luce, 1977). The model was able to detect differences in the memory searching process of patients who were developing MCI, suggesting that the formal analysis of verbal fluency data is a promising avenue to examine the underlying changes occurring in the development of cognitive impairment. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
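The memory-search step built on the Luce choice rule can be sketched as follows. The similarity matrix here is a random toy stand-in, not a semantic-space model: at each step the next item is chosen with probability proportional to its similarity to the item just produced, among items not yet retrieved.

```python
import numpy as np

def fluency_walk(similarity, start, n_items, rng):
    """Generate a fluency sequence via the Luce choice rule, without repeats."""
    out = [start]
    while len(out) < n_items:
        cur = out[-1]
        avail = [i for i in range(similarity.shape[0]) if i not in out]
        weights = similarity[cur, avail]
        probs = weights / weights.sum()          # Luce choice rule
        out.append(int(rng.choice(avail, p=probs)))
    return out

rng = np.random.default_rng(2)
sim = rng.uniform(0.1, 1.0, size=(6, 6))         # toy item-item similarities
path = fluency_walk(sim, start=0, n_items=6, rng=rng)
```

Analyzing the semantic path (e.g., how far each step jumps in similarity space) is what lets the model above detect search differences in MCI beyond the raw item count.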
Gransier, Robin; Deprez, Hanne; Hofmann, Michael; Moonen, Marc; van Wieringen, Astrid; Wouters, Jan
2016-05-01
Previous studies have shown that objective measures based on stimulation with low-rate pulse trains fail to predict the threshold levels of cochlear implant (CI) users for high-rate pulse trains, as used in clinical devices. Electrically evoked auditory steady-state responses (EASSRs) can be elicited by modulated high-rate pulse trains, and can potentially be used to objectively determine threshold levels of CI users. The responsiveness of the auditory pathway of profoundly hearing-impaired CI users to modulation frequencies is, however, not known. In the present study we investigated the responsiveness of the auditory pathway of CI users to a monopolar 500 pulses per second (pps) pulse train modulated between 1 and 100 Hz. EASSRs to forty-three modulation frequencies, elicited at the subject's maximum comfort level, were recorded by means of electroencephalography. Stimulation artifacts were removed by a linear interpolation between a pre- and post-stimulus sample (i.e., blanking). The phase delay across modulation frequencies was used to differentiate between the neural response and a possible residual stimulation artifact after blanking. Stimulation artifacts were longer than the inter-pulse interval of the 500 pps pulse train for recording electrodes ipsilateral to the CI. As a result, the stimulation artifacts could not be removed by linear interpolation for recording electrodes ipsilateral to the CI. However, artifact-free responses could be obtained in all subjects from recording electrodes contralateral to the CI, when subject-specific reference electrodes (Cz or Fpz) were used. EASSRs to modulation frequencies within the 30-50 Hz range resulted in significant responses in all subjects. Only a small number of significant responses originating from the brain stem (i.e., modulation frequencies in the 80-100 Hz range) could be obtained during a measurement period of 5 min.
This reduced synchronized activity of brain stem responses in long-term severely-hearing impaired CI users could be an attribute of processes associated with long-term hearing impairment and/or electrical stimulation. Copyright © 2016 Elsevier B.V. All rights reserved.
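The blanking scheme described above can be sketched on a synthetic trace: each stimulation artifact is replaced by a straight line drawn between one pre-stimulus and one post-stimulus sample.

```python
import numpy as np

def blank(signal, artifact_starts, width):
    """Replace each artifact with a linear interpolation across it."""
    out = signal.copy()
    for s in artifact_starts:
        pre, post = s - 1, s + width           # clean samples on either side
        ramp = np.linspace(out[pre], out[post], post - pre + 1)
        out[pre:post + 1] = ramp
    return out

trace = np.zeros(50)
trace[10:13] = 100.0                           # 3-sample stimulation artifact
cleaned = blank(trace, artifact_starts=[10], width=3)
print(cleaned[10:13])                          # [0. 0. 0.]
```

The limitation reported above follows directly from this scheme: when artifacts last longer than the inter-pulse interval, no clean pre/post samples exist between pulses, so interpolation has nothing valid to anchor to.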
Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.
Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi
2017-05-28
In this paper, a spatio-spectral-temporal filter considering an inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domain, considering the spatio-tempo-spectral correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed by considering both the intra-channel correlation and inter-channel correlation to overcome the spatial resolution degradation occurring with the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove the noise in the temporal domain. Then, a motion adaptive detection value controls the ratio of the spatial filter and the temporal filter. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. Experimental results confirmed that the proposed frameworks outperformed the other techniques in terms of the objective criteria and subjective visual perception in CFA sequences.
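The nonlocal-means weighting at the core of the spatial stage can be sketched in one dimension for a single channel; the paper's CFA-aware PBD refinement, inter-channel terms, and temporal filter are omitted here, so this is only the basic NLM idea.

```python
import numpy as np

def nlm_1d(x, patch=3, h=0.5):
    """Each sample becomes a weighted mean of samples with similar patches."""
    half = patch // 2
    pad = np.pad(x, half, mode="reflect")
    patches = np.stack([pad[i:i + patch] for i in range(len(x))])
    out = np.empty_like(x)
    for i in range(len(x)):
        d2 = np.mean((patches - patches[i]) ** 2, axis=1)  # patch distances
        w = np.exp(-d2 / h ** 2)                           # similar -> weight ~1
        out[i] = np.dot(w, x) / w.sum()
    return out

rng = np.random.default_rng(3)
clean = np.ones(64)
noisy = clean + 0.2 * rng.standard_normal(64)
denoised = nlm_1d(noisy)
```

On a CFA sequence the patch comparison must respect the alternating color grid, which is why the proposed filter adds the inter-channel correlation terms rather than applying plain NLM per channel.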
Epistemologies in the Text of Children's Books: Native- and non-Native-authored books
NASA Astrophysics Data System (ADS)
Dehghani, Morteza; Bang, Megan; Medin, Douglas; Marin, Ananda; Leddon, Erin; Waxman, Sandra
2013-09-01
An examination of artifacts provides insights into the goals, practices, and orientations of the persons and cultures who created them. Here, we analyze storybook texts, artifacts that are a part of many children's lives. We examine the stories in books targeted for 4-8-year-old children, contrasting the texts generated by Native American authors versus popular non-Native authors. We focus specifically on the implicit and explicit 'epistemological orientations' associated with relations between human beings and the rest of nature. Native authors were significantly more likely than non-Native authors to describe humans and the rest of nature as psychologically close and embedded in relationships. This pattern converges well with evidence from a behavioral task in which we probed Native (from urban inter-tribal and rural communities) and non-Native children's and adults' attention to ecological relations. We discuss the implications of these differences for environmental cognition and science learning.
Jared, Debra; Jouravlev, Olessia; Joanisse, Marc F
2017-03-01
Decomposition theories of morphological processing in visual word recognition posit an early morpho-orthographic parser that is blind to semantic information, whereas parallel distributed processing (PDP) theories assume that the transparency of orthographic-semantic relationships influences processing from the beginning. To test these alternatives, the performance of participants on transparent (foolish), quasi-transparent (bookish), opaque (vanish), and orthographic control words (bucket) was examined in a series of 5 experiments. In Experiments 1-3 variants of a masked priming lexical-decision task were used; Experiment 4 used a masked priming semantic decision task, and Experiment 5 used a single-word (nonpriming) semantic decision task with a color-boundary manipulation. In addition to the behavioral data, event-related potential (ERP) data were collected in Experiments 1, 2, 4, and 5. Across all experiments, we observed a graded effect of semantic transparency in behavioral and ERP data, with the largest effect for semantically transparent words, the next largest for quasi-transparent words, and the smallest for opaque words. The results are discussed in terms of decomposition versus PDP approaches to morphological processing. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Statistics and dynamics of attractor networks with inter-correlated patterns
NASA Astrophysics Data System (ADS)
Kropff, E.
2007-02-01
In an embodied feature representation view, semantic memory represents concepts in the brain through the associated activation of the features that describe them, each feature processed in a differentiated region of the cortex. This system has been modeled with a Potts attractor network. Several studies of feature representation show that the correlation between patterns plays a crucial role in semantic memory. The present work focuses on two aspects of the effect of correlations in attractor networks. First, it assesses how a Potts network can store a set of patterns with non-trivial correlations between them. This is done through a simple and biologically plausible modification to the classical learning rule. Second, it studies the complexity of latching transitions between attractor states, and how this complexity can be controlled.
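Attractor storage and recall can be illustrated on a classical binary (+/-1) network, a much-simplified stand-in for the Potts network: the paper's modified learning rule for correlated patterns is not reproduced, and the function names here are illustrative.

```python
import numpy as np

def hebbian_weights(patterns):
    """Classical Hebbian storage for a binary (+/-1) attractor network.

    With strongly inter-correlated patterns this rule degrades; the
    modification discussed in the paper (for the Potts case) amounts,
    roughly, to correcting for shared mean activity, which is not
    implemented in this minimal sketch.
    """
    X = np.asarray(patterns, dtype=float)
    N = X.shape[1]
    W = X.T @ X / N          # sum of outer products, normalized
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, probe, steps=5):
    """Synchronous sign-update dynamics from a (possibly corrupted) probe."""
    s = np.asarray(probe, dtype=float).copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s
```

Starting from a probe with a flipped bit, the dynamics relax to the stored pattern, which is the attractor behavior the paper analyzes for correlated pattern sets.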
Weakly supervised image semantic segmentation based on clustering superpixels
NASA Astrophysics Data System (ADS)
Yan, Xiong; Liu, Xiaohua
2018-04-01
In this paper, we propose an image semantic segmentation model that is trained from image-level labeled images. The proposed model starts with superpixel segmentation, and features of the superpixels are extracted by a trained CNN. We introduce a superpixel-based graph and apply a graph partition method to group correlated superpixels into clusters. To acquire inter-label correlations between the image-level labels in the dataset, we utilize both label co-occurrence statistics and visual contextual cues. Finally, we formulate the task of mapping appropriate image-level labels to the detected clusters as a convex minimization problem. Experimental results on the MSRC-21 and LabelMe datasets show that the proposed method outperforms most weakly supervised methods and is even comparable to fully supervised methods.
Bakken, Suzanne; Cimino, James J.; Haskell, Robert; Kukafka, Rita; Matsumoto, Cindi; Chan, Garrett K.; Huff, Stanley M.
2000-01-01
Objective: The purpose of this study was to test the adequacy of the Clinical LOINC (Logical Observation Identifiers, Names, and Codes) semantic structure as a terminology model for standardized assessment measures. Methods: After extension of the definitions, 1,096 items from 35 standardized assessment instruments were dissected into the elements of the Clinical LOINC semantic structure. An additional coder dissected at least one randomly selected item from each instrument. When multiple scale types occurred in a single instrument, a second coder dissected one randomly selected item representative of each scale type. Results: The results support the adequacy of the Clinical LOINC semantic structure as a terminology model for standardized assessments. Using the revised definitions, the coders were able to dissect into the elements of Clinical LOINC all the standardized assessment items in the sample instruments. Percentage agreement for each element was as follows: component, 100 percent; property, 87.8 percent; timing, 82.9 percent; system/sample, 100 percent; scale, 92.6 percent; and method, 97.6 percent. Discussion: This evaluation was an initial step toward the representation of standardized assessment items in a manner that facilitates data sharing and re-use. Further clarification of the definitions, especially those related to time and property, is required to improve inter-rater reliability and to harmonize the representations with similar items already in LOINC. PMID:11062226
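The per-element agreement figures reported above can be computed with a simple percentage-agreement function. This is a generic sketch: the function name and the example element values are illustrative, not drawn from LOINC.

```python
def percent_agreement(coder_a, coder_b):
    """Percentage agreement: share of items on which two coders
    assigned the same value for a given semantic-structure element."""
    assert len(coder_a) == len(coder_b), "coders must rate the same items"
    same = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * same / len(coder_a)
```

Percentage agreement is the simplest inter-rater statistic; chance-corrected measures (e.g., kappa) are often reported alongside it.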
NASA Astrophysics Data System (ADS)
Banda, Gourinath; Gallagher, John P.
Abstract interpretation provides a practical approach to verifying properties of infinite-state systems. We apply the framework of abstract interpretation to derive an abstract semantic function for the modal μ-calculus, which is the basis for abstract model checking. The abstract semantic function is constructed directly from the standard concrete semantics together with a Galois connection between the concrete state-space and an abstract domain. There is no need for mixed or modal transition systems to abstract arbitrary temporal properties, as in previous work in the area of abstract model checking. Using the modal μ-calculus to implement CTL, the abstract semantics gives an over-approximation of the set of states in which an arbitrary CTL formula holds. Then we show that this leads directly to an effective implementation of an abstract model checking algorithm for CTL using abstract domains based on linear constraints. The implementation of the abstract semantic function makes use of an SMT solver. We describe an implemented system for proving properties of linear hybrid automata and give some experimental results.
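As a concrete illustration of the fixpoint semantics that such an abstract domain over-approximates, here is a minimal least-fixpoint computation of the CTL property EF(goal) — μZ. goal ∪ pre(Z) — over an explicit finite transition system. The names are illustrative; the paper's abstract domains, Galois connections, and SMT machinery are not reproduced.

```python
def ef(states, trans, goal):
    """Least fixpoint of Z = goal ∪ pre(Z): the set of states from
    which some path eventually reaches `goal` (CTL's EF).

    `trans` maps each state to its successor states; `pre(Z)` is the
    set of states with at least one successor in Z.
    """
    Z = set(goal)
    while True:
        pre = {s for s in states if any(t in Z for t in trans.get(s, ()))}
        nxt = Z | pre
        if nxt == Z:          # fixpoint reached
            return Z
        Z = nxt
```

On finite systems this iteration terminates exactly; on infinite-state systems, the abstract semantic function plays the analogous role over the abstract domain.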
The evaluation of sources of knowledge underlying different conceptual categories.
Gainotti, Guido; Spinelli, Pietro; Scaricamazza, Eugenia; Marra, Camillo
2013-01-01
According to the "embodied cognition" theory and the "sensory-motor model of semantic knowledge": (a) concepts are represented in the brain in the same format in which they are constructed by the sensory-motor system and (b) various conceptual categories differ according to the weight of different kinds of information in their representation. In this study, we tested the second assumption by asking normal elderly subjects to subjectively evaluate the role of various perceptual, motor and language-mediated sources of knowledge in the construction of different semantic categories. Our first aim was to rate the influence of different sources of knowledge in the representation of animals, plant life and artifact categories, rather than in the broader living and non-living domains, as many previous studies on this subject have done. We also examined the influence of age and stimulus modality on these evaluations of the "sources of knowledge" underlying different conceptual categories. The influence of age was assessed by comparing results obtained in our group of elderly subjects with those obtained in a previous study, conducted with a similar methodology on a sample of young students. The influence of stimulus modality was assessed by presenting the stimuli in the verbal modality to 50 subjects and in the pictorial modality to 50 other subjects. The distinction between "animals" and "plant life" in the "living" categories was confirmed by analyzing their prevalent sources of knowledge and by a cluster analysis, which allowed us to distinguish "plant life" items from animals. Furthermore, results of the study showed: (a) that our subjects considered the visual modality as the main source of knowledge for all categories taken into account; and (b) that in biological categories the next most important source of information was represented by other perceptual modalities, whereas in artifacts it was represented by the actions performed with them.
Finally, age and stimulus modality did not significantly influence judgment of relevance of the sources of knowledge involved in the construction of different conceptual categories.
A Filter-Mediated Communication Model for Design Collaboration in Building Construction
Oh, Minho
2014-01-01
Multidisciplinary collaboration is an important aspect of modern engineering activities, arising from the growing complexity of artifacts whose design and construction require knowledge and skills that exceed the capacities of any one professional. However, current collaboration in the architecture, engineering, and construction industries often fails due to a lack of shared understanding between different participants and limitations of their supporting tools. To achieve a high level of shared understanding, this study proposes a filter-mediated communication model. In the proposed model, participants retain their own data in the form most appropriate for their needs with domain-specific filters that transform the neutral representations into semantically rich ones, as needed by the participants. Conversely, the filters can translate semantically rich, domain-specific data into a neutral representation that can be accessed by other domain-specific filters. To validate the feasibility of the proposed model, we computationally implement the filter mechanism and apply it to a hypothetical test case. The results confirm that the filter mechanism can let participants know ahead of time what the implications of their proposed actions will be, as seen from other participants' points of view. PMID:25309958
Sadeghi, Zahra; McClelland, James L; Hoffman, Paul
2015-09-01
An influential position in lexical semantics holds that semantic representations for words can be derived through analysis of patterns of lexical co-occurrence in large language corpora. Firth (1957) famously summarised this principle as "you shall know a word by the company it keeps". We explored whether the same principle could be applied to non-verbal patterns of object co-occurrence in natural scenes. We performed latent semantic analysis (LSA) on a set of photographed scenes in which all of the objects present had been manually labelled. This resulted in a representation of objects in a high-dimensional space in which similarity between two objects indicated the degree to which they appeared in similar scenes. These representations revealed similarities among objects belonging to the same taxonomic category (e.g., items of clothing) as well as cross-category associations (e.g., between fruits and kitchen utensils). We also compared representations generated from this scene dataset with two established methods for elucidating semantic representations: (a) a published database of semantic features generated verbally by participants and (b) LSA applied to a linguistic corpus in the usual fashion. Statistical comparisons of the three methods indicated significant association between the structures revealed by each method, with the scene dataset displaying greater convergence with feature-based representations than did LSA applied to linguistic data. The results indicate that information about the conceptual significance of objects can be extracted from their patterns of co-occurrence in natural environments, opening the possibility for such data to be incorporated into existing models of conceptual representation. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
PET attenuation correction for flexible MRI surface coils in hybrid PET/MRI using a 3D depth camera
NASA Astrophysics Data System (ADS)
Frohwein, Lynn J.; Heß, Mirco; Schlicher, Dominik; Bolwin, Konstantin; Büther, Florian; Jiang, Xiaoyi; Schäfers, Klaus P.
2018-01-01
PET attenuation correction for flexible MRI radio frequency surface coils in hybrid PET/MRI is still a challenging task, as the position and shape of these coils vary considerably between patients. The purpose of this feasibility study is to develop a novel method for the incorporation of attenuation information about flexible surface coils in PET reconstruction using the Microsoft Kinect V2 depth camera. The depth information is used to determine a dense point cloud of the coil's surface representing the shape of the coil. From a CT template (acquired once in advance), surface information of the coil is likewise extracted and converted into a point cloud. The two point clouds are then registered using a combination of an iterative-closest-point (ICP) method and a partially rigid registration step. Using the transformation derived from the point clouds, the CT template is warped and thereby adapted to the PET/MRI scan setup. The transformed CT template is then converted from Hounsfield units into linear attenuation coefficients to yield an attenuation map. The resulting fitted attenuation map is then integrated into the MRI-based patient-specific DIXON-based attenuation map of the actual PET/MRI scan. A reconstruction of phantom PET data acquired with the coil present in the field-of-view (FoV), but without the corresponding coil attenuation map, shows large artifacts in regions close to the coil. The overall count loss is determined to be around 13% compared to a PET scan without the coil present in the FoV. A reconstruction using the new μ-map resulted in strongly reduced artifacts as well as increased overall PET intensities with a remaining relative difference of about 1% to a PET scan without the coil in the FoV.
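The ICP stage of the registration pipeline can be sketched as follows. This is a minimal sketch under simplifying assumptions: brute-force nearest-neighbour matching, a Kabsch rigid solve, and no partially rigid refinement step; all function names are invented for the sketch.

```python
import numpy as np

def best_rigid(A, B):
    """Least-squares rotation R and translation t with R @ A[i] + t ≈ B[i]
    (Kabsch algorithm on paired point sets)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=30):
    """Minimal iterative-closest-point loop with brute-force matching."""
    cur = src.copy()
    R_tot, t_tot = np.eye(src.shape[1]), np.zeros(src.shape[1])
    for _ in range(iters):
        # match each current point to its nearest destination point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
        # compose with the accumulated transform
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

In practice, point clouds from a depth camera are noisy and only partially overlapping, which is why the paper follows ICP with a partially rigid registration step.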
ERIC Educational Resources Information Center
Oppenheim, Gary M.; Dell, Gary S.; Schwartz, Myrna F.
2010-01-01
Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have…
Deep learning methods for CT image-domain metal artifact reduction
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Shan, Hongming; Claus, Bernhard; Jin, Yannan; De Man, Bruno; Wang, Ge
2017-09-01
Artifacts resulting from metal objects have been a persistent problem in CT images over the last four decades. A common approach to overcome their effects is to replace corrupt projection data with values synthesized from an interpolation scheme or by reprojection of a prior image. State-of-the-art correction methods, such as the interpolation- and normalization-based algorithm NMAR, often do not produce clinically satisfactory results. Residual image artifacts remain in challenging cases and even new artifacts can be introduced by the interpolation scheme. Metal artifacts continue to be a major impediment, particularly in radiation and proton therapy planning as well as orthopedic imaging. A new solution to the long-standing metal artifact reduction (MAR) problem is deep learning, which has been successfully applied to medical image processing and analysis tasks. In this study, we combine a convolutional neural network (CNN) with the state-of-the-art NMAR algorithm to reduce metal streaks in critical image regions. Training data was synthesized from CT simulation scans of a phantom derived from real patient images. The CNN is able to map metal-corrupted images to artifact-free monoenergetic images to achieve additional correction on top of NMAR for improved image quality. Our results indicate that deep learning is a novel tool to address CT reconstruction challenges, and may enable more accurate tumor volume estimation for radiation therapy planning.
The role of action representations in thematic object relations
Tsagkaridis, Konstantinos; Watson, Christine E.; Jax, Steven A.; Buxbaum, Laurel J.
2014-01-01
A number of studies have explored the role of associative/event-based (thematic) and categorical (taxonomic) relations in the organization of object representations. Recent evidence suggests that thematic information may be particularly important in determining relationships between manipulable artifacts. However, although sensorimotor information is on many accounts an important component of manipulable artifact representations, little is known about the role that action may play during the processing of semantic relationships (particularly thematic relationships) between multiple objects. In this study, we assessed healthy and left hemisphere stroke participants to explore three questions relevant to object relationship processing. First, we assessed whether participants tended to favor thematic relations including action (Th+A, e.g., wine bottle—corkscrew), thematic relationships without action (Th-A, e.g., wine bottle—cheese), or taxonomic relationships (Tax, e.g., wine bottle—water bottle) when choosing between them in an association judgment task with manipulable artifacts. Second, we assessed whether the underlying constructs of event relatedness, action relatedness, and categorical relatedness determined the choices that participants made. Third, we assessed the hypothesis that degraded action knowledge and/or damage to temporo-parietal cortex, a region of the brain associated with the representation of action knowledge, would reduce the influence of action on the choice task. Experiment 1 showed that explicit ratings of event, action, and categorical relatedness were differentially predictive of healthy participants' choices, with action relatedness determining choices between Th+A and Th-A associations above and beyond event and categorical ratings. Experiment 2 focused more specifically on these Th+A vs. 
Th-A choices and demonstrated that participants with left temporo-parietal lesions, a brain region known to be involved in sensorimotor processing, were less likely than controls and tended to be less likely than patients with lesions sparing that region to use action relatedness in determining their choices. These data indicate that action knowledge plays a critical role in processing of thematic relations for manipulable artifacts. PMID:24672461
Forming maps of targets having multiple reflectors with a biomimetic audible sonar.
Kuc, Roman
2018-05-01
A biomimetic audible sonar mimics human echolocation by emitting clicks and sensing echoes binaurally to investigate the limitations in acoustic mapping of 2.5 dimensional targets. A monaural sonar that provides only echo time-of-flight values produces biased maps that lie outside the target surfaces. Reflector bearing estimates derived from the first echoes detected by a binaural sonar are employed to form unbiased maps. Multiple echoes from a target introduce phantom-reflector (PR) artifacts into its map because later echoes are produced by reflectors at bearings different from those determined from the first echoes. In addition, overlapping echoes interfere to produce bearing errors. Addressing the causes of these bearing errors motivates a processing approach that employs template matching to extract valid echoes. Interfering echoes can mimic a valid echo and also form PR artifacts. These artifacts are eliminated by recognizing the bearing fluctuations that characterize echo interference. Removing PR artifacts produces a map that resembles the physical target shape to within the resolution capabilities of the sonar. The remaining differences between the target shape and the final map are void artifacts caused by invalid or missing echoes.
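The template-matching idea for extracting valid echoes can be sketched with normalized cross-correlation: slide a copy of the emitted click over the received signal and flag offsets where the correlation is high. A generic sketch, not the paper's detector; `match_echoes` and its threshold are illustrative.

```python
import numpy as np

def match_echoes(signal, template, threshold=0.8):
    """Return sample offsets where the normalized cross-correlation
    between the signal segment and the click template exceeds
    `threshold` (candidate valid echoes)."""
    n = len(template)
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    hits = []
    for i in range(len(signal) - n + 1):
        seg = signal[i:i + n] - signal[i:i + n].mean()
        nrm = np.linalg.norm(seg)
        if nrm > 0 and float(seg @ t) / nrm > threshold:
            hits.append(i)
    return hits
```

Normalization makes the detector insensitive to echo amplitude, so weak later echoes score as highly as strong first echoes when their shape matches the template.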
Integrated SSFP for functional brain mapping at 7 T with reduced susceptibility artifact
NASA Astrophysics Data System (ADS)
Sun, Kaibao; Xue, Rong; Zhang, Peng; Zuo, Zhentao; Chen, Zhongwei; Wang, Bo; Martin, Thomas; Wang, Yi; Chen, Lin; He, Sheng; Wang, Danny J. J.
2017-03-01
Balanced steady-state free precession (bSSFP) offers an alternative and potentially important tool to the standard gradient-echo echo-planar imaging (GE-EPI) for functional MRI (fMRI). Both passband and transition band based bSSFP have been proposed for fMRI. The applications of these methods, however, are limited by banding artifacts due to the sensitivity of bSSFP signal to off-resonance effects. In this article, a unique case of the SSFP-FID sequence, termed integrated-SSFP or iSSFP, was proposed to overcome the obstacle by compressing the SSFP profile into the width of a single voxel. The magnitude of the iSSFP signal was kept constant irrespective of frequency shift. Visual stimulation studies were performed to demonstrate the feasibility of fMRI using iSSFP at 7 T with flip angles of 4° and 25°, compared to standard bSSFP and gradient echo (GRE) imaging. The signal changes for the complex iSSFP signal in activated voxels were 2.48 ± 0.53% and 2.96 ± 0.87% for flip angles (FA) of 4° and 25° respectively at the TR of 9.88 ms. Simultaneous multi-slice acquisition (SMS) with the CAIPIRINHA technique was carried out with iSSFP scanning to detect the anterior temporal lobe activation using a semantic processing task fMRI, compared with standard 2D GE-EPI. This study demonstrates the feasibility of iSSFP for fMRI with reduced susceptibility artifacts, while maintaining robust functional contrast at 7 T.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delgoshaei, Parastoo; Austin, Mark A.; Pertzborn, Amanda J.
State-of-the-art building simulation control methods incorporate physical constraints into their mathematical models, but omit implicit constraints associated with policies of operation and dependency relationships among rules representing those constraints. To overcome these shortcomings, there is a recent trend in enabling the control strategies with inference-based rule checking capabilities. One solution is to exploit semantic web technologies in building simulation control. Such approaches provide the tools for semantic modeling of domains, and the ability to deduce new information based on the models through use of Description Logic (DL). In a step toward enabling this capability, this paper presents a cross-disciplinary data-driven control strategy for building energy management simulation that integrates semantic modeling and formal rule checking mechanisms into a Model Predictive Control (MPC) formulation. The results show that MPC provides superior levels of performance when initial conditions and inputs are derived from inference-based rules.
Assessing semantic similarity of texts - Methods and algorithms
NASA Astrophysics Data System (ADS)
Rozeva, Anna; Zerkova, Silvia
2017-12-01
Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis that applies text-mining techniques. Text mining involves several pre-processing steps, which yield a structured, representative model of the documents in a corpus by extracting and selecting the features that characterize their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at the syntactic and semantic level. An important text-mining method and similarity measure is latent semantic analysis (LSA). It reduces the dimensionality of the document vector space and better captures the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space, as well as the similarity calculation, are presented.
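The LSA pipeline described above — term-document matrix, truncated SVD, cosine similarity in the reduced space — can be sketched on a toy corpus (the documents and variable names here are invented for illustration):

```python
import numpy as np

docs = [
    "cats chase mice",
    "dogs chase cats",
    "stocks rise on markets",
    "markets fall as stocks drop",
]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# term-document count matrix
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        A[idx[w], j] += 1

# truncated SVD: keep only k latent concepts
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]

def sim(w1, w2):
    """Cosine similarity of two words in the reduced latent space."""
    a, b = word_vecs[idx[w1]], word_vecs[idx[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Even on this tiny corpus, words that co-occur in similar documents end up closer in the latent space than words from unrelated documents, which is the co-occurrence principle the article examines.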
Yang, Ping; Dumont, Guy A; Ansermino, J Mark
2009-04-01
Intraoperative heart rate is routinely measured independently from the ECG monitor, pulse oximeter, and the invasive blood pressure monitor if available. The presence of artifacts, in one or more of these signals, especially sustained artifacts, represents a critical challenge for physiological monitoring. When temporal filters are used to suppress sustained artifacts, unwanted delays or signal distortion are often introduced. The aim of this study was to remove artifacts and derive accurate estimates for the heart rate signal by using measurement redundancy. Heart rate measurements from multiple sensors and previous estimates that fall in a short moving window were treated as samples of the same heart rate. A hybrid median filter was used to align these samples into one ordinal series and to select the median as the fused estimate. This method can successfully remove artifacts that are sustained for shorter than half the length of the filter window, or artifacts that are sustained for a longer duration but presented in no more than half of the sensors. The method was tested on both simulated and clinical cases. The performance of the hybrid median filter in the simulated study was compared with that of a two-step estimation process, comprising a threshold-controlled artifact-removal module and a Kalman filter. The estimation accuracy of the hybrid median filter is better than that of the Kalman filter in the presence of artifacts. The hybrid median filter combines the structural and temporal information from two or more sensors and generates a robust estimate of heart rate without requiring strict assumptions about the signal's characteristics. This method is intuitive, computationally simple, and the performance can be easily adjusted. These considerable benefits make this method highly suitable for clinical use.
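The core fusion idea — pool samples from all sensors with recent estimates into one ordered series and take the median — can be sketched in a few lines (a simplified sketch; the window handling and tuning of the actual filter are not reproduced, and `fuse_heart_rate` is an invented name):

```python
import numpy as np

def fuse_heart_rate(sensor_samples, prev_estimates):
    """Hybrid-median fusion sketch.

    `sensor_samples` holds the recent window of readings from each
    sensor; `prev_estimates` holds recent fused estimates. Pooling
    them and taking the median means an artifact confined to one
    sensor (or to a short burst) cannot drag the estimate.
    """
    pooled = np.concatenate([np.ravel(sensor_samples), np.ravel(prev_estimates)])
    return float(np.median(pooled))
```

Because the median ignores outliers rather than averaging them in, a blood-pressure channel reading 180-190 bpm during an arterial-line flush leaves the fused estimate at the rate the other sensors agree on.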
Empirical Distributional Semantics: Methods and Biomedical Applications
Cohen, Trevor; Widdows, Dominic
2009-01-01
Over the past fifteen years, a range of methods have been developed that are able to learn human-like estimates of the semantic relatedness between terms from the way in which these terms are distributed in a corpus of unannotated natural language text. These methods have also been evaluated in a number of applications in the cognitive science, computational linguistics and the information retrieval literatures. In this paper, we review the available methodologies for derivation of semantic relatedness from free text, as well as their evaluation in a variety of biomedical and other applications. Recent methodological developments, and their applicability to several existing applications are also discussed. PMID:19232399
Multi-Objective Memetic Search for Robust Motion and Distortion Correction in Diffusion MRI.
Hering, Jan; Wolf, Ivo; Maier-Hein, Klaus H
2016-10-01
Effective image-based artifact correction is an essential step in the analysis of diffusion MR images. Many current approaches are based on retrospective registration, which becomes challenging in the realm of high b -values and low signal-to-noise ratio, rendering the corresponding correction schemes more and more ineffective. We propose a novel registration scheme based on memetic search optimization that allows for simultaneous exploitation of different signal intensity relationships between the images, leading to more robust registration results. We demonstrate the increased robustness and efficacy of our method on simulated as well as in vivo datasets. In contrast to the state-of-art methods, the median target registration error (TRE) stayed below the voxel size even for high b -values (3000 s ·mm -2 and higher) and low SNR conditions. We also demonstrate the increased precision in diffusion-derived quantities by evaluating Neurite Orientation Dispersion and Density Imaging (NODDI) derived measures on a in vivo dataset with severe motion artifacts. These promising results will potentially inspire further studies on metaheuristic optimization in diffusion MRI artifact correction and image registration in general.
Wagner, Wolfgang; Hansen, Karolina; Kronberger, Nicole
2014-12-01
Growing globalisation of the world draws attention to cultural differences between people from different countries or from different cultures within the countries. Notwithstanding the diversity of people's worldviews, current cross-cultural research still faces the challenge of how to avoid ethnocentrism; comparing Western-derived phenomena with ostensibly equivalent variables across countries, without checking their conceptual equivalence, is clearly problematic. In the present article we argue that simple comparison of measurements (in the quantitative domain) or of semantic interpretations (in the qualitative domain) across cultures easily leads to inadequate results. Questionnaire items or text produced in interviews or via open-ended questions have culturally laden meanings and cannot be mapped onto the same semantic metric. We call the culture-specific space and relationship between variables or meanings a 'cultural metric', that is a set of notions that are inter-related and that mutually specify each other's meaning. We illustrate the problems and their possible solutions with examples from quantitative and qualitative research. The suggested methods make it possible to respect the semantic space of notions in cultures and language groups, so that the resulting similarities or differences between cultures can be better understood and interpreted.
ERIC Educational Resources Information Center
Havas, Viktoria; Rodriguez-Fornells, Antoni; Clahsen, Harald
2012-01-01
This study investigates brain potentials to derived word forms in Spanish. Two experiments were performed on derived nominals that differ in terms of their productivity and semantic properties but are otherwise similar, an acceptability judgment task and a reading experiment using event-related brain potentials (ERPs) in which correctly and…
Statistical Feature Extraction for Artifact Removal from Concurrent fMRI-EEG Recordings
Liu, Zhongming; de Zwart, Jacco A.; van Gelderen, Peter; Kuo, Li-Wei; Duyn, Jeff H.
2011-01-01
We propose a set of algorithms for sequentially removing artifacts related to MRI gradient switching and cardiac pulsations from electroencephalography (EEG) data recorded during functional magnetic resonance imaging (fMRI). Special emphasis is directed upon the use of statistical metrics and methods for the extraction and selection of features that characterize gradient and pulse artifacts. To remove gradient artifacts, we use channel-wise filtering based on singular value decomposition (SVD). To remove pulse artifacts, we first decompose data into temporally independent components and then select a compact cluster of components that possess sustained high mutual information with the electrocardiogram (ECG). After the removal of these components, the time courses of remaining components are filtered by SVD to remove the temporal patterns phase-locked to the cardiac markers derived from the ECG. The filtered component time courses are then inversely transformed into multi-channel EEG time series free of pulse artifacts. Evaluation based on a large set of simultaneous EEG-fMRI data obtained during a variety of behavioral tasks, sensory stimulations and resting conditions showed excellent data quality and robust performance attainable by the proposed methods. These algorithms have been implemented as a Matlab-based toolbox made freely available for public access and research use. PMID:22036675
Ye-Lin, Yiyao; Alberola-Rubio, José; Perales, Alfredo
2014-01-01
Electrohysterography (EHG) is a noninvasive technique for monitoring uterine electrical activity. However, the presence of artifacts in the EHG signal may give rise to erroneous interpretations and make it difficult to extract useful information from these recordings. The aim of this work was to develop an automatic system for segmenting EHG recordings that distinguishes between uterine contractions and artifacts. Firstly, segmentation is performed using an algorithm that generates a TOCO-like signal derived from the EHG and detects windows with significant changes in amplitude. These segments are then classified into two groups: artifacted and nonartifacted signals. To develop a classifier, a total of eleven spectral, temporal, and nonlinear features were calculated from EHG signal windows from 12 women in the first stage of labor that had previously been classified by experts. The combination of characteristics that led to the highest degree of accuracy in detecting artifacts was then determined. The results showed that it is possible to obtain automatic detection of motion artifacts in segmented EHG recordings with a precision of 92.2% using only seven features. The proposed algorithm and classifier together compose a useful tool for analyzing EHG signals and would help to promote clinical applications of this technique. PMID:24523828
Ye-Lin, Yiyao; Garcia-Casado, Javier; Prats-Boluda, Gema; Alberola-Rubio, José; Perales, Alfredo
2014-01-01
Electrohysterography (EHG) is a noninvasive technique for monitoring uterine electrical activity. However, the presence of artifacts in the EHG signal may give rise to erroneous interpretations and make it difficult to extract useful information from these recordings. The aim of this work was to develop an automatic system for segmenting EHG recordings that distinguishes between uterine contractions and artifacts. Firstly, segmentation is performed using an algorithm that generates a TOCO-like signal derived from the EHG and detects windows with significant changes in amplitude. These segments are then classified into two groups: artifacted and nonartifacted signals. To develop a classifier, a total of eleven spectral, temporal, and nonlinear features were calculated from EHG signal windows from 12 women in the first stage of labor that had previously been classified by experts. The combination of characteristics that led to the highest degree of accuracy in detecting artifacts was then determined. The results showed that it is possible to obtain automatic detection of motion artifacts in segmented EHG recordings with a precision of 92.2% using only seven features. The proposed algorithm and classifier together compose a useful tool for analyzing EHG signals and would help to promote clinical applications of this technique.
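The classifier described above uses eleven spectral, temporal, and nonlinear features; a minimal sketch of the feature-then-threshold idea, assuming only three toy features and a single frequency rule (the cutoff value and function names are illustrative, not the paper's actual classifier):

```python
import numpy as np

def window_features(x, fs):
    # a few simple spectral/temporal features of the kind used for EHG windows
    f = np.fft.rfftfreq(len(x), 1 / fs)
    P = np.abs(np.fft.rfft(x - x.mean())) ** 2
    dom_freq = f[np.argmax(P)]                       # dominant frequency (Hz)
    rms = np.sqrt(np.mean(x ** 2))                   # RMS amplitude
    zc = np.mean(np.abs(np.diff(np.sign(x))) > 0)    # zero-crossing rate
    return dom_freq, rms, zc

def is_artifact(x, fs, freq_cutoff=1.0):
    # uterine contractions concentrate energy at very low frequencies, so a
    # window whose dominant frequency exceeds the cutoff is flagged as artifact
    dom_freq, _, _ = window_features(x, fs)
    return dom_freq > freq_cutoff
```

A real classifier would combine several such features, with thresholds learned from the expert-labeled windows.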
Statistical feature extraction for artifact removal from concurrent fMRI-EEG recordings.
Liu, Zhongming; de Zwart, Jacco A; van Gelderen, Peter; Kuo, Li-Wei; Duyn, Jeff H
2012-02-01
We propose a set of algorithms for sequentially removing artifacts related to MRI gradient switching and cardiac pulsations from electroencephalography (EEG) data recorded during functional magnetic resonance imaging (fMRI). Special emphasis is directed upon the use of statistical metrics and methods for the extraction and selection of features that characterize gradient and pulse artifacts. To remove gradient artifacts, we use channel-wise filtering based on singular value decomposition (SVD). To remove pulse artifacts, we first decompose data into temporally independent components and then select a compact cluster of components that possess sustained high mutual information with the electrocardiogram (ECG). After the removal of these components, the time courses of remaining components are filtered by SVD to remove the temporal patterns phase-locked to the cardiac timing markers derived from the ECG. The filtered component time courses are then inversely transformed into multi-channel EEG time series free of pulse artifacts. Evaluation based on a large set of simultaneous EEG-fMRI data obtained during a variety of behavioral tasks, sensory stimulations and resting conditions showed excellent data quality and robust performance attainable with the proposed methods. These algorithms have been implemented as a Matlab-based toolbox made freely available for public access and research use. Published by Elsevier Inc.
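The channel-wise SVD filtering step can be sketched as follows: epochs of one EEG channel, aligned to the repetitive gradient onsets, are stacked into a matrix whose dominant singular components capture the artifact template. This is a minimal illustration under the assumption of perfectly aligned, equal-length epochs, not the toolbox's implementation:

```python
import numpy as np

def remove_gradient_artifact_svd(channel, epoch_len, n_remove=1):
    # stack gradient-aligned epochs of one channel into an (epochs x samples)
    # matrix; the dominant singular components model the repetitive artifact
    n_epochs = len(channel) // epoch_len
    X = channel[: n_epochs * epoch_len].reshape(n_epochs, epoch_len)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_clean = s.copy()
    s_clean[:n_remove] = 0.0            # zero out the artifact components
    X_clean = (U * s_clean) @ Vt        # reconstruct without them
    return X_clean.reshape(-1)
```

Because the gradient artifact is nearly identical across epochs, it concentrates in the first singular component, while neural activity, which varies from epoch to epoch, largely survives the filtering.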
Semantics driven approach for knowledge acquisition from EMRs.
Perera, Sujan; Henson, Cory; Thirunarayan, Krishnaprasad; Sheth, Amit; Nair, Suhas
2014-03-01
Semantic computing technologies have matured to be applicable to many critical domains such as national security, life sciences, and health care. However, the key to their success is the availability of a rich domain knowledge base. The creation and refinement of domain knowledge bases pose difficult challenges. The existing knowledge bases in the health care domain are rich in taxonomic relationships, but they lack nontaxonomic (domain) relationships. In this paper, we describe a semiautomatic technique for enriching existing domain knowledge bases with causal relationships gleaned from Electronic Medical Records (EMR) data. We determine missing causal relationships between domain concepts by validating domain knowledge against EMR data sources and leveraging semantic-based techniques to derive plausible relationships that can rectify knowledge gaps. Our evaluation demonstrates that semantic techniques can be employed to improve the efficiency of knowledge acquisition.
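One way to surface candidate relationships from EMR data is to look for concept pairs that co-occur across records but have no relationship in the knowledge base. The sketch below is a simplified illustration of that idea, not the authors' semiautomatic technique; the function name, record format, and threshold are assumptions:

```python
from collections import Counter
from itertools import combinations

def missing_relation_candidates(records, known_relations, min_count=2):
    # records: list of sets of domain concepts mentioned in individual EMRs.
    # Pairs that frequently co-occur but are absent from the knowledge base
    # are returned as plausible missing relationships for expert review.
    counts = Counter()
    for concepts in records:
        for pair in combinations(sorted(concepts), 2):
            counts[pair] += 1
    return [pair for pair, n in counts.items()
            if n >= min_count and pair not in known_relations]
```

In practice such candidates would be filtered further using semantic techniques (e.g., type constraints from the existing taxonomy) before being added as causal relationships.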
2011-01-01
Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high-throughput genomic hypothesis testing requires both the capability of obtaining semantically relevant experimental data and that of performing relevant statistical testing on the retrieved data. Tissue Microarray (TMA) data are semantically rich and contain many biologically important hypotheses waiting for high-throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) for TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests with the result sets returned by the SPARQL queries. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all the hypotheses were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high-throughput biological hypothesis testing. We believe that preliminary investigation of this kind can be beneficial before performing highly controlled experiments. PMID:21342584
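At the core of step (2) is matching triple patterns against RDF data. A minimal pure-Python illustration of a single SPARQL-like triple pattern (a real system such as Xperanto-RDF would use a SPARQL engine; the `match` function here is an assumption for illustration):

```python
def match(triples, pattern):
    # evaluate one SPARQL-like triple pattern against RDF-style triples;
    # pattern terms starting with '?' are variables, others must match exactly
    results = []
    for s, p, o in triples:
        binding, ok = {}, True
        for term, value in zip(pattern, (s, p, o)):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results
```

The result set (variable bindings) is what would then feed the statistical test in step (3).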
Gruenenfelder, Thomas M; Recchia, Gabriel; Rubin, Tim; Jones, Michael N
2016-08-01
We compared the ability of three different contextual models of lexical semantic memory (BEAGLE, Latent Semantic Analysis, and the Topic model) and of a simple associative model (POC) to predict the properties of semantic networks derived from word association norms. None of the semantic models were able to accurately predict all of the network properties. All three contextual models over-predicted clustering in the norms, whereas the associative model under-predicted clustering. Only a hybrid model that assumed that some of the responses were based on a contextual model and others on an associative network (POC) successfully predicted all of the network properties and predicted a word's top five associates as well as or better than the better of the two constituent models. The results suggest that participants switch between a contextual representation and an associative network when generating free associations. We discuss the role that each of these representations may play in lexical semantic memory. Concordant with recent multicomponent theories of semantic memory, the associative network may encode coordinate relations between concepts (e.g., the relation between pea and bean, or between sparrow and robin), and contextual representations may be used to process information about more abstract concepts. Copyright © 2015 Cognitive Science Society, Inc.
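One of the network properties at issue is clustering; a minimal sketch of the mean local clustering coefficient for an undirected semantic network (adjacency-matrix input is an assumption; the study's networks were derived from association norms):

```python
def clustering_coefficient(adj):
    # mean local clustering coefficient of an undirected graph given as a
    # 0/1 adjacency matrix (list of lists): for each node, the fraction of
    # possible links among its neighbors that actually exist
    n = len(adj)
    coeffs = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]]
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(adj[u][v] for a, u in enumerate(nbrs) for v in nbrs[a + 1:])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / n
```

Over-prediction of clustering by the contextual models means values of this statistic computed on model-generated networks exceeded those of the norm-derived network.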
Mining Hierarchies and Similarity Clusters from Value Set Repositories.
Peterson, Kevin J; Jiang, Guoqian; Brue, Scott M; Shen, Feichen; Liu, Hongfang
2017-01-01
A value set is a collection of permissible values used to describe a specific conceptual domain for a given purpose. By helping to establish a shared semantic understanding across use cases, these artifacts are important enablers of interoperability and data standardization. As the size of repositories cataloging these value sets expand, knowledge management challenges become more pronounced. Specifically, discovering value sets applicable to a given use case may be challenging in a large repository. In this study, we describe methods to extract implicit relationships between value sets, and utilize these relationships to overlay organizational structure onto value set repositories. We successfully extract two different structurings, hierarchy and clustering, and show how tooling can leverage these structures to enable more effective value set discovery.
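Two of the implicit relationships described above, hierarchy and similarity, can be sketched directly from the value sets' code memberships (a simplified illustration, assuming value sets are plain code sets; the paper's methods operate on a full repository):

```python
def implicit_hierarchy(value_sets):
    # value_sets: name -> set of codes. A is an implicit child of B when
    # A's codes are a proper subset of B's codes.
    return [(a, b) for a in value_sets for b in value_sets
            if a != b and value_sets[a] < value_sets[b]]

def jaccard(a, b):
    # overlap measure usable for clustering near-duplicate value sets
    return len(a & b) / len(a | b)
```

Hierarchy edges give a browsable structure; Jaccard similarity above a threshold can group redundant value sets for curation.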
A multilingual gold-standard corpus for biomedical concept recognition: the Mantra GSC
Clematide, Simon; Akhondi, Saber A; van Mulligen, Erik M; Rebholz-Schuhmann, Dietrich
2015-01-01
Objective To create a multilingual gold-standard corpus for biomedical concept recognition. Materials and methods We selected text units from different parallel corpora (Medline abstract titles, drug labels, biomedical patent claims) in English, French, German, Spanish, and Dutch. Three annotators per language independently annotated the biomedical concepts, based on a subset of the Unified Medical Language System and covering a wide range of semantic groups. To reduce the annotation workload, automatically generated preannotations were provided. Individual annotations were automatically harmonized and then adjudicated, and cross-language consistency checks were carried out to arrive at the final annotations. Results The number of final annotations was 5530. Inter-annotator agreement scores indicate good agreement (median F-score 0.79), and are similar to those between individual annotators and the gold standard. The automatically generated harmonized annotation set for each language performed as well as the best annotator for that language. Discussion The use of automatic preannotations, harmonized annotations, and parallel corpora helped to keep the manual annotation efforts manageable. The inter-annotator agreement scores provide a reference standard for gauging the performance of automatic annotation techniques. Conclusion To our knowledge, this is the first gold-standard corpus for biomedical concept recognition in languages other than English. Other distinguishing features are the wide variety of semantic groups that are being covered, and the diversity of text genres that were annotated. PMID:25948699
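The F-score used to quantify inter-annotator agreement can be sketched as exact-match comparison of two annotation sets (the tuple format here is an assumption; agreement conventions vary, e.g. partial-span matching):

```python
def annotation_f_score(pred, gold):
    # exact-match F-score between two annotation sets, each a set of
    # (start, end, concept) tuples, as used to gauge annotator agreement
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Computing this score pairwise between annotators and taking the median yields figures comparable to the 0.79 reported above.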
Architectural approaches for HL7-based health information systems implementation.
López, D M; Blobel, B
2010-01-01
Information systems integration is hard, especially when semantic and business process interoperability requirements need to be met. To succeed, a unified methodology, approaching different aspects of systems architecture such as business, information, computational, engineering and technology viewpoints, has to be considered. The paper contributes an analysis and demonstration of how the HL7 standard set can support health information systems integration. Based on the Health Information Systems Development Framework (HIS-DF), common architectural models for HIS integration are analyzed. The framework is a standard-based, consistent, comprehensive, customizable, scalable methodology that supports the design of semantically interoperable health information systems and components. Three main architectural models for system integration are analyzed: the point-to-point interface, the message server and the mediator models. The point-to-point interface and message server models are completely supported by traditional HL7 version 2 and version 3 messaging. The HL7 v3 standard specification, combined with service-oriented, model-driven approaches provided by HIS-DF, makes the mediator model possible. The different integration scenarios are illustrated by describing a proof-of-concept implementation of an integrated public health surveillance system based on Enterprise Java Beans technology. Selecting the appropriate integration architecture is a fundamental issue in any software development project. HIS-DF provides a unique methodological approach guiding the development of healthcare integration projects. The mediator model, offered by the HIS-DF and supported by HL7 v3 artifacts, is the most promising one, promoting the development of open, reusable, flexible, semantically interoperable, platform-independent, service-oriented and standard-based health information systems.
Natural Language Processing (NLP), Machine Learning (ML), and Semantics in Polar Science
NASA Astrophysics Data System (ADS)
Duerr, R.; Ramdeen, S.
2017-12-01
One of the interesting features of Polar Science is that it historically has been extremely interdisciplinary, encompassing all of the physical and social sciences. Given the ubiquity of specialized terminology in each field, enabling researchers to find, understand, and use all of the heterogeneous data needed for polar research continues to be a bottleneck. Within the informatics community, semantics has been broadly accepted as a solution to these problems, yet progress in developing reusable semantic resources has been slow. The NSF-funded ClearEarth project has been adapting methods and tools from other communities such as biomedicine to the Earth sciences, with the goal of enhancing progress and the rate at which the needed semantic resources can be created. One of the outcomes of the project has been a better understanding of the differences in the way linguists and physical scientists understand disciplinary text. One example of these differences is the tendency for each discipline, and often each disciplinary subfield, to expend effort in creating discipline-specific glossaries where individual terms often consist of more than one word (e.g., first-year sea ice). Often each term in a glossary is imbued with substantial contextual or physical meaning: meanings which are rarely explicitly called out within disciplinary texts, which are therefore not immediately accessible to those outside that discipline or subfield, and which can often be represented semantically. Here we show how recognition of these differences and the use of glossaries can speed up the annotation processes endemic to NLP and enable inter-community recognition and possible reconciliation of terminology differences. A number of processes and tools will be described, as will progress towards semi-automated generation of ontology structures.
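Glossary-driven preannotation of the kind described above amounts to tagging multi-word glossary terms in text. A minimal longest-match-first sketch (whitespace tokenization and the function name are simplifying assumptions; real NLP annotation pipelines are more elaborate):

```python
def annotate_terms(text, glossary):
    # tag multi-word glossary terms in text, preferring the longest match
    # at each position (so "first-year sea ice" beats "sea ice")
    tokens = text.lower().split()
    max_len = max(len(term.split()) for term in glossary)
    spans, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in glossary:
                spans.append((i, i + n, phrase))
                i += n
                break
        else:
            i += 1
    return spans
```

Such automatic pre-tags can then be corrected by human annotators, which is typically much faster than annotating from scratch.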
Roles of frontal and temporal regions in reinterpreting semantically ambiguous sentences
Vitello, Sylvia; Warren, Jane E.; Devlin, Joseph T.; Rodd, Jennifer M.
2014-01-01
Semantic ambiguity resolution is an essential and frequent part of speech comprehension because many words map onto multiple meanings (e.g., “bark,” “bank”). Neuroimaging research highlights the importance of the left inferior frontal gyrus (LIFG) and the left posterior temporal cortex in this process but the roles they serve in ambiguity resolution are uncertain. One possibility is that both regions are engaged in the processes of semantic reinterpretation that follows incorrect interpretation of an ambiguous word. Here we used fMRI to investigate this hypothesis. 20 native British English monolinguals were scanned whilst listening to sentences that contained an ambiguous word. To induce semantic reinterpretation, the disambiguating information was presented after the ambiguous word and delayed until the end of the sentence (e.g., “the teacher explained that the BARK was going to be very damp”). These sentences were compared to well-matched unambiguous sentences. Supporting the reinterpretation hypothesis, these ambiguous sentences produced more activation in both the LIFG and the left posterior inferior temporal cortex. Importantly, all but one subject showed ambiguity-related peaks within both regions, demonstrating that the group-level results were driven by high inter-subject consistency. Further support came from the finding that activation in both regions was modulated by meaning dominance. Specifically, sentences containing biased ambiguous words, which have one more dominant meaning, produced greater activation than those with balanced ambiguous words, which have two equally frequent meanings. Because the context always supported the less frequent meaning, the biased words require reinterpretation more often than balanced words. This is the first evidence of dominance effects in the spoken modality and provides strong support that frontal and temporal regions support the updating of semantic representations during speech comprehension. PMID:25120445
Tenenbaum, Jessica D.; Whetzel, Patricia L.; Anderson, Kent; Borromeo, Charles D.; Dinov, Ivo D.; Gabriel, Davera; Kirschner, Beth; Mirel, Barbara; Morris, Tim; Noy, Natasha; Nyulas, Csongor; Rubenson, David; Saxman, Paul R.; Singh, Harpreet; Whelan, Nancy; Wright, Zach; Athey, Brian D.; Becich, Michael J.; Ginsburg, Geoffrey S.; Musen, Mark A.; Smith, Kevin A.; Tarantal, Alice F.; Rubin, Daniel L; Lyster, Peter
2010-01-01
The biomedical research community relies on a diverse set of resources, both within their own institutions and at other research centers. In addition, an increasing number of shared electronic resources have been developed. Without effective means to locate and query these resources, it is challenging, if not impossible, for investigators to be aware of the myriad resources available, or to effectively perform resource discovery when the need arises. In this paper, we describe the development and use of the Biomedical Resource Ontology (BRO) to enable semantic annotation and discovery of biomedical resources. We also describe the Resource Discovery System (RDS) which is a federated, inter-institutional pilot project that uses the BRO to facilitate resource discovery on the Internet. Through the RDS framework and its associated Biositemaps infrastructure, the BRO facilitates semantic search and discovery of biomedical resources, breaking down barriers and streamlining scientific research that will improve human health. PMID:20955817
Structural Group-based Auditing of Missing Hierarchical Relationships in UMLS
Chen, Yan; Gu, Huanying(Helen); Perl, Yehoshua; Geller, James
2009-01-01
The Metathesaurus of the UMLS was created by integrating various source terminologies. The inter-concept relationships were either integrated into the UMLS from the source terminologies or specially generated. Due to the extensive size and inherent complexity of the Metathesaurus, the accidental omission of some hierarchical relationships was inevitable. We present a recursive procedure which allows a human expert, with the support of an algorithm, to locate missing hierarchical relationships. The procedure starts with a group of concepts with exactly the same (correct) semantic type assignments. It then partitions the concepts, based on child-of hierarchical relationships, into smaller, singly rooted, hierarchically connected subgroups. The auditor only needs to focus on the subgroups with very few concepts and their concepts with semantic type reassignments. The procedure was evaluated by comparing it with a comprehensive manual audit and exhibited perfect error recall. PMID:18824248
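The partitioning step can be sketched as grouping each concept with the root it reaches by following child-of links (a simplified illustration assuming acyclic, single-parent links within the group; the data structures are assumptions, not the authors' implementation):

```python
def partition_by_root(concepts, parent_of):
    # parent_of maps a concept to its parent via child-of links within the
    # group; each concept joins the subgroup of the root it reaches
    def root(c):
        while c in parent_of:
            c = parent_of[c]
        return c

    groups = {}
    for c in concepts:
        groups.setdefault(root(c), []).append(c)
    return groups
```

Small resulting subgroups are exactly the ones the auditor inspects for missing hierarchical relationships.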
NASA Technical Reports Server (NTRS)
Wiseman, S.M.; Arvidson, R.E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.;
2014-01-01
The empirical volcano-scan atmospheric correction is widely applied to Martian near infrared CRISM and OMEGA spectra between 1000 and 2600 nanometers to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the Martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nanometers, is caused by the inaccurate assumption that absorption coefficients of CO2 in the Martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.
NASA Astrophysics Data System (ADS)
Wiseman, S. M.; Arvidson, R. E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.; McGuire, P. C.
2016-05-01
The empirical 'volcano-scan' atmospheric correction is widely applied to martian near infrared CRISM and OMEGA spectra between ∼1000 and ∼2600 nm to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano-scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nm, is caused by the inaccurate assumption that absorption coefficients of CO2 in the martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.
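The Beer-Lambert derivation of the transmission spectrum, and the division step of the correction itself, can be sketched as follows (function names and the scalar path-length model are simplifying assumptions; the abstract's point is precisely that the constant-absorption-coefficient assumption behind this model introduces artifacts):

```python
import numpy as np

def transmission_spectrum(spec_base, spec_summit, path_base, path_summit):
    # Beer-Lambert: I = I0 * T**path, so the ratio of two observations of
    # similar terrain isolates T raised to the path-length difference
    return (spec_base / spec_summit) ** (1.0 / (path_base - path_summit))

def volcano_scan_correct(spec, trans, scale):
    # divide by the transmission spectrum raised to a scene-specific scale
    return spec / trans ** scale
```

When the true CO2 absorption coefficients vary with column density, a single exponent no longer models the gas bands, which is the origin of the bowl-shaped residual near 2000 nm.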
Signal processing using sparse derivatives with applications to chromatograms and ECG
NASA Astrophysics Data System (ADS)
Ning, Xiaoran
In this thesis, we investigate sparsity in the derivative domain. In particular, we focus on the type of signals that possess up to Mth-order (M > 0) sparse derivatives. Effort is put into formulating proper penalty functions and optimization problems to capture properties related to sparse derivatives, and into searching for fast, computationally efficient solvers. The effectiveness of these algorithms is demonstrated in two real-world applications. In the first application, we provide an algorithm which jointly addresses the problems of chromatogram baseline correction and noise reduction. The series of chromatogram peaks is modeled as sparse with sparse derivatives, and the baseline is modeled as a low-pass signal. A convex optimization problem is formulated so as to encapsulate these non-parametric models. To account for the positivity of chromatogram peaks, an asymmetric penalty function is utilized alongside symmetric penalty functions. A robust, computationally efficient, iterative algorithm is developed that is guaranteed to converge to the unique optimal solution. The approach, termed Baseline Estimation And Denoising with Sparsity (BEADS), is evaluated and compared with two state-of-the-art methods using both simulated and real chromatogram data. Promising results are obtained. In the second application, a novel electrocardiography (ECG) enhancement algorithm is designed, also based on sparse derivatives. In real medical environments, ECG signals are often contaminated by various kinds of noise or artifacts, for example, morphological changes due to motion artifact and non-stationary noise due to muscular contraction (EMG). Some of these contaminations severely affect the usefulness of ECG signals, especially when computer-aided algorithms are utilized.
By solving the proposed convex l1 optimization problem, artifacts are reduced by modeling the clean ECG signal as a sum of two signals whose second- and third-order derivatives (differences), respectively, are sparse. Finally, the algorithm is applied to a QRS detection system and validated using the MIT-BIH Arrhythmia database (109,452 annotations), resulting in a sensitivity of Se = 99.87% and a positive predictivity of +P = 99.88%.
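The core intuition behind sparse-derivative models can be shown with a deliberately crude sketch: soft-threshold the first difference so that only large (signal-driven) changes survive, then reintegrate. This is not BEADS or the thesis's convex solver, just an illustration of why penalizing the l1 norm of derivatives flattens noise while preserving jumps:

```python
import numpy as np

def soft(x, t):
    # soft-thresholding operator, the proximal map of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_derivative_denoise(y, lam):
    # crude sketch: sparsify the first difference by soft-thresholding,
    # then reintegrate; piecewise-constant structure survives while small
    # noise-driven fluctuations are flattened
    d = soft(np.diff(y), lam)
    return y[0] + np.concatenate(([0.0], np.cumsum(d)))
```

A proper formulation instead minimizes 0.5*||x - y||^2 + lam*||Dx||_1 (with D a difference operator, possibly of several orders), which avoids the drift and step-shrinkage this shortcut can introduce.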
NASA Astrophysics Data System (ADS)
Valderrama, Joaquin T.; de la Torre, Angel; Van Dun, Bram
2018-02-01
Objective. Artifact reduction in electroencephalogram (EEG) signals is usually necessary to carry out data analysis appropriately. Despite the large number of denoising techniques available with a multichannel setup, there is a lack of efficient algorithms that remove (not only detect) blink-artifacts from a single channel EEG, which is of interest in many clinical and research applications. This paper describes and evaluates the iterative template matching and suppression (ITMS), a new method proposed for detecting and suppressing the artifact associated with the blink activity from a single channel EEG. Approach. The approach of ITMS consists of (a) an iterative process in which blink-events are detected and the blink-artifact waveform of the analyzed subject is estimated, (b) generation of a signal modeling the blink-artifact, and (c) suppression of this signal from the raw EEG. The performance of ITMS is compared with the multi-window summation of derivatives within a window (MSDW) technique using both synthesized and real EEG data. Main results. Results suggest that ITMS presents an adequate performance in detecting and suppressing blink-artifacts from a single channel EEG. When applied to the analysis of cortical auditory evoked potentials (CAEPs), ITMS provides a significant quality improvement in the resulting responses, i.e. in a cohort of 30 adults, the mean correlation coefficient improved from 0.37 to 0.65 when the blink-artifacts were detected and suppressed by ITMS. Significance. ITMS is an efficient solution to the problem of denoising blink-artifacts in single-channel EEG applications, both in clinical and research fields. The proposed ITMS algorithm is stable; automatic, since it does not require human intervention; low-invasive, because the EEG segments not contaminated by blink-artifacts remain unaltered; and easy to implement, as can be observed in the Matlab script implementing the algorithm, provided as supporting material.
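The template-and-subtract core of steps (a)-(c) can be sketched in a few lines, assuming blink onsets are already detected and epochs do not overlap (the iteration that refines detection and the template jointly, which is the "iterative" part of ITMS, is omitted):

```python
import numpy as np

def remove_blinks(eeg, blink_onsets, half_width):
    # estimate the subject-specific blink template by averaging epochs
    # centered on detected blink events, then subtract it at each event;
    # samples away from blinks are left untouched (low-invasive)
    epochs = np.array([eeg[o - half_width:o + half_width]
                       for o in blink_onsets])
    template = epochs.mean(axis=0)
    clean = eeg.copy()
    for o in blink_onsets:
        clean[o - half_width:o + half_width] -= template
    return clean, template
```

In the full method, the cleaned signal feeds back into detection, so the template and the event list improve over iterations.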
Rodway, Paul; Kirkham, Julie; Schepman, Astrid; Lambert, Jordana; Locke, Anastasia
2016-01-01
Understanding how aesthetic preferences are shared among individuals, and its developmental time course, is a fundamental question in aesthetics. It has been shown that semantic associations, in response to representational artworks, overlap more strongly among individuals than those generated by abstract artworks and that the emotional valence of the associations also overlaps more for representational artworks. This valence response may be a key driver in aesthetic appreciation. The current study tested predictions derived from the semantic association account in a developmental context. Twenty 4-, 6-, 8- and 10-year-old children (n = 80) were shown 20 artworks (10 representational, 10 abstract) and were asked to rate each artwork and to explain their decision. Cross-observer agreement in aesthetic preferences increased with age from 4–8 years for both abstract and representational art. However, after age 6 the level of shared appreciation for representational and abstract artworks diverged, with significantly higher levels of agreement for representational than abstract artworks at age 8 and 10. The most common justifications for representational artworks involved subject matter, while for abstract artworks formal artistic properties and color were the most commonly used justifications. Representational artwork also showed a significantly higher proportion of associations and emotional responses than abstract artworks. In line with predictions from developmental cognitive neuroscience, references to the artist as an agent increased between ages 4 and 6 and again between ages 6 and 8, following the development of Theory of Mind. The findings support the view that increased experience with representational content during the life span reduces inter-individual variation in aesthetic appreciation and increases shared preferences. In addition, brain and cognitive development appear to impact on art appreciation at milestone ages. PMID:26903834
Solid-state circularly polarized luminescence measurements: Theoretical analysis
NASA Astrophysics Data System (ADS)
Harada, Takunori; Kuroda, Reiko; Moriyama, Hiroshi
2012-03-01
Because a circularly polarized luminescence (CPL) spectrophotometer is a polarization-modulation instrument, artifacts resulting from optical anisotropies that are unique to the solid state necessarily accompany CPL signals. A set of procedures for obtaining the true CPL signal has been derived based on the Stokes-Mueller matrix method. Experiments on chiral fluorophore single crystals of benzil with larger and smaller optical anisotropies have shown that our method can eliminate parasitic artifacts to obtain the true CPL signal, even in cases where optical anisotropies are substantial.
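The core of the Stokes-Mueller approach is that every optical element is a 4x4 matrix acting on a 4-component Stokes vector. As a minimal illustration (not the authors' artifact-correction procedure), the sketch below shows how a linearly dichroic element destroys the circular-polarization component V, the kind of state mixing that lets solid-state anisotropies masquerade as CPL signal:

```python
import numpy as np

# Stokes vector: [I, Q, U, V]; V carries the circular-polarization content.
# Right-circularly polarized light of unit intensity:
s_circular = np.array([1.0, 0.0, 0.0, 1.0])

def mueller_linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1,   c,   s,   0],
        [c, c*c, c*s,   0],
        [s, c*s, s*s,   0],
        [0,   0,   0,   0],
    ])

# After a linearly dichroic element, V = 0: the detector can no longer
# distinguish the original circular polarization from unpolarized light.
s_out = mueller_linear_polarizer(0.0) @ s_circular
```

The paper's contribution is inverting chains of such matrices to recover the true CPL term; the single-element example only shows why the inversion is needed.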
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, H; Kong, V; Jin, J
Purpose: A synchronized moving grid (SMOG) has been proposed to reduce scatter and lag artifacts in cone beam computed tomography (CBCT). However, information is missing in each projection because certain areas are blocked by the grid. A previous solution to this issue is acquiring 2 complementary projections at each position, which increases scanning time. This study reports our first results using an inter-projection sensor fusion (IPSF) method to estimate the missing projection data in our prototype SMOG-based CBCT system. Methods: An in-house SMOG assembly with a 1:1 grid of 3 mm gap has been installed in a CBCT benchtop. The grid moves back and forth with a 3-mm amplitude at up to 20-Hz frequency. A control program in LabView synchronizes the grid motion with the platform rotation and x-ray firing so that the grid patterns for any two neighboring projections are complementary. A Catphan was scanned with 360 projections. After scatter correction, the IPSF algorithm was applied to estimate the missing signal for each projection using the information from the 2 neighboring projections. The Feldkamp-Davis-Kress (FDK) algorithm was applied to reconstruct CBCT images. The CBCTs were compared to those reconstructed using normal projections without applying the SMOG system. Results: The SMOG-IPSF method may reduce image dose by half because the grid blocks half of the radiation. The method almost completely removed scatter-related artifacts, such as cupping artifacts. The evaluation of line pair patterns in the CatPhan suggested that the spatial resolution degradation was minimal. Conclusion: The SMOG-IPSF method is promising for reducing scatter artifacts and improving image quality while reducing radiation dose.
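The IPSF idea of filling grid-blocked regions from the complementary neighboring projections can be sketched as follows. This is a toy stand-in, not the authors' estimator: the function name, the row-blocking geometry, and the simple two-neighbor averaging rule are all assumptions for illustration.

```python
import numpy as np

def ipsf_fill(prev_proj, cur_proj, next_proj, blocked_rows):
    """Estimate grid-blocked rows of the current projection by averaging
    the two neighboring projections, whose complementary grid pattern
    leaves those rows exposed (a crude stand-in for the IPSF estimator)."""
    filled = cur_proj.copy()
    filled[blocked_rows, :] = 0.5 * (prev_proj[blocked_rows, :] +
                                     next_proj[blocked_rows, :])
    return filled

# Toy example: a uniform object; rows 0 and 2 are blocked in the current view.
prev_p = np.full((4, 5), 1.0)
next_p = np.full((4, 5), 1.0)
cur_p = np.full((4, 5), 1.0)
cur_p[[0, 2], :] = 0.0          # signal lost behind the grid
restored = ipsf_fill(prev_p, cur_p, next_p, blocked_rows=[0, 2])
```

In the real system the neighboring projections are at slightly different gantry angles, so a practical estimator must account for that angular disparity rather than average naively.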
Martínez-Costa, Catalina; Cornet, Ronald; Karlsson, Daniel; Schulz, Stefan; Kalra, Dipak
2015-05-01
To improve semantic interoperability of electronic health records (EHRs) by ontology-based mediation across syntactically heterogeneous representations of the same or similar clinical information. Our approach is based on a semantic layer that consists of: (1) a set of ontologies supported by (2) a set of semantic patterns. The first aspect of the semantic layer helps standardize the clinical information modeling task and the second shields modelers from the complexity of ontology modeling. We applied this approach to heterogeneous representations of an excerpt of a heart failure summary. Using a set of finite top-level patterns to derive semantic patterns, we demonstrate that those patterns, or compositions thereof, can be used to represent information from clinical models. Homogeneous querying of the same or similar information, when represented according to heterogeneous clinical models, is feasible. Our approach focuses on the meaning embedded in EHRs, regardless of their structure. This complex task requires a clear ontological commitment (ie, agreement to consistently use the shared vocabulary within some context), together with formalization rules. These requirements are supported by semantic patterns. Other potential uses of this approach, such as clinical models validation, require further investigation. We show how an ontology-based representation of a clinical summary, guided by semantic patterns, allows homogeneous querying of heterogeneous information structures. Whether there are a finite number of top-level patterns is an open question. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
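The mediation idea, mapping syntactically different EHR structures onto one shared semantic pattern so that a single query works over both, can be illustrated with a deliberately tiny sketch. All field names and structures below are invented for the example; they are not the paper's patterns or any real EHR schema.

```python
# Two syntactically heterogeneous representations of the same clinical fact.
record_a = {"diagnosis": {"code": "I50", "label": "heart failure"}}
record_b = {"problem_list": [{"term": "heart failure", "icd10": "I50"}]}

def to_pattern(record):
    """Map either source structure onto one shared semantic pattern:
    (condition, code). A stand-in for ontology-guided semantic patterns."""
    if "diagnosis" in record:
        d = record["diagnosis"]
        return {"condition": d["label"], "code": d["code"]}
    if "problem_list" in record:
        p = record["problem_list"][0]
        return {"condition": p["term"], "code": p["icd10"]}
    raise ValueError("unknown source structure")

def query_condition_code(record):
    """Homogeneous query: works regardless of the source layout."""
    return to_pattern(record)["code"]
```

The paper's semantic layer does this with ontologies and formal patterns rather than hand-written mappings, but the payoff is the same: one query over heterogeneous structures.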
An Enriched Unified Medical Language System Semantic Network with a Multiple Subsumption Hierarchy
Zhang, Li; Perl, Yehoshua; Halper, Michael; Geller, James; Cimino, James J.
2004-01-01
Objective: The Unified Medical Language System's (UMLS's) Semantic Network's (SN's) two-tree structure is restrictive because it does not allow a semantic type to be a specialization of several other semantic types. In this article, the SN is expanded into a multiple subsumption structure with a directed acyclic graph (DAG) IS-A hierarchy, allowing a semantic type to have multiple parents. New viable IS-A links are added as warranted. Design: Two methodologies are presented to identify and add new viable IS-A links. The first methodology is based on imposing the characteristic of connectivity on a previously presented partition of the SN. Four transformations are provided to find viable IS-A links in the process of converting the partition's disconnected groups into connected ones. The second methodology identifies new IS-A links through a string matching process involving names and definitions of various semantic types in the SN. A domain expert is needed to review all the results to determine the validity of the new IS-A links. Results: Nineteen new IS-A links are added to the SN, and four new semantic types are also created to support the multiple subsumption framework. The resulting network, called the Enriched Semantic Network (ESN), exhibits a DAG-structured hierarchy. A partition of the ESN containing 19 connected groups is also derived. Conclusion: The ESN is an expanded abstraction of the UMLS compared with the original SN. Its multiple subsumption hierarchy can accommodate semantic types with multiple parents. Its representation thus provides direct access to a broader range of subsumption knowledge. PMID:14764611
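The structural change described above, from a two-tree hierarchy to a DAG in which a semantic type may have several parents, can be sketched directly. The type names below are illustrative only (loosely inspired by UMLS naming), and the ancestor traversal shows why a DAG gives direct access to subsumption knowledge along every IS-A path:

```python
# IS-A hierarchy as a DAG: each semantic type may list several parents,
# which a strict two-tree structure (like the original SN) cannot express.
isa_parents = {
    "Entity": [],
    "Physical Object": ["Entity"],
    "Conceptual Entity": ["Entity"],
    "Anatomical Structure": ["Physical Object"],
    # multiple subsumption: one type, two parents
    "Gene or Genome": ["Anatomical Structure", "Conceptual Entity"],
}

def ancestors(node, hierarchy):
    """All types that subsume `node`, following every IS-A path in the DAG."""
    seen = set()
    stack = list(hierarchy[node])
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(hierarchy[parent])
    return seen
```

In a tree, "Gene or Genome" would have to pick one parent and lose the other subsumption relationship; the DAG keeps both.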
Semantic Web-based Vocabulary Broker for Open Science
NASA Astrophysics Data System (ADS)
Ritschel, B.; Neher, G.; Iyemori, T.; Murayama, Y.; Kondo, Y.; Koyama, Y.; King, T. A.; Galkin, I. A.; Fung, S. F.; Wharton, S.; Cecconi, B.
2016-12-01
Keyword vocabularies are used to tag and to identify data of science data repositories. Such vocabularies consist of controlled terms and the appropriate concepts, such as GCMD (Global Change Master Directory) keywords or the ESPAS (Near Earth Space Data Infrastructure for e-Science) keyword ontology. The Semantic Web-based mash-up of domain-specific, cross- or even trans-domain vocabularies provides unique capabilities in the network of appropriate data resources. Based on a collaboration between GFZ (German Research Centre for Geosciences), the FHP (University of Applied Sciences Potsdam), the WDC (World Data Center) for Geomagnetism Kyoto and the NICT (National Institute of Information and Communications Technology, Tokyo), we developed the concept of a vocabulary broker for inter- and trans-disciplinary data detection and integration. Our prototype of the Semantic Web-based vocabulary broker uses OSF (Open Semantic Framework) for the mash-up of geo and space research vocabularies, such as GCMD keywords, the ESPAS keyword ontology, and the SPASE (Space Physics Archive Search and Extract) keyword vocabulary. The vocabulary broker starts the search with "free" keywords or terms of a specific vocabulary scheme. The vocabulary broker almost automatically connects the different science data repositories which are tagged by terms of the aforementioned vocabularies. Therefore the mash-up of the SKOS (Simple Knowledge Organization System) based vocabularies with appropriate metadata from different domains can be realized by addressing LOD (Linked Open Data) resources or virtual SPARQL endpoints which map relational structures into the RDF (Resource Description Framework) format. In order to demonstrate such a mash-up approach in real life, we installed and use a D2RQ (Database to RDF Query) server for the integration of IUGONET (Inter-university Upper atmosphere Global Observation NETwork) data, which are managed by a relational database. The OSF-based vocabulary broker and the D2RQ platform are installed on virtual Linux machines at Kyoto University. The vocabulary broker meets the standard of a main component of the WDS (World Data System) knowledge network. The Web address of the vocabulary broker is http://wdcosf.kugi.kyoto-u.ac.jp
Grethe, Jeffrey S; Ross, Edward; Little, David; Sanders, Brian; Gupta, Amarnath; Astakhov, Vadim
2009-01-01
This paper presents current progress in the development of a semantic data integration environment that is part of the Biomedical Informatics Research Network (BIRN; http://www.nbirn.net) project. BIRN is sponsored by the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH). A goal is the development of a cyberinfrastructure for biomedical research that supports advanced data acquisition, data storage, data management, data integration, data mining, data visualization, and other computing and information processing services over the Internet. Each participating institution maintains storage of their experimental or computationally derived data. A mediator-based data integration system performs semantic integration over the databases to enable researchers to perform analyses based on larger and broader datasets than would be available from any single institution's data. This paper describes a recent revision of the system architecture, implementation, and capabilities of the semantically based data integration environment for BIRN.
Park, Hyojung; Shin, Sunhwa
2015-12-01
The purpose of this study was to develop and test a semantic differential scale of sexual attitudes for older people in Korea. The scale was based on items derived from a literature review and focus group interviews. A methodological study was used to test the reliability and validity of the instrument. A total of 368 older men and women were recruited to complete the semantic differential scale. Fifteen pairs of adjective ratings were extracted through factor analysis. Total variance explained was 63.40%. To test for construct validity, group comparisons were implemented. The total score of sexual attitudes showed significant differences depending on gender and availability of sexual activity. Cronbach's alpha coefficient for internal consistency was 0.96. The findings of this study demonstrate that the semantic differential scale of sexual attitude is a reliable and valid instrument. © 2015 Wiley Publishing Asia Pty Ltd.
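The internal-consistency figure reported above (Cronbach's alpha = 0.96) follows a standard formula that is easy to compute from an item-score matrix. A minimal implementation, with invented example data (not the study's ratings):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Sanity check: three perfectly parallel items give alpha = 1.
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
alpha_perfect = cronbach_alpha(np.column_stack([base, base, base]))
```

For a real semantic differential scale, the rows would be the 368 respondents and the columns the 15 adjective-pair items.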
Kintsch, Walter; Mangalath, Praful
2011-04-01
We argue that word meanings are not stored in a mental lexicon but are generated in the context of working memory from long-term memory traces that record our experience with words. Current statistical models of semantics, such as latent semantic analysis and the Topic model, describe what is stored in long-term memory. The CI-2 model describes how this information is used to construct sentence meanings. This model is a dual-memory model, in that it distinguishes between a gist level and an explicit level. It also incorporates syntactic information about how words are used, derived from dependency grammar. The construction of meaning is conceptualized as feature sampling from the explicit memory traces, with the constraint that the sampling must be contextually relevant both semantically and syntactically. Semantic relevance is achieved by sampling topically relevant features; local syntactic constraints as expressed by dependency relations ensure syntactic relevance. Copyright © 2010 Cognitive Science Society, Inc.
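Models like latent semantic analysis operationalize "semantic relevance" as proximity between distributional vectors. The toy sketch below (invented vectors, not the CI-2 model) shows the basic operation such models rely on: cosine similarity selecting the contextually relevant sense of an ambiguous word.

```python
import numpy as np

# Toy distributional vectors (rows: words, columns: contextual features).
# In LSA these would come from an SVD of a word-by-document matrix.
vectors = {
    "bank_money": np.array([4.0, 1.0, 0.0]),
    "loan":       np.array([5.0, 0.5, 0.0]),
    "river":      np.array([0.0, 1.0, 5.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Contextual constraint in miniature: features of the financial sense of
# "bank" are close to "loan" and far from "river", so sampling features
# weighted by similarity to the context favors the relevant sense.
sim_loan = cosine(vectors["bank_money"], vectors["loan"])
sim_river = cosine(vectors["bank_money"], vectors["river"])
```

The CI-2 model adds the syntactic side (dependency relations) on top of this semantic proximity, which the sketch does not attempt to capture.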
Semantically induced distortions of visual awareness in a patient with Balint's syndrome.
Soto, David; Humphreys, Glyn W
2009-02-01
We present data indicating that visual awareness for a basic perceptual feature (colour) can be influenced by the relation between the feature and the semantic properties of the stimulus. We examined semantic interference from the meaning of a colour word ("RED") on simple colour (ink related) detection responses in a patient with simultanagnosia due to bilateral parietal lesions. We found that colour detection was influenced by the congruency between the meaning of the word and the relevant ink colour, with impaired performance when the word and the colour mismatched (on incongruent trials). This result held even when remote associations between meaning and colour were used (i.e. the word "PEA" influenced detection of the ink colour red). The results are consistent with a late locus of conscious visual experience that is derived at post-semantic levels. The implications for the understanding of the role of parietal cortex in object binding and visual awareness are discussed.
Memory for pictures and words as a function of level of processing: Depth or dual coding?
D'Agostino, P R; O'Neill, B J; Paivio, A
1977-03-01
The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural < phonemic < semantic).
Muscle and eye movement artifact removal prior to EEG source localization.
Hallez, Hans; Vergult, Anneleen; Phlypo, Ronald; Van Hese, Peter; De Clercq, Wim; D'Asseler, Yves; Van de Walle, Rik; Vanrumste, Bart; Van Paesschen, Wim; Van Huffel, Sabine; Lemahieu, Ignace
2006-01-01
Muscle and eye movement artifacts are very prominent in the ictal EEG of patients suffering from epilepsy, making the dipole localization of ictal activity very unreliable. Recently, two techniques (BSS-CCA and pSVD) were developed to remove those artifacts. The purpose of this study is to assess whether the removal of muscle and eye movement artifacts improves EEG dipole source localization. We used a total of 8 EEG fragments, each from a different patient, first unfiltered, then filtered by BSS-CCA and pSVD. In both the filtered and unfiltered EEG fragments we estimated multiple dipoles using RAP-MUSIC. The resulting dipoles were subjected to a K-means clustering algorithm to extract the most prominent cluster. We found that the removal of muscle and eye artifacts results in tighter and clearer dipole clusters. Furthermore, we found that the localization of the filtered EEG corresponded with the localization derived from the ictal SPECT in 7 of the 8 patients. We can therefore conclude that BSS-CCA and pSVD improve the localization of ictal activity, making it more reliable for the presurgical evaluation of the patient.
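The clustering step, grouping estimated dipoles and keeping the most populated cluster, can be sketched with a plain K-means over 3-D dipole positions. The coordinates below are invented, and the deterministic "farthest point" initialization is a simplification chosen for the sketch, not the study's procedure:

```python
import numpy as np

def kmeans_two(points, iters=20):
    """Two-cluster K-means with deterministic farthest-point initialization."""
    far = int(np.argmax(np.linalg.norm(points - points[0], axis=1)))
    centroids = np.stack([points[0], points[far]]).astype(float)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(2):
            centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Toy dipole positions (mm): a tight "ictal" cluster plus two outliers.
dipoles = np.array([[30.0, -20.0, 40.0], [31.0, -21.0, 41.0],
                    [29.5, -19.5, 40.5], [32.0, -20.5, 39.5],
                    [-60.0, 50.0, 0.0], [-58.0, 52.0, 2.0]])
cents, labs = kmeans_two(dipoles)
prominent = int(np.bincount(labs).argmax())     # most populated cluster
cluster_size = int((labs == prominent).sum())
```

The intuition behind the study's finding is visible even here: after artifact removal, dipoles concentrate into one tight, dominant cluster rather than scattering.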
Mauldin, F William; Owen, Kevin; Tiouririne, Mohamed; Hossack, John A
2012-06-01
The portability, low cost, and non-ionizing radiation associated with medical ultrasound suggest that it has potential as a superior alternative to X-ray for bone imaging. However, when conventional ultrasound imaging systems are used for bone imaging, clinical acceptance is frequently limited by artifacts derived from reflections occurring away from the main axis of the acoustic beam. In this paper, the physical source of off-axis artifacts and the effect of transducer geometry on these artifacts are investigated in simulation and experimental studies. In agreement with diffraction theory, the sampled linear-array geometry possessed increased off-axis energy compared with single-element piston geometry, and therefore, exhibited greater levels of artifact signal. Simulation and experimental results demonstrated that the linear-array geometry exhibited increased artifact signal when the center frequency increased, when energy off-axis to the main acoustic beam (i.e., grating lobes) was perpendicularly incident upon off-axis surfaces, and when off-axis surfaces were specular rather than diffusive. The simulation model used to simulate specular reflections was validated experimentally and a correlation coefficient of 0.97 between experimental and simulated peak reflection contrast was observed. In ex vivo experiments, the piston geometry yielded 4 and 6.2 dB average contrast improvement compared with the linear array when imaging the spinous process and interlaminar space of an animal spine, respectively. This work indicates that off-axis reflections are a major source of ultrasound image artifacts, particularly in environments comprising specular reflecting (i.e., bone or bone-like) objects. Transducer geometries with reduced sensitivity to off-axis surface reflections, such as a piston transducer geometry, yield significant reductions in image artifact.
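The contrast improvements quoted above are in decibels, which for amplitude data is 20 times the base-10 log of the RMS ratio between two regions. A minimal computation with invented amplitudes (not the paper's data):

```python
import numpy as np

def contrast_db(signal_region, background_region):
    """Image contrast between two regions, in dB (20*log10 of the RMS ratio)."""
    rms = lambda x: np.sqrt(np.mean(np.square(np.asarray(x, dtype=float))))
    return 20.0 * np.log10(rms(signal_region) / rms(background_region))

bone = np.full(100, 10.0)      # hypothetical envelope amplitudes over bone
tissue = np.full(100, 5.0)     # hypothetical background amplitudes
c = contrast_db(bone, tissue)  # amplitude ratio of 2 -> about 6.02 dB
```

So the reported 6.2 dB improvement for the interlaminar space corresponds to roughly doubling the amplitude ratio between target and artifact signal.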
Protein-protein interaction inference based on semantic similarity of Gene Ontology terms.
Zhang, Shu-Bo; Tang, Qiang-Rong
2016-07-21
Identifying protein-protein interactions is important in molecular biology. Experimental methods to this issue have their limitations, and computational approaches have attracted increasing attention from the biological community. The semantic similarity derived from Gene Ontology (GO) annotation has been regarded as one of the most powerful indicators for protein interaction. However, conventional methods based on GO similarity fail to take advantage of the specificity of GO terms in the ontology graph. We proposed a GO-based method to predict protein-protein interaction by integrating different kinds of similarity measures derived from the intrinsic structure of the GO graph. We extended five existing methods to derive semantic similarity measures from the descending part of two GO terms in the GO graph, then adopted a feature integration strategy that combines both the ascending and descending similarity scores derived from the three sub-ontologies to construct various kinds of features to characterize each protein pair. Support vector machines (SVM) were employed as discriminative classifiers, and five-fold cross-validation experiments were conducted on both human and yeast protein-protein interaction datasets to evaluate the performance of different kinds of integrated features. The experimental results suggest that the best performance is achieved by the feature combining information from both the ascending and descending parts of the three ontologies. Our method is appealing for effective prediction of protein-protein interaction. Copyright © 2016 Elsevier Ltd. All rights reserved.
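The idea of combining "ascending" (common-ancestor) and "descending" (common-descendant) information can be shown on a toy GO-like DAG. The term names, the Jaccard overlap measure, and the 50/50 weighting are all simplifications invented for this sketch; the paper extends five established similarity measures instead.

```python
# Toy GO-like DAG: term -> list of parents.
parents = {
    "root": [],
    "metabolic": ["root"],
    "catabolic": ["metabolic"],
    "anabolic": ["metabolic"],
    "glycolysis": ["catabolic"],
}

def up_set(term):
    """The term plus all its ancestors (the 'ascending' part)."""
    out, stack = {term}, list(parents[term])
    while stack:
        t = stack.pop()
        if t not in out:
            out.add(t)
            stack.extend(parents[t])
    return out

def down_set(term):
    """The term plus all its descendants (the 'descending' part)."""
    return {t for t in parents if term in up_set(t)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def go_similarity(t1, t2, w=0.5):
    """Weighted mix of ascending- and descending-set overlap."""
    return w * jaccard(up_set(t1), up_set(t2)) + \
           (1 - w) * jaccard(down_set(t1), down_set(t2))
```

Terms on the same branch share both ancestors and descendants, so they score higher than sibling terms that share only ancestors, which is exactly the extra specificity the descending part contributes.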
An Automated Method for Identifying Artifact in Independent Component Analysis of Resting-State fMRI
Bhaganagarapu, Kaushik; Jackson, Graeme D.; Abbott, David F.
2013-01-01
An enduring issue with data-driven analysis and filtering methods is the interpretation of results. To assist, we present an automatic method for identification of artifact in independent components (ICs) derived from functional MRI (fMRI). The method was designed with the following features: does not require temporal information about an fMRI paradigm; does not require the user to train the algorithm; requires only the fMRI images (additional acquisition of anatomical imaging not required); is able to identify a high proportion of artifact-related ICs without removing components that are likely to be of neuronal origin; can be applied to resting-state fMRI; is automated, requiring minimal or no human intervention. We applied the method to a MELODIC probabilistic ICA of resting-state functional connectivity data acquired in 50 healthy control subjects, and compared the results to a blinded expert manual classification. The method identified between 26 and 72% of the components as artifact (mean 55%). About 0.3% of components identified as artifact were discordant with the manual classification; retrospective examination of these ICs suggested the automated method had correctly identified these as artifact. We have developed an effective automated method which removes a substantial number of unwanted noisy components in ICA analyses of resting-state fMRI data. Source code of our implementation of the method is available. PMID:23847511
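One spectral criterion commonly used by automated IC classifiers is the fraction of a component's time-course power above the resting-state BOLD band. The sketch below is a single, much cruder feature than the published classifier, with an invented cutoff and synthetic time courses:

```python
import numpy as np

def high_freq_fraction(timecourse, dt, cutoff_hz=0.1):
    """Fraction of an IC time course's spectral power above `cutoff_hz`.
    Resting-state BOLD fluctuations are dominated by power below ~0.1 Hz,
    so a high fraction is one simple marker of a noise component."""
    power = np.abs(np.fft.rfft(timecourse - timecourse.mean())) ** 2
    freqs = np.fft.rfftfreq(len(timecourse), d=dt)
    return float(power[freqs > cutoff_hz].sum() / power.sum())

t = np.arange(200) * 2.0                      # 200 volumes, TR = 2 s
neural_like = np.sin(2 * np.pi * 0.02 * t)    # 0.02 Hz: plausible BOLD
noise_like = np.sin(2 * np.pi * 0.20 * t)     # 0.20 Hz: likely artifact
```

A real classifier combines several such features (spatial as well as temporal), which is why it can flag artifact without discarding neuronal components.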
Long-term oscillations of sunspots and a special class of artifacts in SOHO/MDI and SDO/HMI data
NASA Astrophysics Data System (ADS)
Efremov, V. I.; Solov'ev, A. A.; Parfinenko, L. D.; Riehokainen, A.; Kirichek, E.; Smirnova, V. V.; Varun, Y. N.; Bakunina, I.; Zhivanovich, I.
2018-03-01
A specific type of artifact (termed "p2p"), which originates from the displacement of a moving object's image across the digital (pixel) matrix of the detector, is analyzed in detail. Criteria for the appearance of these artifacts, and their influence on the study of long-term oscillations of sunspots, are derived. The obtained criteria suggest methods for reducing or even eliminating these artifacts. It is shown that the use of integral parameters can be very effective against "p2p" artifact distortions. Simultaneous observations of the sunspot magnetic field and the ultraviolet intensity of the umbra yielded the same periods for the long-term oscillations, confirming once again the real physical nature of the oscillatory process, independent of the artifacts. A number of examples considered here confirm the dependence between the period of the main mode of the sunspot magnetic field's long-term oscillations and its strength, a dependence derived earlier from both observations and the theoretical model of the shallow sunspot. The anti-phase behavior of time variations in sunspot umbra area and magnetic field demonstrates that the sunspot umbra moves as a whole during long-term oscillations: all its points oscillate with the same phase.
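The mechanism behind a pixel-displacement artifact, and why an integral parameter suppresses it, can be demonstrated with a few lines of synthetic data. The drift rate, sampling, and averaging scheme below are invented for the demonstration, not the paper's criteria:

```python
import numpy as np

# A feature drifting slowly across the detector: its pixel-quantized
# position is a staircase, so the quantization error is a sawtooth whose
# period depends only on drift rate and pixel size -- a spurious
# periodicity with no physical counterpart in the object.
t = np.linspace(0.0, 10.0, 2001)
true_pos = 0.35 * t                  # slow drift, in pixel units
measured = np.round(true_pos)        # what a pixel matrix reports
artifact = measured - true_pos       # the "p2p"-like periodic error

# An integral parameter averages many pixels with different sub-pixel
# offsets; their quantization errors largely cancel.
rng = np.random.default_rng(1)
offsets = rng.uniform(0.0, 1.0, 100)
integral = np.round(true_pos[:, None] + offsets).mean(axis=1) - offsets.mean()
residual = integral - true_pos
```

The single-pixel error swings over nearly a full half-pixel, while the 100-pixel integral parameter leaves only a small residual, which is the paper's argument for integral quantities in miniature.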
Derivation and evaluation of a labeled hedonic scale.
Lim, Juyun; Wood, Alison; Green, Barry G
2009-11-01
The objective of this study was to develop a semantically labeled hedonic scale (LHS) that would yield ratio-level data on the magnitude of liking/disliking of sensation equivalent to that produced by magnitude estimation (ME). The LHS was constructed by having 49 subjects who were trained in ME rate the semantic magnitudes of 10 common hedonic descriptors within a broad context of imagined hedonic experiences that included tastes and flavors. The resulting bipolar scale is statistically symmetrical around neutral and has a unique semantic structure. The LHS was evaluated quantitatively by comparing it with ME and the 9-point hedonic scale. The LHS yielded nearly identical ratings to those obtained using ME, which implies that its semantic labels are valid and that it produces ratio-level data equivalent to ME. Analyses of variance conducted on the hedonic ratings from the LHS and the 9-point scale gave similar results, but the LHS showed much greater resistance to ceiling effects and yielded normally distributed data, whereas the 9-point scale did not. These results indicate that the LHS has significant semantic, quantitative, and statistical advantages over the 9-point hedonic scale.
Whole-Brain In-vivo Measurements of the Axonal G-Ratio in a Group of 37 Healthy Volunteers
Mohammadi, Siawoosh; Carey, Daniel; Dick, Fred; Diedrichsen, Joern; Sereno, Martin I.; Reisert, Marco; Callaghan, Martina F.; Weiskopf, Nikolaus
2015-01-01
The g-ratio, quantifying the ratio between the inner and outer diameters of a fiber, is an important microstructural characteristic of fiber pathways and is functionally related to conduction velocity. We introduce a novel method for estimating the MR g-ratio non-invasively across the whole brain using high-fidelity magnetization transfer (MT) imaging and single-shell diffusion MRI. These methods enabled us to map the MR g-ratio in vivo across the brain's prominent fiber pathways in a group of 37 healthy volunteers and to estimate the inter-subject variability. Effective correction of susceptibility-related distortion artifacts was essential before combining the MT and diffusion data, in order to reduce partial volume and edge artifacts. The MR g-ratio is in good qualitative agreement with histological findings despite the different resolution and spatial coverage of MRI and histology. The MR g-ratio holds promise as an important non-invasive biomarker due to its microstructural and functional relevance in neurodegeneration. PMID:26640427
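The quantity being mapped has a simple closed form once the myelin and axonal volume fractions are available: the aggregate MR g-ratio is usually written g = sqrt(AVF / (AVF + MVF)) (the formula commonly attributed to Stikov and colleagues). How MVF and AVF are derived from MT and diffusion data is the paper's contribution and is not reproduced here; the example values are typical white-matter ballpark figures, not the study's measurements.

```python
import numpy as np

def mr_g_ratio(mvf, avf):
    """Aggregate MR g-ratio from myelin volume fraction (MVF) and axonal
    volume fraction (AVF): g = sqrt(AVF / (AVF + MVF))."""
    mvf = np.asarray(mvf, dtype=float)
    avf = np.asarray(avf, dtype=float)
    return np.sqrt(avf / (avf + mvf))

# Illustrative white-matter values; applied voxel-wise to MVF/AVF maps
# this yields the whole-brain g-ratio map described in the abstract.
g = mr_g_ratio(mvf=0.25, avf=0.35)
```

Note the limiting behavior: no myelin (MVF = 0) gives g = 1 (a bare axon), and more myelin for the same axonal fraction pushes g down, which matches the histological range of roughly 0.6-0.8 in healthy white matter.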
Assigning clinical codes with data-driven concept representation on Dutch clinical free text.
Scheurwegs, Elyne; Luyckx, Kim; Luyten, Léon; Goethals, Bart; Daelemans, Walter
2017-05-01
Clinical codes are used for public reporting purposes, are fundamental to determining public financing for hospitals, and form the basis for reimbursement claims to insurance providers. They are assigned to a patient stay to reflect the diagnosis and performed procedures during that stay. This paper aims to enrich algorithms for automated clinical coding by taking a data-driven approach and by using unsupervised and semi-supervised techniques for the extraction of multi-word expressions that convey a generalisable medical meaning (referred to as concepts). Several methods for extracting concepts from text are compared, two of which are constructed from a large unannotated corpus of clinical free text. A distributional semantic model (in this case, the word2vec skip-gram model) is used to generalize over concepts and retrieve relations between them. These methods are validated on three sets of patient stay data, in the disease areas of urology, cardiology, and gastroenterology. The datasets are in Dutch, which introduces a limitation on available concept definitions from expert-based ontologies (e.g., UMLS). The results show that when expert-based knowledge in ontologies is unavailable, concepts derived from raw clinical texts are a reliable alternative. Both concepts derived from raw clinical texts and concepts derived from expert-created dictionaries outperform a bag-of-words approach in clinical code assignment. Adding features based on tokens that appear in a semantically similar context has a positive influence on predicting diagnostic codes. Furthermore, the experiments indicate that a distributional semantics model can find relations between semantically related concepts in texts but also introduces erroneous and redundant relations, which can undermine clinical coding performance. Copyright © 2017. Published by Elsevier Inc.
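The distributional principle the paper relies on, that concepts occurring in similar contexts end up with similar representations, can be shown with a tiny count-based model. The paper uses word2vec skip-gram on Dutch clinical text; the English tokens and raw co-occurrence counts below are a dependency-free stand-in invented for the sketch:

```python
import numpy as np

corpus = [
    ["kidney_stone", "removed", "via", "ureter"],
    ["lithotripsy", "removed", "via", "ureter"],
    ["heart_failure", "treated", "with", "diuretics"],
]
vocab = sorted({w for doc in corpus for w in doc})
index = {w: i for i, w in enumerate(vocab)}

def vector(word):
    """Co-occurrence vector of `word` over the vocabulary."""
    v = np.zeros(len(vocab))
    for doc in corpus:
        if word in doc:
            for w in doc:
                if w != word:
                    v[index[w]] += 1
    return v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

sim_related = cosine(vector("kidney_stone"), vector("lithotripsy"))
sim_unrelated = cosine(vector("kidney_stone"), vector("heart_failure"))
```

"kidney_stone" and "lithotripsy" never co-occur, yet share contexts and so come out similar; this is exactly the generalization that helps code assignment, and also the mechanism that can surface the redundant or erroneous relations the authors warn about.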
Vecchi, Eva M; Marelli, Marco; Zamparelli, Roberto; Baroni, Marco
2017-01-01
Sophisticated senator and legislative onion. Whether or not you have ever heard of these things, we all have some intuition that one of them makes much less sense than the other. In this paper, we introduce a large dataset of human judgments about novel adjective-noun phrases. We use these data to test an approach to semantic deviance based on phrase representations derived with compositional distributional semantic methods, that is, methods that derive word meanings from contextual information, and approximate phrase meanings by combining word meanings. We present several simple measures extracted from distributional representations of words and phrases, and we show that they have a significant impact on predicting the acceptability of novel adjective-noun phrases even when a number of alternative measures classically employed in studies of compound processing and bigram plausibility are taken into account. Our results show that the extent to which an attributive adjective alters the distributional representation of the noun is the most significant factor in modeling the distinction between acceptable and deviant phrases. Our study extends current applications of compositional distributional semantic methods to linguistically and cognitively interesting problems, and it offers a new, quantitatively precise approach to the challenge of predicting when humans will find novel linguistic expressions acceptable and when they will not. Copyright © 2016 Cognitive Science Society, Inc.
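The paper's strongest predictor, how much an attributive adjective alters the noun's distributional representation, can be expressed as one minus the cosine between the noun vector and the composed phrase vector. The three-dimensional vectors and the additive composition function below are invented simplifications for illustration, not the models or data used in the study:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy distributional vectors (invented numbers).
noun_onion = np.array([0.1, 0.9, 0.2])       # food-ish contexts
adj_red = np.array([0.2, 0.7, 0.3])          # compatible with food
adj_legislative = np.array([0.9, 0.0, 0.8])  # political contexts

def compose(adj, noun):
    """Additive composition -- one simple compositional distributional model."""
    v = adj + noun
    return v / np.linalg.norm(v)

# The key measure: how far does the adjective drag the noun's vector?
shift_plausible = 1 - cosine(noun_onion, compose(adj_red, noun_onion))
shift_deviant = 1 - cosine(noun_onion, compose(adj_legislative, noun_onion))
```

A semantically deviant modifier ("legislative onion") displaces the noun representation far more than a plausible one ("red onion"), which is the intuition behind using this shift to predict acceptability judgments.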
Primativo, Silvia; Reilly, Jamie; Crutch, Sebastian J
2016-01-01
The Abstract Conceptual Feature (ACF) framework predicts that word meaning is represented within a high-dimensional semantic space bounded by weighted contributions of perceptual, affective, and encyclopedic information. The ACF, like latent semantic analysis, is amenable to distance metrics between any two words. We applied predictions of the ACF framework to abstract words using eye tracking via an adaptation of the classical 'visual world paradigm'. Healthy adults (N=20) selected the lexical item most related to a probe word in a 4-item written word array comprising the target and three distractors. The relation between the probe and each of the four words was determined using the semantic distance metrics derived from ACF ratings. Eye-movement data indicated that the word that was most semantically related to the probe received more and longer fixations relative to distractors. Importantly, in sets where participants did not provide an overt behavioral response, the fixation rates were nonetheless significantly higher for targets than distractors, closely resembling trials where an expected response was given. Furthermore, ACF ratings, which are based on individual words, predicted eye fixation metrics of probe-target similarity at least as well as latent semantic analysis ratings, which are based on word co-occurrence. The results provide further validation of Euclidean distance metrics derived from ACF ratings as a measure of one facet of the semantic relatedness of abstract words and suggest that they represent a reasonable approximation of the organization of abstract conceptual space. The data are also compatible with the broad notion that multiple sources of information (not restricted to sensorimotor and emotion information) shape the organization of abstract concepts.
Whilst the adapted ‘visual word paradigm’ is potentially a more metacognitive task than the classical visual world paradigm, we argue that it offers potential utility for studying abstract word comprehension. PMID:26901571
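The semantic distance metric underlying the task above can be sketched with toy numbers: each word is a vector of ACF ratings (the dimensions and values below are invented for illustration; the real framework rates many more dimensions), and relatedness is the Euclidean distance between vectors.

```python
import numpy as np

# Hypothetical ACF-style ratings on three dimensions per word
# (e.g., valence, sensation, thought); values are invented.
acf = {
    "justice": np.array([4.0, 1.5, 6.0]),
    "truth":   np.array([4.2, 1.3, 5.8]),
    "odour":   np.array([2.0, 6.5, 2.0]),
    "chance":  np.array([3.0, 1.0, 4.0]),
}

def semantic_distance(w1, w2):
    """Euclidean distance between two words' ACF rating vectors."""
    return float(np.linalg.norm(acf[w1] - acf[w2]))

def closest_to(probe, candidates):
    """Pick the candidate nearest to the probe, mirroring how the target
    was defined relative to the distractors in the word array."""
    return min(candidates, key=lambda w: semantic_distance(probe, w))
```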
Chen, Jianhuai; Yao, Zhijian; Qin, Jiaolong; Yan, Rui; Hua, Lingling; Lu, Qing
2015-06-25
The human brain is a complex network of regions that are structurally interconnected by white matter (WM) tracts. Schizophrenia (SZ) can be conceptualized as a disconnection syndrome characterized by widespread disconnections in WM pathways. Our aim was to assess whether anatomical disconnections are associated with disruption of the topological properties of inter- and intra-hemispheric networks in SZ. We acquired diffusion tensor imaging data from 24 male patients with paranoid SZ during an acute phase of their illness and from 24 healthy age-matched male controls. Brain FA-weighted (fractional anisotropy-weighted) structural networks were constructed, and inter- and intra-hemispheric integration was assessed by estimating the average characteristic path lengths (CPLs) between and within the left and right hemisphere networks. All 18 inter- and intra-hemispheric mean CPLs assessed were longer in the SZ patient group than in the control group, but only some of the differences were statistically significant: the CPLs for the overall inter-hemispheric and the left and right intra-hemispheric networks; the CPLs for the inter-hemispheric subnetworks of the frontal lobes, temporal lobes, and subcortical structures; and the CPL for the intra-frontal subnetwork in the right hemisphere. Among the 24 patients, the CPL of the inter-frontal subnetwork was positively associated with negative symptom severity, but this was the only significant result among 72 assessed correlations, so it may be a statistical artifact. Our findings suggest that the integrity of intra- and inter-hemispheric WM tracts is disrupted in males with paranoid SZ, supporting the brain network disconnection model (i.e., the 'disconnectivity hypothesis') of schizophrenia. Larger studies with less narrowly defined samples of individuals with schizophrenia are needed to confirm these results.
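The characteristic path length used above is the average shortest-path length over node pairs. A minimal sketch on a toy three-node network follows; note that in FA-weighted networks edge length is commonly taken as the inverse of the connection weight, an assumption not stated in the abstract.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path lengths from src in a weighted graph
    given as {node: {neighbor: length}}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def characteristic_path_length(graph):
    """Average shortest-path length over all ordered node pairs."""
    nodes = list(graph)
    total, pairs = 0.0, 0
    for s in nodes:
        dist = dijkstra(graph, s)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs

# Toy symmetric network; in practice edge length could be 1/FA-weight.
toy = {
    "A": {"B": 1.0, "C": 2.0},
    "B": {"A": 1.0, "C": 1.0},
    "C": {"A": 2.0, "B": 1.0},
}
```

Longer CPLs, as found in the patient group, indicate reduced network integration.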
McArdle, J J; Mari, Z; Pursley, R H; Schulz, G M; Braun, A R
2009-02-01
We investigated whether the Bereitschaftspotential (BP), an event-related potential believed to reflect motor planning, would be modulated by language-related parameters prior to speech. We anticipated that articulatory complexity would produce effects on the BP distribution similar to those demonstrated for complex limb movements. We also hypothesized that lexical semantic operations would independently impact the BP. Eighteen participants performed 3 speech tasks designed to differentiate lexical semantic and articulatory contributions to the BP. EEG epochs were time-locked to the earliest source of speech movement per trial. Lip movements were assessed using EMG recordings. Doppler imaging was used to determine the onset of tongue movement during speech, providing a means of identifying and eliminating potential artifacts. Compared to simple repetition, complex articulations produced an anterior shift in the maximum midline BP. Tasks requiring lexical search and selection augmented these effects and independently elicited a left-lateralized asymmetry in the frontal distribution. The findings indicate that the BP is significantly modulated by linguistic processing, suggesting that the premotor system might play a role in lexical access. These novel findings support the notion that the motor systems may play a significant role in the formulation of language.
Semantically-enabled Knowledge Discovery in the Deep Carbon Observatory
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.; Ma, X.; Erickson, J. S.; West, P.; Fox, P. A.
2013-12-01
The Deep Carbon Observatory (DCO) is a decadal effort aimed at transforming scientific and public understanding of carbon in the complex deep earth system from the perspectives of Deep Energy, Deep Life, Extreme Physics and Chemistry, and Reservoirs and Fluxes. Over the course of the decade, DCO scientific activities will generate a massive volume of data across a variety of disciplines, presenting significant challenges in terms of data integration, management, analysis and visualization, and ultimately limiting the ability of scientists across disciplines to gain insights and unlock new knowledge. The DCO Data Science Team (DCO-DS) is applying Semantic Web methodologies to construct a knowledge representation focused on the DCO Earth science disciplines, and is using it together with other technologies (e.g. natural language processing and data mining) to create a more expressive representation of the distributed corpus of DCO artifacts, including datasets, metadata, instruments, sensors, platforms, deployments, researchers, organizations, funding agencies, grants and various awards. The embodiment of this knowledge representation is the DCO Data Science Infrastructure, in which unique entities within the DCO domain and the relations between them are recognized and explicitly identified. The DCO-DS Infrastructure will serve as a platform for more efficient and reliable searching, discovery, access, and publication of information and knowledge for the DCO scientific community and beyond.
Dugas, Martin; Meidt, Alexandra; Neuhaus, Philipp; Storck, Michael; Varghese, Julian
2016-06-01
The volume and complexity of patient data - especially in personalised medicine - is steadily increasing, both regarding clinical data and genomic profiles: typically more than 1,000 items (e.g., laboratory values, vital signs, diagnostic tests etc.) are collected per patient in clinical trials. In oncology, hundreds of mutations can potentially be detected for each patient by genomic profiling. Data integration from multiple sources therefore constitutes a key challenge for medical research and healthcare. Semantic annotation of data elements can facilitate the identification of matching data elements in different sources and thereby supports data integration. Millions of different annotations are required due to the semantic richness of patient data. These annotations should be uniform, i.e., two matching data elements should contain the same annotations. However, large terminologies like SNOMED CT or UMLS do not provide uniform coding. It is proposed to develop semantic annotations of medical data elements based on a large-scale public metadata repository. To achieve uniform codes, semantic annotations shall be re-used if a matching data element is available in the metadata repository. A web-based tool called ODMedit ( https://odmeditor.uni-muenster.de/ ) was developed to create data models with uniform semantic annotations. It contains ~800,000 terms with semantic annotations, derived from ~5,800 models in the portal of medical data models (MDM). The tool was successfully applied to manually annotate 22 forms with 292 data items from CDISC and to update 1,495 data models of the MDM portal. Uniform manual semantic annotation of data models is feasible in principle, but requires a large-scale collaborative effort due to the semantic richness of patient data. A web-based tool for these annotations is available and is linked to a public metadata repository.
Sihong Chen; Jing Qin; Xing Ji; Baiying Lei; Tianfu Wang; Dong Ni; Jie-Zhi Cheng
2017-03-01
The gap between computational and semantic features is one of the major factors that keeps computer-aided diagnosis (CAD) performance from clinical usage. To bridge this gap, we exploit three multi-task learning (MTL) schemes to leverage heterogeneous computational features derived from deep learning models, a stacked denoising autoencoder (SDAE) and a convolutional neural network (CNN), as well as hand-crafted Haar-like and HoG features, for the description of 9 semantic features of lung nodules in CT images. We posit that there may exist relations among semantic features such as "spiculation", "texture", and "margin" that can be explored with MTL. The Lung Image Database Consortium (LIDC) data is adopted in this study for its rich annotation resources. The LIDC nodules were quantitatively scored w.r.t. 9 semantic features by 12 radiologists from several institutes in the U.S. By treating each semantic feature as an individual task, the MTL schemes select and map the heterogeneous computational features toward the radiologists' ratings, with cross-validation evaluation schemes on 2400 randomly selected nodules from the LIDC dataset. The experimental results suggest that the predicted semantic scores from the three MTL schemes are closer to the radiologists' ratings than the scores from single-task LASSO and elastic net regression methods. The proposed semantic attribute scoring scheme may provide richer quantitative assessments of nodules for better support of diagnostic decisions and management. Meanwhile, the capability of automatically associating medical image contents with clinical semantic terms may also assist the development of medical search engines.
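As a rough illustration of mapping computational features to several semantic rating tasks at once, the sketch below fits a shared closed-form ridge regression to synthetic data. This is a deliberately simplified stand-in; the paper's actual MTL schemes couple the tasks through learned feature selection, not plain ridge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 50 nodules, 8 computational features, 3 semantic rating "tasks"
# (the paper predicts 9 semantic features from deep + hand-crafted features).
X = rng.normal(size=(50, 8))
true_W = rng.normal(size=(8, 3))
Y = X @ true_W + 0.01 * rng.normal(size=(50, 3))

def ridge_multi_output(X, Y, lam=1.0):
    """Closed-form ridge regression shared across all tasks:
    W = (X^T X + lam I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

W = ridge_multi_output(X, Y)
predictions = X @ W   # one column of predicted scores per semantic task
```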
Tao, Ran; Fletcher, P Thomas; Gerber, Samuel; Whitaker, Ross T
2009-01-01
This paper presents a method for correcting the geometric and greyscale distortions in diffusion-weighted MRI that result from inhomogeneities in the static magnetic field. These inhomogeneities may be due to imperfections in the magnet or to spatial variations in the magnetic susceptibility of the object being imaged, so-called susceptibility artifacts. Echo-planar imaging (EPI), used in virtually all diffusion-weighted acquisition protocols, assumes a homogeneous static field, which generally does not hold for head MRI. The resulting distortions are significant, sometimes more than ten millimeters. These artifacts impede accurate alignment of diffusion images with structural MRI, and are generally considered an obstacle to the joint analysis of connectivity and structure in head MRI. In principle, susceptibility artifacts can be corrected by acquiring (and applying) a field map. However, as shown in the literature and demonstrated in this paper, field map corrections of susceptibility artifacts are not entirely accurate and reliable, and thus field maps do not produce reliable alignment of EPIs with corresponding structural images. This paper presents a new, image-based method for correcting susceptibility artifacts. The method relies on a variational formulation of the match between an EPI baseline image and a corresponding T2-weighted structural image, but also specifically accounts for the physics of susceptibility artifacts. We derive a set of partial differential equations associated with the optimization, describe the numerical methods for solving these equations, and present results that demonstrate the effectiveness of the proposed method compared with field-map correction.
NASA Technical Reports Server (NTRS)
Colarco, P. R.; Kahn, R. A.; Remer, L. A.; Levy, R. C.
2014-01-01
We use the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite aerosol optical thickness (AOT) product to assess the impact of reduced swath width on global and regional AOT statistics and trends. Along-track and across-track sampling strategies are employed, in which the full MODIS data set is sub-sampled with various narrow-swath (approximately 400-800 km) and single pixel width (approximately 10 km) configurations. Although view-angle artifacts in the MODIS AOT retrieval confound direct comparisons between averages derived from different sub-samples, careful analysis shows that with many portions of the Earth essentially unobserved, spatial sampling introduces uncertainty in the derived seasonal-regional mean AOT. These AOT spatial sampling artifacts comprise up to 60% of the full-swath AOT value under moderate aerosol loading, and can be as large as 0.1 in some regions under high aerosol loading. Compared to full-swath observations, narrower swath and single pixel width sampling exhibits a reduced ability to detect AOT trends with statistical significance. On the other hand, estimates of the global, annual mean AOT do not vary significantly from the full-swath values as spatial sampling is reduced. Aggregation of the MODIS data at coarse grid scales (10 deg) shows consistency in the aerosol trends across sampling strategies, with increased statistical confidence, but quantitative errors in the derived trends are found even for the full-swath data when compared to high spatial resolution (0.5 deg) aggregations. Using results of a model-derived aerosol reanalysis, we find consistency in our conclusions about a seasonal-regional spatial sampling artifact in AOT. Furthermore, the model shows that reduced spatial sampling can introduce uncertainty in computed shortwave top-of-atmosphere aerosol radiative forcing of 2-3 W m^-2. These artifacts are lower bounds, as other unconsidered sampling strategies would possibly perform less well.
These results suggest that future aerosol satellite missions having significantly less than full-swath viewing are unlikely to sample the true AOT distribution well enough to obtain the statistics needed to reduce uncertainty in aerosol direct forcing of climate.
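The sub-sampling comparison can be illustrated with a synthetic field: a narrow swath that misses a regional plume underestimates the mean AOT, whereas a spatially uniform background would show almost no sampling error. The field below is invented for illustration, not MODIS data.

```python
import numpy as np

lon = np.arange(360)
# Synthetic AOT field: low background plus a dust plume near 180 deg lon.
aot_profile = 0.10 + 0.40 * np.exp(-((lon - 180.0) / 20.0) ** 2)
aot = np.tile(aot_profile, (100, 1))        # 100 latitude rows x 360 lon cols

full_swath_mean = aot.mean()

# A fixed ~50-column "narrow swath" that happens to miss the plume entirely:
narrow_mean = aot[:, 0:50].mean()

sampling_artifact = full_swath_mean - narrow_mean
```

Here the narrow swath recovers only the background value, so the regional mean is biased low, which mirrors the seasonal-regional sampling artifacts described above.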
Using archetypes for defining CDA templates.
Moner, David; Moreno, Alberto; Maldonado, José A; Robles, Montserrat; Parra, Carlos
2012-01-01
While HL7 CDA is a widely adopted standard for the documentation of clinical information, the archetype approach proposed by CEN/ISO 13606 and openEHR is gaining recognition as a means of describing domain models and medical knowledge. This paper describes our efforts in combining both standards. Using archetypes as an alternative way of defining CDA templates opens up new possibilities, all based on the formal nature of archetypes and their ability to merge into the same artifact medical knowledge and the technical requirements for semantic interoperability of electronic health records. We describe the process followed for the normalization of existing legacy data in a hospital environment, from the importation of the HL7 CDA model into an archetype editor, through the definition of CDA archetypes, to the application of those archetypes to obtain normalized CDA data instances.
A concept ideation framework for medical device design.
Hagedorn, Thomas J; Grosse, Ian R; Krishnamurty, Sundar
2015-06-01
Medical device design is a challenging process, often requiring collaboration between medical and engineering domain experts. This collaboration can be best institutionalized through systematic knowledge transfer between the two domains coupled with effective knowledge management throughout the design innovation process. Toward this goal, we present the development of a semantic framework for medical device design that unifies a large medical ontology with detailed engineering functional models along with the repository of design innovation information contained in the US Patent Database. As part of our development, existing medical, engineering, and patent document ontologies were modified and interlinked to create a comprehensive medical device innovation and design tool with appropriate properties and semantic relations to facilitate knowledge capture, enrich existing knowledge, and enable effective knowledge reuse for different scenarios. The result is a Concept Ideation Framework for Medical Device Design (CIFMeDD). Key features of the resulting framework include function-based searching and automated inter-domain reasoning to uniquely enable identification of functionally similar procedures, tools, and inventions from multiple domains based on simple semantic searches. The significance and usefulness of the resulting framework for aiding in conceptual design and innovation in the medical realm are explored via two case studies examining medical device design problems. Copyright © 2015 Elsevier Inc. All rights reserved.
[Medical Image Registration Method Based on a Semantic Model with Directional Visual Words].
Jin, Yufei; Ma, Meng; Yang, Xin
2016-04-01
Medical image registration is very challenging due to the various imaging modalities, image quality, wide inter-patient variability, and intra-patient variability as disease progresses, together with strict requirements for robustness. Inspired by semantic models, and especially the recent tremendous progress in computer vision tasks under the bag-of-visual-words framework, we set up a novel semantic model to match medical images. Since most medical images have poor contrast, a small dynamic range, and involve only intensities, traditional visual word models do not perform very well on them. To benefit from the advantages of related work, we propose a novel visual word model named directional visual words, which performs better on medical images. We then apply this model to medical image registration. In our experiment, the critical anatomical structures were first manually specified by experts. We then adopted the directional visual words, a coarse-to-fine spatial pyramid search strategy, and the k-means algorithm to locate the positions of the key structures accurately. Subsequently, we register the corresponding images by the areas around these positions. The results of experiments performed on real cardiac images show that our method can achieve high registration accuracy in specific areas.
Jumping across biomedical contexts using compressive data fusion
Zitnik, Marinka; Zupan, Blaz
2016-01-01
Motivation: The rapid growth of diverse biological data allows us to consider interactions between a variety of objects, such as genes, chemicals, molecular signatures, diseases, pathways and environmental exposures. Often, any pair of objects—such as a gene and a disease—can be related in different ways, for example, directly via gene–disease associations or indirectly via functional annotations, chemicals and pathways. Different ways of relating these objects carry different semantic meanings. However, traditional methods disregard these semantics and thus cannot fully exploit their value in data modeling. Results: We present Medusa, an approach to detect size-k modules of objects that, taken together, appear most significant to another set of objects. Medusa operates on large-scale collections of heterogeneous datasets and explicitly distinguishes between diverse data semantics. It advances research along two dimensions: it builds on collective matrix factorization to derive different semantics, and it formulates the growing of the modules as a submodular optimization program. Medusa is flexible in choosing or combining semantic meanings and provides theoretical guarantees about detection quality. In a systematic study on 310 complex diseases, we show the effectiveness of Medusa in associating genes with diseases and detecting disease modules. We demonstrate that in predicting gene–disease associations Medusa compares favorably to methods that ignore diverse semantic meanings. We find that the utility of different semantics depends on disease categories and that, overall, Medusa recovers disease modules more accurately when combining different semantics. Availability and implementation: Source code is at http://github.com/marinkaz/medusa Contact: marinka@cs.stanford.edu, blaz.zupan@fri.uni-lj.si PMID:27307649
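The module-growing step described above can be sketched as greedy maximization of a monotone submodular set function, for which the greedy solution is provably within (1 - 1/e) of optimal. The coverage objective below is a toy stand-in for Medusa's actual score derived from collective matrix factorization.

```python
# Greedy selection for a monotone submodular objective.
def greedy_module(candidates, score, k):
    """Grow a size-k module by repeatedly adding the element with the
    largest marginal gain in `score` (a set function)."""
    module = set()
    for _ in range(k):
        best = max((g for g in candidates if g not in module),
                   key=lambda g: score(module | {g}) - score(module))
        module.add(best)
    return module

# Toy coverage objective: how many disease associations the module hits.
associations = {
    "g1": {"d1", "d2"},
    "g2": {"d2"},
    "g3": {"d3"},
    "g4": {"d1", "d2", "d3"},
}

def coverage(module):
    covered = set()
    for g in module:
        covered |= associations[g]
    return len(covered)

module = greedy_module(list(associations), coverage, k=2)
```

Coverage functions like this one are submodular (adding a gene to a larger module never helps more than adding it to a smaller one), which is what licenses the greedy guarantee.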
Hoffman, Paul; Lambon Ralph, Matthew A; Rogers, Timothy T
2013-09-01
Semantic ambiguity is typically measured by summing the number of senses or dictionary definitions that a word has. Such measures are somewhat subjective and may not adequately capture the full extent of variation in word meaning, particularly for polysemous words that can be used in many different ways, with subtle shifts in meaning. Here, we describe an alternative, computationally derived measure of ambiguity based on the proposal that the meanings of words vary continuously as a function of their contexts. On this view, words that appear in a wide range of contexts on diverse topics are more variable in meaning than those that appear in a restricted set of similar contexts. To quantify this variation, we performed latent semantic analysis on a large text corpus to estimate the semantic similarities of different linguistic contexts. From these estimates, we calculated the degree to which the different contexts associated with a given word vary in their meanings. We term this quantity a word's semantic diversity (SemD). We suggest that this approach provides an objective way of quantifying the subtle, context-dependent variations in word meaning that are often present in language. We demonstrate that SemD is correlated with other measures of ambiguity and contextual variability, as well as with frequency and imageability. We also show that SemD is a strong predictor of performance in semantic judgments in healthy individuals and in patients with semantic deficits, accounting for unique variance beyond that of other predictors. SemD values for over 30,000 English words are provided as supplementary materials.
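A simplified version of the SemD computation: treat each context in which a word occurs as a vector (in the paper these come from latent semantic analysis of a large corpus) and take the negative log of the mean pairwise cosine similarity among those contexts. The context vectors below are invented for illustration.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def semantic_diversity(context_vectors):
    """SemD-style measure: negative log of the mean pairwise cosine
    similarity among the contexts a word occurs in. Diverse contexts
    -> low mean similarity -> high SemD."""
    vs = [unit(np.asarray(v, dtype=float)) for v in context_vectors]
    sims = [float(np.dot(vs[i], vs[j]))
            for i in range(len(vs)) for j in range(i + 1, len(vs))]
    return float(-np.log(np.mean(sims)))

# An ambiguous word appears in dissimilar contexts...
diverse = [[1.0, 0.1, 0.0], [0.0, 1.0, 0.1], [0.1, 0.0, 1.0]]
# ...while a word with a narrow meaning occurs in near-identical contexts.
narrow = [[1.0, 0.2, 0.1], [0.9, 0.25, 0.1], [1.0, 0.15, 0.12]]
```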
Context-rich semantic framework for effective data-to-decisions in coalition networks
NASA Astrophysics Data System (ADS)
Grueneberg, Keith; de Mel, Geeth; Braines, Dave; Wang, Xiping; Calo, Seraphin; Pham, Tien
2013-05-01
In a coalition context, data fusion involves combining of soft (e.g., field reports, intelligence reports) and hard (e.g., acoustic, imagery) sensory data such that the resulting output is better than what it would have been if the data are taken individually. However, due to the lack of explicit semantics attached with such data, it is difficult to automatically disseminate and put the right contextual data in the hands of the decision makers. In order to understand the data, explicit meaning needs to be added by means of categorizing and/or classifying the data in relationship to each other from base reference sources. In this paper, we present a semantic framework that provides automated mechanisms to expose real-time raw data effectively by presenting appropriate information needed for a given situation so that an informed decision could be made effectively. The system utilizes controlled natural language capabilities provided by the ITA (International Technology Alliance) Controlled English (CE) toolkit to provide a human-friendly semantic representation of messages so that the messages can be directly processed in human/machine hybrid environments. The Real-time Semantic Enrichment (RTSE) service adds relevant contextual information to raw data streams from domain knowledge bases using declarative rules. The rules define how the added semantics and context information are derived and stored in a semantic knowledge base. The software framework exposes contextual information from a variety of hard and soft data sources in a fast, reliable manner so that an informed decision can be made using semantic queries in intelligent software systems.
CATEGORY-SPECIFIC SEMANTIC MEMORY: CONVERGING EVIDENCE FROM BOLD fMRI AND ALZHEIMER’S DISEASE
Grossman, Murray; Peelle, Jonathan E.; Smith, Edward E.; McMillan, Corey T.; Cook, Philip; Powers, John; Dreyfuss, Michael; Bonner, Michael F.; Richmond, Lauren; Boller, Ashley; Camp, Emily; Burkholder, Lisa
2012-01-01
Patients with Alzheimer’s disease have category-specific semantic memory difficulty for natural relative to manufactured objects. We assessed the basis for this deficit by asking healthy adults and patients to judge whether pairs of words share a feature (e.g. “banana:lemon – COLOR”). In an fMRI study, healthy adults showed gray matter (GM) activation of temporal-occipital cortex (TOC) where visual-perceptual features may be represented, and prefrontal cortex (PFC) which may contribute to feature selection. Tractography revealed dorsal and ventral stream white matter (WM) projections between PFC and TOC. Patients had greater difficulty with natural than manufactured objects. This was associated with greater overlap between diseased GM areas correlated with natural kinds in patients and fMRI activation in healthy adults for natural than manufactured artifacts, and the dorsal WM projection between PFC and TOC in patients correlated only with judgments of natural kinds. Patients thus remained dependent on the same neural network as controls during judgments of natural kinds, despite disease in these areas. For manufactured objects, patients’ judgments showed limited correlations with PFC and TOC GM areas activated by controls, and did not correlate with the PFC-TOC dorsal WM tract. Regions outside of the PFC–TOC network thus may help support patients’ judgments of manufactured objects. We conclude that a large-scale neural network for semantic memory implicates both feature knowledge representations in modality-specific association cortex and heteromodal regions important for accessing this knowledge, and that patients’ relative deficit for natural kinds is due in part to their dependence on this network despite disease in these areas. PMID:23220494
A multilingual gold-standard corpus for biomedical concept recognition: the Mantra GSC.
Kors, Jan A; Clematide, Simon; Akhondi, Saber A; van Mulligen, Erik M; Rebholz-Schuhmann, Dietrich
2015-09-01
To create a multilingual gold-standard corpus for biomedical concept recognition. We selected text units from different parallel corpora (Medline abstract titles, drug labels, biomedical patent claims) in English, French, German, Spanish, and Dutch. Three annotators per language independently annotated the biomedical concepts, based on a subset of the Unified Medical Language System and covering a wide range of semantic groups. To reduce the annotation workload, automatically generated preannotations were provided. Individual annotations were automatically harmonized and then adjudicated, and cross-language consistency checks were carried out to arrive at the final annotations. The number of final annotations was 5530. Inter-annotator agreement scores indicate good agreement (median F-score 0.79), and are similar to those between individual annotators and the gold standard. The automatically generated harmonized annotation set for each language performed equally well as the best annotator for that language. The use of automatic preannotations, harmonized annotations, and parallel corpora helped to keep the manual annotation efforts manageable. The inter-annotator agreement scores provide a reference standard for gauging the performance of automatic annotation techniques. To our knowledge, this is the first gold-standard corpus for biomedical concept recognition in languages other than English. Other distinguishing features are the wide variety of semantic groups that are being covered, and the diversity of text genres that were annotated. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
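The inter-annotator F-score reported above can be computed as in the sketch below; the span/CUI annotations are hypothetical examples, not Mantra GSC data.

```python
def f_score(annotations_a, annotations_b):
    """F1 between two annotation sets, treating the second as reference;
    an annotation here is a (start, end, concept-id) tuple."""
    tp = len(annotations_a & annotations_b)
    if tp == 0:
        return 0.0
    precision = tp / len(annotations_a)
    recall = tp / len(annotations_b)
    return 2 * precision * recall / (precision + recall)

# Hypothetical annotations on one sentence: (start, end, UMLS CUI).
gold = {(0, 7, "C0004057"), (12, 20, "C0000970"), (25, 31, "C0013227")}
ann  = {(0, 7, "C0004057"), (12, 20, "C0000970"), (25, 30, "C0013227")}
```

With exact-match scoring, the annotator's third span (which differs by one character offset) counts as both a false positive and a false negative, giving F1 = 2/3 in this example.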
Salerno, Michael; Taylor, Angela; Yang, Yang; Kuruvilla, Sujith; Ragosta, Michael; Meyer, Craig H; Kramer, Christopher M
2014-07-01
Adenosine stress cardiovascular magnetic resonance perfusion imaging can be limited by motion-induced dark-rim artifacts, which may be mistaken for true perfusion abnormalities. A high-resolution variable-density spiral pulse sequence with a novel density compensation strategy has been shown to reduce dark-rim artifacts in first-pass perfusion imaging. We aimed to assess the clinical performance of adenosine stress cardiovascular magnetic resonance using this new perfusion sequence to detect obstructive coronary artery disease. Cardiovascular magnetic resonance perfusion imaging was performed during adenosine stress (140 μg/kg per minute) and at rest on a Siemens 1.5-T Avanto scanner in 41 subjects with chest pain scheduled for coronary angiography. Perfusion images were acquired during injection of 0.1 mmol/kg Gadolinium-diethylenetriaminepentacetate at 3 short-axis locations using a saturation recovery interleaved variable-density spiral pulse sequence. Significant stenosis was defined as >50% by quantitative coronary angiography. Two blinded reviewers evaluated the perfusion images for the presence of adenosine-induced perfusion abnormalities and assessed image quality using a 5-point scale (1 [poor] to 5 [excellent]). The prevalence of obstructive coronary artery disease by quantitative coronary angiography was 68%. The average sensitivity, specificity, and accuracy were 89%, 85%, and 88%, respectively, with a positive predictive value and negative predictive value of 93% and 79%, respectively. The average image quality score was 4.4±0.7, with only 1 study with more than mild dark-rim artifacts. There was good inter-reader reliability with a κ statistic of 0.67. Spiral adenosine stress cardiovascular magnetic resonance results in high diagnostic accuracy for the detection of obstructive coronary artery disease with excellent image quality and minimal dark-rim artifacts. © 2014 American Heart Association, Inc.
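The reported diagnostic statistics follow from a standard 2x2 confusion matrix. With the study's 41 subjects and 68% prevalence, the counts tp=25, fp=2, fn=3, tn=11 reproduce the reported figures almost exactly; these counts are a plausible reconstruction for illustration, not taken from the paper.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic test metrics from confusion-matrix counts."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)          # sensitivity (true positive rate)
    spec = tn / (tn + fp)          # specificity (true negative rate)
    acc = (tp + tn) / n            # overall accuracy
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sens, spec, acc, ppv, npv

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    pe = (sum(a) / n) * (sum(b) / n) + (1 - sum(a) / n) * (1 - sum(b) / n)
    return (po - pe) / (1 - pe)
```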
Diffusion imaging quality control via entropy of principal direction distribution.
Farzinfar, Mahshid; Oguz, Ipek; Smith, Rachel G; Verde, Audrey R; Dietrich, Cheryl; Gupta, Aditya; Escolar, Maria L; Piven, Joseph; Pujol, Sonia; Vachet, Clement; Gouttard, Sylvain; Gerig, Guido; Dager, Stephen; McKinstry, Robert C; Paterson, Sarah; Evans, Alan C; Styner, Martin A
2013-11-15
Diffusion MR imaging has received increasing attention in the neuroimaging community, as it yields new insights into the microstructural organization of white matter that are not available with conventional MRI techniques. While the technology has enormous potential, diffusion MRI suffers from a unique and complex set of image quality problems, limiting the sensitivity of studies and reducing the accuracy of findings. Furthermore, the acquisition time for diffusion MRI is longer than conventional MRI due to the need for multiple acquisitions to obtain directionally encoded Diffusion Weighted Images (DWI). This leads to increased motion artifacts, reduced signal-to-noise ratio (SNR), and increased proneness to a wide variety of artifacts, including eddy-current and motion artifacts, "venetian blind" artifacts, as well as slice-wise and gradient-wise inconsistencies. Such artifacts mandate stringent Quality Control (QC) schemes in the processing of diffusion MRI data. Most existing QC procedures are conducted in the DWI domain and/or on a voxel level, but our own experiments show that these methods often do not fully detect and eliminate certain types of artifacts, often only visible when investigating groups of DWIs or a derived diffusion model, such as the most-employed diffusion tensor imaging (DTI). Here, we propose a novel regional QC measure in the DTI domain that employs the entropy of the regional distribution of the principal directions (PD). The PD entropy quantifies the scattering and spread of the principal diffusion directions and is invariant to the patient's position in the scanner. A high entropy value indicates that the PDs are distributed relatively uniformly, while a low entropy value indicates the presence of clusters in the PD distribution. The novel QC measure is intended to complement the existing set of QC procedures by detecting and correcting residual artifacts.
Such residual artifacts cause directional bias in the measured PD and are here called dominant direction artifacts. Experiments show that our automatic method can reliably detect and potentially correct such artifacts, especially the ones caused by the vibrations of the scanner table during the scan. The results further indicate the usefulness of this method for general quality assessment in DTI studies. Copyright © 2013 Elsevier Inc. All rights reserved.
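The entropy measure described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: principal directions in a region are folded into one hemisphere, binned by spherical angle, and the Shannon entropy of the bin occupancy is computed. The function name `pd_entropy` and the simple angle-based binning are assumptions for illustration (a proper sphere tessellation would be more uniform).

```python
import numpy as np

def pd_entropy(directions, n_bins=16):
    """Shannon entropy (bits) of a regional principal-direction (PD) distribution.

    directions : (N, 3) array of direction vectors. PDs are antipodally
    symmetric, so each vector is folded into the upper hemisphere first.
    """
    d = np.asarray(directions, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    d[d[:, 2] < 0] *= -1                             # fold antipodal pairs together
    theta = np.arccos(np.clip(d[:, 2], -1, 1))       # polar angle in [0, pi/2]
    phi = np.arctan2(d[:, 1], d[:, 0])               # azimuth in [-pi, pi]
    hist, _, _ = np.histogram2d(
        theta, phi, bins=n_bins,
        range=[[0, np.pi / 2], [-np.pi, np.pi]])     # fixed bins over the hemisphere
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Uniformly scattered PDs -> high entropy; one dominant direction -> low entropy
rng = np.random.default_rng(0)
scattered = rng.normal(size=(5000, 3))
clustered = np.tile([0.0, 0.0, 1.0], (5000, 1)) + 0.01 * rng.normal(size=(5000, 3))
```

A uniform scatter of directions occupies many bins and yields high entropy, while a vibration-induced dominant direction collapses the histogram into few bins and lowers it, which is the signature the QC measure looks for.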
Diffusion imaging quality control via entropy of principal direction distribution
Oguz, Ipek; Smith, Rachel G.; Verde, Audrey R.; Dietrich, Cheryl; Gupta, Aditya; Escolar, Maria L.; Piven, Joseph; Pujol, Sonia; Vachet, Clement; Gouttard, Sylvain; Gerig, Guido; Dager, Stephen; McKinstry, Robert C.; Paterson, Sarah; Evans, Alan C.; Styner, Martin A.
2013-01-01
PMID:23684874
Acquiring an understanding of design: evidence from children's insight problem solving.
Defeyter, Margaret Anne; German, Tim P
2003-09-01
The human ability to make tools and use them to solve problems may not be zoologically unique, but it is certainly extraordinary. Yet little is known about the conceptual machinery that makes humans so competent at making and using tools. Do adults and children have concepts specialized for understanding human-made artifacts? If so, are these concepts deployed in attempts to solve novel problems? Here we present new data, derived from problem-solving experiments, which support the following. (i) The structure of the child's concept of artifact function changes profoundly between ages 5 and 7. At age 5, the child's conceptual machinery defines the function of an artifact as any goal a user might have; by age 7, its function is defined by the artifact's typical or intended use. (ii) This conceptual shift has a striking effect on problem-solving performance, i.e. the child's concept of artifact function appears to be deployed in problem solving. (iii) This effect on problem solving is not caused by differences in the amount of knowledge that children have about the typical use of a particular tool; it is mediated by the structure of the child's artifact concept (which organizes and deploys the child's knowledge). In two studies, children between 5 and 7 years of age were matched for their knowledge of what a particular artifact "is for", and then given a problem that can only be solved if that tool is used for an atypical purpose. All children performed well in a baseline condition. But when they were primed by a demonstration of the artifact's typical function, 5-year-old children solved the problem much faster than 6-7-year-old children. Because all children knew what the tools were for, differences in knowledge alone cannot explain the results. 
We argue that the older children were slower to solve the problem when the typical function was primed because (i) their artifact concept plays a role in problem solving, and (ii) intended purpose is central to their concept of artifact function, but not to that of the younger children.
Semantic processing of unattended parafoveal words.
Di Pace, E; Longoni, A M; Zoccolotti, P
1991-08-01
The influence that a context word, presented either foveally or parafoveally, may exert on the processing of a subsequent target word was studied in a semantic decision task. Fourteen subjects participated in the experiment. They were presented with word-nonword pairs (the prime). One member of the pair (which the subjects had to attend to) appeared centrally, the other parafoveally. The prime was followed by a target at two inter-stimulus intervals (ISI; 200 and 2000 msec). The word stimulus of the pair could be semantically related or unrelated to the target. The subjects' task was to classify the target as animal or not animal by pressing one of two buttons as quickly as possible. When the target word was semantically associated with the foveal (attended) word, the reaction times were faster for both ISIs; when it was associated with the parafoveal (unattended) word in the prime pair, there were facilitatory effects only in the short ISI condition. A second experiment was run in order to evaluate the possibility that the obtained results were due to identification of the parafoveal stimulus. The same prime pairs of experiment 1 (without the target stimuli) were presented to fourteen subjects, who were requested to name the foveal (attended) stimulus and subsequently, if possible, the parafoveal (unattended) one. Even in this condition, the percentage of identification of the unattended word was only 15%, suggesting that the previous findings were not due to identification of unattended stimuli. Results are discussed in relation to Posner and Snyder's (1975) dual coding theory.
Pulvermüller, Friedemann; Shtyrov, Yury; Hauk, Olaf
2009-08-01
How long does it take the human mind to grasp the idea when hearing or reading a sentence? Neurophysiological methods looking directly at the time course of brain activity indexes of comprehension are critical for finding the answer to this question. As the dominant cognitive approaches, models of serial/cascaded and parallel processing, make conflicting predictions on the time course of psycholinguistic information access, they can be tested using neurophysiological brain activation recorded in MEG and EEG experiments. Seriality and cascading of lexical, semantic and syntactic processes receive support from late (latency ≈0.5 s) sequential neurophysiological responses, especially the N400 and P600. However, parallelism is substantiated by early, near-simultaneous brain indexes of a range of psycholinguistic processes, up to the level of semantic access and context integration, emerging already 100-250 ms after critical stimulus information is present. Crucially, however, there are reliable latency differences of 20-50 ms between early cortical area activations reflecting lexical, semantic and syntactic processes, which are left unexplained by current serial and parallel brain models of language. Here we offer a mechanistic model grounded in cortical nerve cell circuits that builds upon neuroanatomical and neurophysiological knowledge and explains both near-simultaneous activations and fine-grained delays. A key concept is that of discrete distributed cortical circuits with specific inter-area topographies. The full activation, or ignition, of specifically distributed binding circuits explains the near-simultaneity of early neurophysiological indexes of lexical, syntactic and semantic processing. Activity spreading within circuits, determined by between-area conduction delays, accounts for comprehension-related regional activation differences in the millisecond range.
Acoustic monitoring of first responder's physiology for health and performance surveillance
NASA Astrophysics Data System (ADS)
Scanlon, Michael V.
2002-08-01
Acoustic sensors have been used to monitor firefighter and soldier physiology to assess health and performance. The Army Research Laboratory has developed a unique body-contacting acoustic sensor that can monitor the health and performance of firefighters and soldiers while they carry out their missions. A gel-coupled sensor has acoustic impedance properties similar to the skin that facilitate the transmission of body sounds into the sensor pad, yet significantly repel ambient airborne noises due to an impedance mismatch. This technology can monitor heartbeats, breaths, blood pressure, motion, voice, and other indicators that can provide vital feedback to medics and unit commanders. Diverse physiological parameters can be continuously monitored with acoustic sensors and transmitted for remote surveillance of personnel status. Body-worn acoustic sensors located at the neck, breathing mask, and wrist do an excellent job of detecting heartbeats and activity. However, they have difficulty extracting physiology during rigorous exercise or movements due to the motion artifacts sensed. Rigorous activity often indicates that the person is healthy by virtue of being active, and injury often causes the subject to become less active or incapacitated, making the detection of physiology easier. One important measure of performance, heart rate variability, is the measure of beat-to-beat timing fluctuations derived from the interval between two adjacent beats. The Lomb periodogram is optimized for non-uniformly sampled data, and can be applied to non-stationary acoustic heart rate features (such as the 1st and 2nd heart sounds) to derive heart rate variability and help eliminate errors created by motion artifacts. Simple peak detection above or below a certain threshold, or waveform derivative parameters, can produce the timing and amplitude features necessary for the Lomb periodogram and cross-correlation techniques.
High-amplitude motion artifacts may contribute to a different frequency or baseline noise due to the timing differences between the noise artifacts and heartbeat features. Data from a firefighter experiment are presented.
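The Lomb periodogram step lends itself to a short sketch. Below is a hedged illustration using SciPy's `lombscargle`, not the author's code: beat times with a slow modulation of the beat-to-beat (RR) interval are generated, the RR series is left on its irregular time grid, and the periodogram recovers the modulation frequency without resampling. All signal parameters are invented for the demo.

```python
import numpy as np
from scipy.signal import lombscargle

# Simulated beat times: ~60 bpm with a 0.1 Hz modulation of the RR interval
# (values are illustrative, not data from the firefighter experiment).
t, beats = 0.0, []
while t < 300.0:
    t += 1.0 + 0.05 * np.sin(2 * np.pi * 0.1 * t)   # RR interval in seconds
    beats.append(t)
beats = np.array(beats)

rr = np.diff(beats)                   # non-uniformly sampled RR series
tm = beats[1:]                        # sample times are the beat times themselves
freqs = np.linspace(0.01, 0.5, 500)   # Hz; lombscargle expects rad/s
pgram = lombscargle(tm, rr - rr.mean(), 2 * np.pi * freqs)
peak = freqs[np.argmax(pgram)]        # recovers the simulated HRV modulation
```

Because the periodogram is evaluated directly at the irregular beat times, beats lost to motion artifacts simply drop out of the sum instead of forcing interpolation, which is the property the abstract highlights.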
Effects of metric hierarchy and rhyme predictability on word duration in The Cat in the Hat.
Breen, Mara
2018-05-01
Word durations convey many types of linguistic information, including intrinsic lexical features like length and frequency and contextual features like syntactic and semantic structure. The current study was designed to investigate whether hierarchical metric structure and rhyme predictability account for durational variation over and above other features in productions of a rhyming, metrically-regular children's book: The Cat in the Hat (Dr. Seuss, 1957). One-syllable word durations and inter-onset intervals were modeled as functions of segment number, lexical frequency, word class, syntactic structure, repetition, and font emphasis. Consistent with prior work, factors predicting longer word durations and inter-onset intervals included more phonemes, lower frequency, first mention, alignment with a syntactic boundary, and capitalization. A model parameter corresponding to metric grid height improved model fit of word durations and inter-onset intervals. Specifically, speakers realized five levels of metric hierarchy with inter-onset intervals such that interval duration increased linearly with increased height in the metric hierarchy. Conversely, speakers realized only three levels of metric hierarchy with word duration, demonstrating that they shortened the highly predictable rhyme resolutions. These results further understanding of the factors that affect spoken word duration, and demonstrate the myriad cues that children receive about linguistic structure from nursery rhymes. Copyright © 2018 Elsevier B.V. All rights reserved.
Cattelani, R; Corsini, D; Posteraro, L; Agosti, M; Saccavini, M
2009-12-01
The assessment of major obstacles to community integration which may result from an acquired brain injury (ABI) is needed for rational planning and effective management of ABI patients' social adjustment. Currently, such a generally acceptable measure is not available for the Italian population. This paper reports the translation process, the internal consistency, and the inter-rater reliability data for the Italian version of the Mayo-Portland Adaptability Inventory-4 (MPAI-4), a useful measure with highly developed and well documented psychometric properties. The MPAI-4 is specifically designed to assess socially relevant aspects of physical status and cognitive-behavioural competence following ABI. It is a 29-item inventory which is divided into three subdomains (Abilities, Adjustment, and Participation indices) covering a reasonably representative range. Twenty ABI patients at least one year after discharge from rehabilitation facilities were assessed with the Italian MPAI-4. They were independently rated by two different rehabilitation professionals and a family member/significant other serving as informant (SO). Internal consistency was assessed by calculating Cronbach's alpha values. Inter-rater agreement for individual items was statistically examined by determining the intraclass correlation coefficient (ICC). In addition to the 8% of perfectly corresponding sentences, a clear prevalence (75.5%) of minor semantic variations and formal variations with no semantic value was found at the sentence-to-sentence matching. Full-scale Cronbach's alpha was 0.951 and 0.947 for the two professionals (rater #1 and rater #2, respectively), and was 0.957 for the family member serving as informant (rater #3). Full-scale ICC (2,1) between professionals and SOs was 0.804 (CI=95%; lower-upper bound=0.688-0.901).
The Italian MPAI-4 shares many psychometric features with the original English version and demonstrates both good internal consistency and good inter-rater reliability. The MPAI-4 thus proves suitable for research applications in postacute settings as an efficient, broad and inclusive outcome measure for adult subjects with ABI.
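For readers unfamiliar with the statistic, Cronbach's alpha can be computed directly from an item-score matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). This is a generic sketch with invented toy ratings, not the MPAI-4 items.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy ratings: 6 subjects x 4 items that largely agree -> alpha near 1
scores = np.array([
    [1, 2, 1, 2],
    [2, 2, 3, 2],
    [3, 4, 3, 3],
    [4, 4, 4, 5],
    [4, 5, 5, 4],
    [5, 5, 4, 5],
])
alpha = cronbach_alpha(scores)
```

When the items covary strongly, the total-score variance dwarfs the summed item variances and alpha approaches 1, which is the pattern the full-scale values of 0.95-0.96 above reflect.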
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elzibak, A; Loblaw, A; Morton, G
Purpose: To investigate the usefulness of metal artifact reduction in CT images of patients with bilateral hip prostheses (BHP) for contouring the prostate and determine if the inclusion of MR images provides additional benefits. Methods: Five patients with BHP were CT scanned using our clinical protocol (140kV, 300mAs, 3mm slices, 1.5mm increment, Philips Medical Systems, OH). Images were reconstructed with the orthopaedic metal artifact reduction (O-MAR) algorithm. MRI scanning was then performed (1.5T, GE Healthcare, WI) with a flat table-top (T2-weighted, inherent body coil, FRFSE, 3mm slices with 0mm gap). All images were transferred to Pinnacle (Version 9.2, Philips Medical Systems). For each patient, two data sets were produced: one containing the O-MAR-corrected CT images and another containing fused MRI and O-MAR-corrected CT images. Four genito-urinary radiation oncologists contoured the prostate of each patient on the O-MAR-corrected CT data. Two weeks later, they contoured the prostate on the fused data set, blinded to all other contours. During each contouring session, the oncologists reported their confidence in the contours (1=very confident, 3=not confident) and the contouring difficulty that they experienced (1=really easy, 4=very challenging). Prostate volumes were computed from the contours and the conformity index was used to evaluate inter-observer variability. Results: Larger prostate volumes were found on the O-MAR-corrected CT set than on the fused set (p<0.05, median=36.9 cm³ vs. 26.63 cm³). No significant differences were noted in the inter-observer variability between the two data sets (p=0.3). Contouring difficulty decreased with the addition of MRI (p<0.05) while the radiation oncologists reported more confidence in their contours when MRI was fused with the O-MAR-corrected CT data (p<0.05).
Conclusion: This preliminary work demonstrated that, while O-MAR correction to CT images improves visualization of anatomy, the addition of MRI enhanced the oncologists' confidence in contouring the prostate in patients with BHP.
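The conformity index used above to quantify inter-observer variability has several definitions in the literature; one common generalized form is the volume delineated by all observers divided by the volume delineated by any observer. A minimal sketch on binary masks, for illustration only:

```python
import numpy as np

def conformity_index(masks):
    """Conformity index for a list of binary contour masks:
    voxels common to all observers / voxels delineated by any observer."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    intersection = stack.all(axis=0).sum()
    union = stack.any(axis=0).sum()
    return intersection / union

# Two hypothetical observers contour overlapping square 'prostate' regions
a = np.zeros((10, 10), bool); a[2:7, 2:7] = True   # 25 voxels
b = np.zeros((10, 10), bool); b[3:8, 3:8] = True   # 25 voxels
ci = conformity_index([a, b])                      # 16 shared / 34 total
```

An index of 1 means perfect agreement; lower values indicate that the observers' contours diverge, which is the quantity compared between the CT-only and fused data sets.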
Focal temporal pole atrophy and network degeneration in semantic variant primary progressive aphasia
Collins, Jessica A; Montal, Victor; Hochberg, Daisy; Quimby, Megan; Mandelli, Maria Luisa; Makris, Nikos; Seeley, William W; Gorno-Tempini, Maria Luisa; Dickerson, Bradford C
2017-01-01
A wealth of neuroimaging research has associated semantic variant primary progressive aphasia with distributed cortical atrophy that is most prominent in the left anterior temporal cortex; however, there is little consensus regarding which region within the anterior temporal cortex is most prominently damaged, which may indicate the putative origin of neurodegeneration. In this study, we localized the most prominent and consistent region of atrophy in semantic variant primary progressive aphasia using cortical thickness analysis in two independent patient samples (n = 16 and 28, respectively) relative to age-matched controls (n = 30). Across both samples the point of maximal atrophy was located in the same region of the left temporal pole. This same region was the point of maximal atrophy in 100% of individual patients in both semantic variant primary progressive aphasia samples. Using resting state functional connectivity in healthy young adults (n = 89), we showed that the seed region derived from the semantic variant primary progressive aphasia analysis was strongly connected with a large-scale network that closely resembled the distributed atrophy pattern in semantic variant primary progressive aphasia. In both patient samples, the magnitude of atrophy within a brain region was predicted by that region’s strength of functional connectivity to the temporopolar seed region in healthy adults. These findings suggest that cortical atrophy in semantic variant primary progressive aphasia may follow connectional pathways within a large-scale network that converges on the temporal pole. PMID:28040670
Semantic ambiguity effects on traditional Chinese character naming: A corpus-based approach.
Chang, Ya-Ning; Lee, Chia-Ying
2017-11-09
Words are considered semantically ambiguous if they have more than one meaning and can be used in multiple contexts. A number of recent studies have provided objective ambiguity measures by using a corpus-based approach and have demonstrated ambiguity advantages in both naming and lexical decision tasks. Although the predictive power of objective ambiguity measures has been examined in several alphabetic language systems, the effects in logographic languages remain unclear. Moreover, most ambiguity measures do not explicitly address how the various contexts associated with a given word relate to each other. To explore these issues, we computed the contextual diversity (Adelman, Brown, & Quesada, Psychological Science, 17, 814-823, 2006) and semantic ambiguity (Hoffman, Lambon Ralph, & Rogers, Behavior Research Methods, 45, 718-730, 2013) of traditional Chinese single-character words based on the Academia Sinica Balanced Corpus, where contextual diversity was used to evaluate the present semantic space. We then derived a novel ambiguity measure, namely semantic variability, by computing the distance properties of the distinct clusters grouped by the contexts that contained a given word. We demonstrated that semantic variability was superior to semantic diversity in accounting for the variance in naming response times, suggesting that considering the substructure of the various contexts associated with a given word can provide a relatively fine scale of ambiguity information for a word. All of the context and ambiguity measures for 2,418 Chinese single-character words are provided as supplementary materials.
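The ambiguity measures discussed here derive from the similarity structure of the contexts in which a word occurs. As a rough illustration (not the paper's computation), a Hoffman-style semantic diversity can be sketched as the negative log of the mean pairwise cosine similarity among a word's context vectors; the paper's semantic variability measure additionally clusters those contexts, which this sketch omits. The toy context vectors are invented.

```python
import numpy as np

def semantic_diversity(context_vectors):
    """Higher when the contexts containing a word are dissimilar to one
    another; computed as -log(mean pairwise cosine similarity)."""
    V = np.asarray(context_vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    sims = V @ V.T
    iu = np.triu_indices(len(V), k=1)          # each unordered pair once
    return -np.log(sims[iu].mean())

# Toy topic vectors: an ambiguous word occurs in varied contexts,
# an unambiguous word in a narrow band of similar contexts.
varied = np.array([[1, 0, 0], [0, 1, 0], [0.1, 0, 1], [1, 0.1, 0.2]])
narrow = np.array([[1, 0.1, 0], [0.9, 0.2, 0], [1, 0, 0.1], [0.95, 0.1, 0]])
```

A word whose contexts scatter across the semantic space gets a higher score than one whose contexts cluster tightly, which is the intuition the clustering-based semantic variability measure refines.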
Artifact suppression and analysis of brain activities with electroencephalography signals.
Rashed-Al-Mahfuz, Md; Islam, Md Rabiul; Hirose, Keikichi; Molla, Md Khademul Islam
2013-06-05
Brain-computer interface is a communication system that connects the brain with a computer (or other devices) but is not dependent on the normal output of the brain (i.e., peripheral nerves and muscles). Electro-oculogram is a dominant artifact which has a significant negative influence on further analysis of real electroencephalography data. This paper presents a data-adaptive technique for artifact suppression and brain wave extraction from electroencephalography signals to detect regional brain activities. An empirical mode decomposition based adaptive thresholding approach was employed here to suppress the electro-oculogram artifact. Fractional Gaussian noise was used to determine the threshold level derived from the analysis data without any training. The purified electroencephalography signal was composed of the brain waves, also called rhythmic components, which represent the brain activities. The rhythmic components were extracted from each electroencephalography channel using an adaptive Wiener filter with the original scale. The regional brain activities were mapped on the basis of the spatial distribution of rhythmic components, and the results showed that different regions of the brain are activated in response to different stimuli. This research analyzed the activities of a single rhythmic component, alpha, with respect to different motor imaginations. The experimental results showed that the proposed method is very efficient in artifact suppression and in identifying individual motor imagery based on the activities of the alpha component.
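The decomposition-plus-thresholding idea can be caricatured in a few lines. The sketch below assumes the intrinsic mode functions (IMFs) have already been obtained from some EMD implementation and flags components whose energy departs from a log-linear noise trend; this is a simplification of the paper's fractional-Gaussian-noise-derived threshold, and the function `artifact_imfs` and its tolerance `k` are illustrative assumptions.

```python
import numpy as np

def artifact_imfs(imfs, k=2.0):
    """Flag IMFs whose energy departs from a noise-only model.

    imfs : (n_imfs, n_samples) array from any EMD implementation.
    For noise-like data, log-energy falls roughly linearly across
    successive IMFs; components far above that trend are treated as
    artifact-dominated (e.g., large ocular deflections).
    """
    energy = np.log((imfs ** 2).mean(axis=1))
    idx = np.arange(len(energy))
    slope, intercept = np.polyfit(idx, energy, 1)   # fitted noise trend line
    resid = energy - (slope * idx + intercept)
    return resid > k * resid.std()

# Synthetic demo: noise-like IMFs with geometrically decreasing energy,
# plus one injected high-energy 'ocular' component at index 2.
rng = np.random.default_rng(0)
imfs = np.stack([rng.normal(size=4096) * 2.0 ** -i for i in range(6)])
imfs[2] *= 50
flags = artifact_imfs(imfs)
```

Flagged components would then be suppressed or thresholded before reconstructing the purified EEG, while the remaining IMFs carry the rhythmic activity of interest.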
Eddy current compensation for delta relaxation enhanced MR by dynamic reference phase modulation.
Hoelscher, Uvo Christoph; Jakob, Peter M
2013-04-01
Eddy current compensation by dynamic reference phase modulation (eDREAM) is a compensation method for eddy current fields induced by B0 field-cycling which occur in delta relaxation enhanced MR (dreMR) imaging. The presented method is based on a dynamic frequency adjustment and prevents eddy-current-related artifacts. It is easy to implement and can be completely realized in software for any imaging sequence. In this paper, the theory of eDREAM is derived and two applications are demonstrated. The theory describes how to model the behavior of the eddy currents and how to implement the compensation. Phantom and in vivo measurements are carried out and demonstrate the benefits of eDREAM. A comparison of images acquired with and without eDREAM shows a significant improvement in dreMR image quality. Images without eDREAM suffer from severe artifacts and do not allow proper interpretation, while images with eDREAM are artifact-free. In vivo experiments demonstrate that dreMR imaging without eDREAM is not feasible, as artifacts completely change the image contrast. eDREAM is a flexible eddy current compensation for dreMR. It is capable of completely removing the influence of eddy currents such that the dreMR images do not suffer from artifacts.
A step-wise approach for analysis of the mouse embryonic heart using 17.6 Tesla MRI
Gabbay-Benziv, Rinat; Reece, E. Albert; Wang, Fang; Bar-Shir, Amnon; Harman, Chris; Turan, Ozhan M.; Yang, Peixin; Turan, Sifa
2018-01-01
Background: The mouse embryo is ideal for studying human cardiac development. However, laboratory discoveries do not easily translate into clinical findings, partially because of histological diagnostic techniques that induce artifacts and lack standardization. Aim: To present a step-wise approach, using 17.6 T MRI, for evaluation of the mouse embryonic heart and accurate identification of congenital heart defects. Subjects: Embryonic day 17.5 embryos from low-risk (non-diabetic) and high-risk (diabetic) model dams. Study design: Embryos were imaged using 17.6 Tesla MRI. Three-dimensional volumes were analyzed using ImageJ software. Outcome measures: Embryonic hearts were evaluated utilizing anatomic landmarks to locate the four-chamber view, the left- and right-outflow tracts, and the arrangement of the great arteries. Inter- and intra-observer agreement were calculated using kappa scores by comparing two researchers’ evaluations, independently analyzing all hearts, blinded to the model, on three different, timed occasions. Each evaluated 16 imaging volumes of 16 embryos: 4 embryos from normal dams, and 12 embryos from diabetic dams. Results: Inter-observer agreement and reproducibility were 0.779 (95% CI 0.653–0.905) and 0.763 (95% CI 0.605–0.921), respectively. Embryonic hearts were structurally normal in 4/4 and 7/12 embryos from normal and diabetic dams, respectively. Five embryos from diabetic dams had defects: ventricular septal defects (n = 2), transposition of the great arteries (n = 2) and Tetralogy of Fallot (n = 1). Both researchers identified all cardiac lesions. Conclusion: A step-wise approach for analysis of MRI-derived 3D imaging provides reproducible, detailed cardiac evaluation of normal and abnormal mouse embryonic hearts. This approach can accurately reveal cardiac structure and thus increases the yield of animal models in congenital heart defect research. PMID:27569369
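Inter-observer agreement of the kind reported above is commonly summarized with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A generic sketch with invented toy ratings, not the study's data:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical judgments."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = (r1 == r2).mean()                     # observed agreement
    pe = sum((r1 == c).mean() * (r2 == c).mean() for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# Two hypothetical readers classifying 10 hearts (1 = defect present)
r1 = [0, 0, 0, 1, 1, 0, 1, 0, 0, 1]
r2 = [0, 0, 0, 1, 1, 0, 1, 1, 0, 1]
kappa = cohens_kappa(r1, r2)    # 9/10 raw agreement, kappa = 0.8 after chance correction
```

Values around 0.76-0.78, as in the study, fall in the range conventionally read as substantial agreement.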
NASA Astrophysics Data System (ADS)
Troy, S.; Aharon, P.; Lambert, W. J.
2012-12-01
El Niño-Southern Oscillation's (ENSO) dominant control over the present global climate and its unpredictable response to global warming make the study of paleo-ENSO important. So far corals, spanning the Tropical Pacific Ocean, are the most commonly used geological archives of paleo-ENSO. This is because corals typically exhibit high growth rates (>1 cm/yr) and reliably record surface water temperatures at sub-annual resolution. However, there are limitations to coral archives because their time span is relatively brief (on the order of centuries), thus far making a long and continuous ENSO record difficult to achieve. On the other hand, stalagmites from island settings can offer long and continuous records of ENSO-driven rainfall. Niue Island caves offer an unusual opportunity to investigate ENSO-driven paleo-rainfall because the island is isolated from other large land masses, making it untainted by continental climate artifacts, and its geographical location is within the Tropical Pacific "rain pool" (South Pacific Convergence Zone; SPCZ), which makes the rainfall variability particularly sensitive to ENSO phase switches. We present here a δ18O and δ13C time series from a stalagmite sampled on Niue Island (19°00' S, 169°50' W) that exhibits exceptionally high growth rates (~1.2 mm/yr), affording a resolution comparable to corals but for much longer time spans. A precise chronology, dating back several millennia, was achieved by U/Th dating of the stalagmite. The stalagmite was sampled using a Computer Automated Mill (CAM) at 300 μm increments to achieve sub-annual resolution (every 3 months), and calcite powders of 50-100 μg weight were analyzed for δ18O and δ13C using a Continuous Flow Isotope Ratio Mass Spectrometer (CF-IRMS). The isotope time series contains variable shifts at seasonal, inter-annual, and inter-decadal periodicities. The δ13C and δ18O yield ranges of -3.0 to -13.0 (‰ VPDB) and -3.2 to -6.2 (‰ VPDB), respectively.
The presentation will describe the factors impacting the seasonal, inter-annual and inter-decadal variability in a highly resolved ENSO record.
Removal of BCG artifacts using a non-Kirchhoffian overcomplete representation.
Dyrholm, Mads; Goldman, Robin; Sajda, Paul; Brown, Truman R
2009-02-01
We present a nonlinear unmixing approach for extracting the ballistocardiogram (BCG) from EEG recorded in an MR scanner during simultaneous acquisition of functional MRI (fMRI). First, an overcomplete basis is identified in the EEG based on a custom multipath EEG electrode cap. Next, the overcomplete basis is used to infer non-Kirchhoffian latent variables that are not consistent with a conservative electric field. Neural activity is strictly Kirchhoffian while the BCG artifact is not, and the representation can hence be used to remove the artifacts from the data in a way that does not attenuate the neural signals needed for optimal single-trial classification performance. We compare our method to more standard methods for BCG removal, namely independent component analysis and optimal basis sets, by looking at single-trial classification performance for an auditory oddball experiment. We show that our overcomplete representation method for removing BCG artifacts results in better single-trial classification performance compared to the conventional approaches, indicating that the derived neural activity in this representation retains the complex information in the trial-to-trial variability.
Comtois, Gary; Mendelson, Yitzhak; Ramuka, Piyush
2007-01-01
Wearable physiological monitoring using a pulse oximeter would enable field medics to monitor multiple injuries simultaneously, thereby prioritizing medical intervention when resources are limited. However, a primary factor limiting the accuracy of pulse oximetry is poor signal-to-noise ratio, since photoplethysmographic (PPG) signals, from which arterial oxygen saturation (SpO2) and heart rate (HR) measurements are derived, are compromised by movement artifacts. This study was undertaken to quantify SpO2 and HR errors induced by certain motion artifacts, utilizing accelerometry-based adaptive noise cancellation (ANC). Since the fingers are generally more vulnerable to motion artifacts, measurements were performed using a custom forehead-mounted wearable pulse oximeter developed for real-time remote physiological monitoring and triage applications. This study revealed that processing motion-corrupted PPG signals by least mean squares (LMS) and recursive least squares (RLS) algorithms can be effective in reducing SpO2 and HR errors during jogging, but the degree of improvement depends on filter order. Although both algorithms produced similar improvements, implementing the adaptive LMS algorithm is advantageous since it requires significantly fewer operations.
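The accelerometry-based ANC scheme can be sketched with a textbook LMS filter: the accelerometer supplies a noise reference, an adaptive FIR filter learns the mapping from reference to artifact, and the filter's error output is the cleaned PPG. Everything below (signal shapes, step size `mu`, filter `order`) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.01):
    """Adaptive noise cancellation via the LMS algorithm.

    primary   : motion-corrupted PPG samples
    reference : accelerometer samples correlated with the motion artifact
    Returns the error signal, i.e. the PPG with the artifact estimate removed.
    """
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]       # reference tap vector
        e = primary[n] - w @ x                 # subtract current artifact estimate
        w += 2 * mu * e * x                    # LMS weight update
        out[n] = e
    return out

# Synthetic demo: 1 Hz 'pulse' plus a motion artifact driven by the reference
fs = 100
t = np.arange(0, 20, 1 / fs)
pulse = np.sin(2 * np.pi * 1.0 * t)
accel = np.sin(2 * np.pi * 2.7 * t)            # jogging-like motion reference
ppg = pulse + 1.5 * accel                      # corrupted PPG
clean = lms_cancel(ppg, accel)
```

Because the pulse is uncorrelated with the accelerometer reference, the filter converges toward cancelling only the motion component; an RLS variant converges faster at the cost of the extra operations noted in the abstract.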
Priming semantic concepts affects the dynamics of aesthetic appreciation.
Faerber, Stella J; Leder, Helmut; Gerger, Gernot; Carbon, Claus-Christian
2010-10-01
Aesthetic appreciation (AA) plays an important role for purchase decisions, for the appreciation of art and even for the selection of potential mates. It is known that AA is highly reliable in single assessments, but over longer periods of time dynamic changes of AA may occur. We measured AA as a construct derived from the literature through attractiveness, arousal, interestingness, valence, boredom and innovativeness. By means of the semantic network theory we investigated how the priming of AA-relevant semantic concepts impacts the dynamics of AA of unfamiliar product designs (car interiors) that are known to be susceptible to triggering such effects. When participants were primed for innovativeness, strong dynamics were observed, especially when the priming involved additional AA-relevant dimensions. This underlines the relevance of priming of specific semantic networks not only for the cognitive processing of visual material in terms of selective perception or specific representation, but also for the affective-cognitive processing in terms of the dynamics of aesthetic processing. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lange, Rense
2015-02-01
An extension of concurrent validity is proposed that uses qualitative data for the purpose of validating quantitative measures. The approach relies on Latent Semantic Analysis (LSA), which places verbal (written) statements in a high-dimensional semantic space. Using data from a medical/psychiatric domain as a case study, Near Death Experiences (NDEs), we established concurrent validity by connecting NDErs' qualitative (written) experiential accounts with their locations on a Rasch-scalable measure of NDE intensity. Concurrent validity received strong empirical support, since the variance in the Rasch measures could be predicted reliably from the coordinates of the accounts in the LSA-derived semantic space (R2 = 0.33). These coordinates also predicted NDErs' age with considerable precision (R2 = 0.25). Both estimates are probably artificially low due to the small available data sample (n = 588). It appears that Rasch scalability of NDE intensity is a prerequisite for these findings, as each intensity level is associated (at least probabilistically) with a well-defined pattern of item endorsements.
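The LSA-plus-regression pipeline described above can be illustrated in a few lines: build a term-document matrix, take a truncated SVD to get document coordinates in the semantic space, then regress an external quantitative measure on those coordinates and report R². The corpus, the intensity scores, and the choice of two latent dimensions are all made up for the sketch.

```python
import numpy as np

docs = [
    "light tunnel peace light",
    "tunnel light warmth",
    "fear darkness fear",
    "darkness fear cold",
]
vocab = sorted({w for d in docs for w in d.split()})
# Term-document count matrix (rows: words, columns: documents)
X = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

# Truncated SVD: keep k latent dimensions -> the "semantic space"
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_coords = (np.diag(s[:k]) @ Vt[:k]).T    # each row: a document's coordinates

# Concurrent-validity sketch: regress an external quantitative measure
# (here a hypothetical intensity score per account) on the LSA coordinates.
y = np.array([3.0, 2.5, 0.5, 0.2])
A = np.hstack([doc_coords, np.ones((len(docs), 1))])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

With real accounts one would use many more documents than latent dimensions; here the tiny example only shows the mechanics of predicting a Rasch-style measure from semantic-space coordinates.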
Oppenheim, Gary M; Dell, Gary S; Schwartz, Myrna F
2010-02-01
Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have claimed that the findings are only understandable by positing a competitive mechanism for lexical selection. We present a simple model of lexical retrieval in speech production that applies error-driven learning to its lexical activation network. This model naturally produces repetition priming and semantic interference effects. It predicts the major findings from several published experiments, demonstrating that these effects may arise from incremental learning. Furthermore, analysis of the model suggests that competition during lexical selection is not necessary for semantic interference if the learning process is itself competitive. Copyright 2009 Elsevier B.V. All rights reserved.
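The error-driven learning account sketched in this abstract can be shown with a toy feature-to-word network trained by the delta rule: one naming trial for "dog" strengthens the mapping to "dog" (repetition priming) and, because "cat" shares features with "dog", pushes the shared-feature weights to "cat" down (semantic interference). The features, weights, and learning rate below are illustrative assumptions, not the published model's parameters.

```python
import numpy as np

features = {"dog": np.array([1.0, 1.0, 0.0]),   # animal, barks, meows
            "cat": np.array([1.0, 0.0, 1.0])}
words = ["dog", "cat"]
W = np.full((2, 3), 0.3)                        # feature -> word weights

def activation(word, W):
    """Activation of a word's lexical node given its own semantic features."""
    return W[words.index(word)] @ features[word]

before = {w: activation(w, W) for w in words}

# One error-driven naming trial for "dog": delta-rule weight update
lr = 0.2
f = features["dog"]
target = np.array([1.0, 0.0])                   # correct word: dog, not cat
out = W @ f
W += lr * np.outer(target - out, f)

after = {w: activation(w, W) for w in words}
# after["dog"] > before["dog"]  -> repetition priming
# after["cat"] < before["cat"]  -> semantic interference via shared features
```

Note that nothing here is a competitive selection mechanism; the interference falls out of the learning rule alone, which is the abstract's central claim.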
Cohen, Trevor; Blatter, Brett; Patel, Vimla
2008-01-01
Cognitive studies reveal that less-than-expert clinicians are less able to recognize meaningful patterns of data in clinical narratives. Accordingly, psychiatric residents early in training fail to attend to information that is relevant to diagnosis and the assessment of dangerousness. This manuscript presents a cognitively motivated methodology for simulating the expert ability to organize relevant findings supporting intermediate diagnostic hypotheses. Latent Semantic Analysis is used to generate a semantic space from which meaningful associations between psychiatric terms are derived. Diagnostically meaningful clusters are modeled as geometric structures within this space and compared to elements of psychiatric narrative text using semantic distance measures. A learning algorithm is defined that alters components of these geometric structures in response to labeled training data. Extraction and classification of relevant text segments is evaluated against expert annotation, with system-rater agreement approximating rater-rater agreement. A range of biomedical informatics applications for these methods is suggested. PMID:18455483
Chinese Passives: Transformational or Lexical?
ERIC Educational Resources Information Center
Zhang, Jiuwu; Wen, Xiaohong
Analysis of Chinese passive constructions indicates two types. The first is a verbal or syntactic passive because it is derived through a transformational rule. The second is a lexical passive that has certain properties in common with the predicate adjectives in both Chinese and English and is derived through the semantic function and in lexical…
Cohen, Trevor; Blatter, Brett; Patel, Vimla
2005-01-01
Certain applications require computer systems to approximate intended human meaning. This is achievable in constrained domains with a finite number of concepts. Areas such as psychiatry, however, draw on concepts from the world-at-large. A knowledge structure with broad scope is required to comprehend such domains. Latent Semantic Analysis (LSA) is an unsupervised corpus-based statistical method that derives quantitative estimates of the similarity between words and documents from their contextual usage statistics. The aim of this research was to evaluate the ability of LSA to derive meaningful associations between concepts relevant to the assessment of dangerousness in psychiatry. An expert reference model of dangerousness was used to guide the construction of a relevant corpus. Derived associations between words in the corpus were evaluated qualitatively. A similarity-based scoring function was used to assign dangerousness categories to discharge summaries. LSA was shown to derive intuitive relationships between concepts and correlated significantly better than random with human categorization of psychiatric discharge summaries according to dangerousness. The use of LSA to derive a simulated knowledge structure can extend the scope of computer systems beyond the boundaries of constrained conceptual domains. PMID:16779020
Order recall in verbal short-term memory: The role of semantic networks.
Poirier, Marie; Saint-Aubin, Jean; Mair, Ali; Tehan, Gerry; Tolan, Anne
2015-04-01
In their recent article, Acheson, MacDonald, and Postle (Journal of Experimental Psychology: Learning, Memory, and Cognition 37:44-59, 2011) made an important but controversial suggestion: They hypothesized that (a) semantic information has an effect on order information in short-term memory (STM) and (b) order recall in STM is based on the level of activation of items within the relevant lexico-semantic long-term memory (LTM) network. However, verbal STM research has typically led to the conclusion that factors such as semantic category have a large effect on the number of correctly recalled items, but little or no impact on order recall (Poirier & Saint-Aubin, Quarterly Journal of Experimental Psychology 48A:384-404, 1995; Saint-Aubin, Ouellette, & Poirier, Psychonomic Bulletin & Review 12:171-177, 2005; Tse, Memory 17:874-891, 2009). Moreover, most formal models of short-term order memory currently suggest a separate mechanism for order coding, that is, one that is separate from item representation and not associated with LTM lexico-semantic networks. Both of the experiments reported here tested the predictions that we derived from Acheson et al. The findings show that, as predicted, manipulations aiming to affect the activation of item representations significantly impacted order memory.
Xu, Hua; AbdelRahman, Samir; Lu, Yanxin; Denny, Joshua C.; Doan, Son
2011-01-01
Semantic-based sublanguage grammars have been shown to be an efficient method for medical language processing. However, given the complexity of the medical domain, parsers using such grammars inevitably encounter ambiguous sentences, which can be interpreted by different groups of production rules and consequently yield two or more parse trees. One possible solution, which has not been extensively explored previously, is to augment the productions in medical sublanguage grammars with probabilities to resolve the ambiguity. In this study, we associated probabilities with production rules in a semantic-based grammar for medication findings and evaluated the effect on parsing ambiguity. Using the existing data set from the 2009 i2b2 NLP (Natural Language Processing) challenge for medication extraction, we developed a semantic-based CFG (Context Free Grammar) for parsing medication sentences and manually created a Treebank of 4,564 medication sentences from discharge summaries. Using the Treebank, we derived a semantic-based PCFG (probabilistic Context Free Grammar) for parsing medication sentences. Our evaluation using 10-fold cross-validation showed that the PCFG parser dramatically improved parsing performance compared to the CFG parser. PMID:21856440
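The step from CFG to PCFG described above is, at its core, maximum-likelihood estimation of rule probabilities from a treebank: P(rule) = count(rule) / count(rules with the same left-hand side), and a parse's score is the product of its rule probabilities. The semantic categories (MED, DRUG, DOSE, ...) and counts below are hypothetical, not taken from the i2b2 grammar.

```python
from collections import Counter

# Hypothetical treebank fragment: each tree flattened to its production rules
treebank_rules = [
    ("MED", ("DRUG", "DOSE")), ("MED", ("DRUG", "DOSE")),
    ("MED", ("DRUG", "FREQ")),
    ("DOSE", ("NUM", "UNIT")), ("DOSE", ("NUM",)),
    ("DOSE", ("NUM", "UNIT")),
]

rule_counts = Counter(treebank_rules)
lhs_counts = Counter(lhs for lhs, _ in treebank_rules)

# Maximum-likelihood estimate: P(rule) = count(rule) / count(LHS)
pcfg = {rule: c / lhs_counts[rule[0]] for rule, c in rule_counts.items()}

def parse_prob(rules):
    """Score a candidate parse as the product of its rule probabilities."""
    p = 1.0
    for r in rules:
        p *= pcfg[r]
    return p

# Ambiguity resolution sketch: prefer the higher-probability derivation
parse_a = [("MED", ("DRUG", "DOSE")), ("DOSE", ("NUM", "UNIT"))]
parse_b = [("MED", ("DRUG", "DOSE")), ("DOSE", ("NUM",))]
best = parse_a if parse_prob(parse_a) > parse_prob(parse_b) else parse_b
```

When the parser returns multiple trees for an ambiguous medication sentence, the tree with the highest product of rule probabilities is selected, which is the disambiguation mechanism the abstract evaluates.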
Sauerschnig, Claudia; Doppler, Maria
2017-01-01
Many metabolomics studies use mixtures of (acidified) methanol and water for sample extraction. In the present study, we investigated if the extraction with methanol can result in artifacts. To this end, wheat leaves were extracted with mixtures of native and deuterium-labeled methanol and water, with or without 0.1% formic acid. Subsequently, the extracts were analyzed immediately or after storage at 10 °C, −20 °C or −80 °C with an HPLC-HESI-QExactive HF-Orbitrap instrument. Our results showed that 88 (8%) of the >1100 detected compounds were derived from the reaction with methanol and either formed during sample extraction or short-term storage. Artifacts were found for various substance classes such as flavonoids, carotenoids, tetrapyrrols, fatty acids and other carboxylic acids that are typically investigated in metabolomics studies. 58 of 88 artifacts were common between the two tested extraction variants. Remarkably, 34 of 73 (acidified extraction solvent) and 33 of 73 (non-acidified extraction solvent) artifacts were formed de novo as none of these meth(ox)ylated metabolites were found after extraction of native leaf samples with CD3OH/H2O. Moreover, sample extracts stored at 10 °C for several days, as can typically be the case during longer measurement sequences, led to an increase in both the number and abundance of methylated artifacts. In contrast, frozen sample extracts were relatively stable during a storage period of one week. Our study shows that caution has to be exercised if methanol is used as the extraction solvent as the detected metabolites might be artifacts rather than natural constituents of the biological system. In addition, we recommend storing sample extracts in deep freezers immediately after extraction until measurement. PMID:29271872
A Common Mechanism in Verb and Noun Naming Deficits in Alzheimer’s Patients
Almor, Amit; Aronoff, Justin M.; MacDonald, Maryellen C.; Gonnerman, Laura M.; Kempler, Daniel; Hintiryan, Houri; Hayes, UnJa L.; Arunachalam, Sudha; Andersen, Elaine S.
2009-01-01
We tested the ability of Alzheimer's patients and elderly controls to name living and non-living nouns, and manner and instrument verbs. Patients' error patterns and relative performance with different categories showed evidence of graceful degradation for both nouns and verbs, with particular domain-specific impairments for living nouns and instrument verbs. Our results support feature-based semantic representations for nouns and verbs, the role of inter-correlated features in noun impairment, and the role of noun knowledge in instrument verb impairment. PMID:19699513
Kobayashi, Shinji; Kume, Naoto; Yoshihara, Hiroyuki
2015-01-01
In 2001, we developed an EHR system for regional healthcare information inter-exchange and to provide individual patient data to patients. This system was adopted in three regions in Japan. We also developed a Medical Markup Language (MML) standard for inter- and intra-hospital communications. The system was built on a legacy platform, however, and had not been appropriately maintained or updated to meet clinical requirements. To improve future maintenance costs, we reconstructed the EHR system using archetype technology on the Ruby on Rails platform, and generated MML equivalent forms from archetypes. The system was deployed as a cloud-based system for preliminary use as a regional EHR. The system now has the capability to catch up with new requirements, maintaining semantic interoperability with archetype technology. It is also more flexible than the legacy EHR system.
Jointly learning word embeddings using a corpus and a knowledge base
Bollegala, Danushka; Maehara, Takanori; Kawarabayashi, Ken-ichi
2018-01-01
Methods for representing the meaning of words in vector spaces purely using the information distributed in text corpora have proved to be very valuable in various text mining and natural language processing (NLP) tasks. However, these methods still disregard the valuable semantic relational structure between words in co-occurring contexts. These beneficial semantic relational structures are contained in manually-created knowledge bases (KBs) such as ontologies and semantic lexicons, where the meanings of words are represented by defining the various relationships that exist among those words. We combine the knowledge in both a corpus and a KB to learn better word embeddings. Specifically, we propose a joint word representation learning method that uses the knowledge in the KB, and simultaneously predicts the co-occurrences of two words in a corpus context. In particular, we use the corpus to define our objective function subject to the relational constraints derived from the KB. We further utilise the corpus co-occurrence statistics to propose two novel approaches, Nearest Neighbour Expansion (NNE) and Hedged Nearest Neighbour Expansion (HNE), that dynamically expand the KB and therefore derive more constraints that guide the optimisation process. Our experimental results over a wide range of benchmark tasks demonstrate that the proposed method statistically significantly improves the accuracy of the word embeddings learnt. It outperforms a corpus-only baseline as well as a number of previously proposed methods that incorporate corpora and KBs, in both semantic similarity prediction and word analogy detection tasks. PMID:29529052
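The joint objective described above (a corpus co-occurrence term plus KB-derived relational constraints) can be sketched with plain gradient descent on a toy vocabulary. Everything here is an assumption for illustration: the log(1+count) corpus target is a GloVe-style stand-in for the paper's objective, and the single KB constraint pulls the vectors of a known synonym pair together.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["car", "automobile", "road", "banana"]
# Toy co-occurrence counts (the corpus signal)
C = np.array([[0, 2, 8, 0],
              [2, 0, 7, 0],
              [8, 7, 0, 1],
              [0, 0, 1, 0]], float)
kb_synonyms = [(0, 1)]                      # KB relation: car <-> automobile

W = 0.1 * rng.standard_normal((4, 4))       # word vectors, d = 4
lam, lr = 0.5, 0.05
target = np.log1p(C)                        # corpus target for dot products

for _ in range(500):
    grad = np.zeros_like(W)
    # Corpus term: w_i . w_j should match log(1 + count_ij)
    for i in range(4):
        for j in range(4):
            if i != j:
                err = W[i] @ W[j] - target[i, j]
                grad[i] += err * W[j]
    # KB term: relational constraint pulls related words' vectors together
    for i, j in kb_synonyms:
        grad[i] += lam * (W[i] - W[j])
        grad[j] += lam * (W[j] - W[i])
    W -= lr * grad
```

After training, the KB-linked pair ends up far more similar than an unrelated pair, which is the qualitative effect the joint method aims for; the paper's NNE/HNE expansions would add further constraints of the same form.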
Target-Oriented High-Resolution SAR Image Formation via Semantic Information Guided Regularizations
NASA Astrophysics Data System (ADS)
Hou, Biao; Wen, Zaidao; Jiao, Licheng; Wu, Qian
2018-04-01
Sparsity-regularized synthetic aperture radar (SAR) imaging frameworks have shown remarkable performance in generating feature-enhanced high-resolution images, in which a sparsity-inducing regularizer exploits the sparsity priors of some visual features in the underlying image. However, since simple priors on low-level features are insufficient to describe the different semantic contents of the image, this type of regularizer cannot distinguish between the target of interest and unconcerned background clutter. As a consequence, the features belonging to the target and to the clutter are affected simultaneously in the generated image, without regard to their underlying semantic labels. To address this problem, we propose a novel semantic-information-guided framework for target-oriented SAR image formation, which aims at enhancing the target scatterers of interest while suppressing the background clutter. First, we develop a new semantics-specific regularizer for image formation by exploiting the statistical properties of different semantic categories in a target-scene SAR image. To infer the semantic label for each pixel in an unsupervised way, we further introduce a novel high-level prior-driven regularizer and some semantic causal rules derived from prior knowledge. Finally, our regularized framework for image formation is derived as a simple iteratively reweighted $\ell_1$ minimization problem which can be conveniently solved by many off-the-shelf solvers. Experimental results demonstrate the effectiveness and superiority of our framework for SAR image formation in terms of target enhancement and clutter suppression, compared with the state of the art. Additionally, the proposed framework opens a new direction of devoting machine learning strategies to image formation, which can benefit subsequent decision-making tasks.
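The iteratively reweighted $\ell_1$ machinery the abstract reduces to can be sketched generically: each outer pass solves a weighted $\ell_1$ problem by iterative soft-thresholding (ISTA), with weights 1/(|x|+eps) taken from the previous solution so that small coefficients are penalized more. This is a standard reweighted-$\ell_1$ sketch on a random sparse-recovery problem, not the paper's SAR-specific, semantics-weighted formulation; all parameters are assumptions.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1(A, y, lam=0.1, outer=5, inner=100, eps=0.1):
    """Iteratively reweighted l1 minimization (sketch): each outer pass is a
    weighted lasso solved by ISTA, with weights 1/(|x| + eps)."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(outer):
        w = 1.0 / (np.abs(x) + eps)        # small coefficients -> heavy penalty
        for _ in range(inner):
            grad = A.T @ (A @ x - y)
            x = soft(x - grad / L, lam * w / L)
    return x

# Noiseless sparse-recovery demo (stand-in for the image-formation problem)
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))          # measurement operator
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]     # sparse "scatterers"
x_hat = reweighted_l1(A, A @ x_true)
```

In the paper's framework the weights would come from the inferred per-pixel semantic labels (target versus clutter) rather than from |x| alone, but the solver structure is the same.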
Actively learning human gaze shifting paths for semantics-aware photo cropping.
Zhang, Luming; Gao, Yue; Ji, Rongrong; Xia, Yingjie; Dai, Qionghai; Li, Xuelong
2014-05-01
Photo cropping is a widely used tool in the printing industry, photography, and cinematography. Conventional cropping models suffer from three challenges. First, they deemphasize semantic content, which is many times more important than low-level features in photo aesthetics. Second, existing models lack a sequential ordering; in contrast, humans look at semantically important regions sequentially when viewing a photo. Third, it is difficult to leverage inputs from multiple users, and experience from multiple users is particularly critical in cropping, as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto the semantic space, which is constructed based on the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selection process can be interpreted as a path, which simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative of photo aesthetics than conventional saliency maps, and 2) the cropped photos produced by our approach outperform those of its competitors in both qualitative and quantitative comparisons.
AVIRIS data calibration information: Oquirrh and East Tintic mountains, Utah
Rockwell, Barnaby W.; Clark, Roger N.; Livo, K. Eric; McDougal, Robert R.; Kokaly, Raymond F.
2002-01-01
The information contained herein pertains to the original reflectance calibration derived solely from the Saltair beach site on the shores of Great Salt Lake. The reflectance data derived from this calibration becomes markedly affected by residual absorptions due to atmospheric water vapor and carbon dioxide within short horizontal and vertical distances from the calibration site due to the presence of what is believed to be a distinct microclimate by the lake. Subsequent to the development of this web site, a new reflectance calibration was derived which mitigated these effects. Reflectance spectra of bright areas of known composition in the East Tintic Mountains, far from Great Salt Lake, were sampled from the calibrated high altitude AVIRIS data cubes and edited, or "polished," to identify artifacts related to residual absorptions of atmospheric gases, particulates, and sensor noise. The subtle artifacts identified in this way were incorporated into the multiplier spectra derived from the original calibration site, generating new multiplier spectra that were used to re-calibrate the ATREM- and path radiance-corrected cubes to reflectance. This process generated a reflectance calibration customized for the Oquirrh/East Tintic Mountain region.
Ocean feature recognition using genetic algorithms with fuzzy fitness functions (GA/F3)
NASA Technical Reports Server (NTRS)
Ankenbrandt, C. A.; Buckles, B. P.; Petry, F. E.; Lybanon, M.
1990-01-01
A model for genetic algorithms with semantic nets is derived, in which the relationships between concepts are depicted as a semantic net. An organism represents the manner in which objects in a scene are attached to concepts in the net. Predicates between object pairs are continuous-valued truth functions in the form of an inverse exponential function, $e^{-\beta|x|}$. 1:n relationships are combined via the fuzzy OR (max(...)). Finally, predicates between pairs of concepts are resolved by taking the average of the combined predicate values of the objects attached to the concept at the tail of the arc representing the predicate in the semantic net. The method is illustrated by applying it to the identification of oceanic features in the North Atlantic.
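The three-stage fitness evaluation described above (inverse-exponential predicate truth, fuzzy OR over 1:n relationships, then averaging over the objects attached to the tail concept) can be sketched directly. The relation name, deviations, and beta value are invented for illustration; the abstract does not specify them.

```python
import math

def predicate_truth(deviation, beta=1.0):
    """Continuous-valued truth of a predicate between an object pair:
    inverse exponential of the deviation |x| from the expected relation."""
    return math.exp(-beta * abs(deviation))

def fuzzy_or(values):
    """1:n relationships are combined via the fuzzy OR (max)."""
    return max(values)

def concept_pair_fitness(attachments, beta=1.0):
    """Average the combined predicate values over the objects attached to
    the concept at the tail of the semantic-net arc."""
    combined = []
    for tail_object, deviations in attachments.items():
        truths = [predicate_truth(d, beta) for d in deviations]
        combined.append(fuzzy_or(truths))
    return sum(combined) / len(combined)

# Hypothetical "north_of" arc: per tail object, the deviations of each
# candidate partner object from the expected geometric relation.
pairs = {"eddy1": [0.0, 2.0], "eddy2": [0.5]}
fitness = concept_pair_fitness(pairs)
# eddy1 -> max(e^0, e^-2) = 1.0; eddy2 -> e^-0.5; fitness is their mean
```

A GA would then search over attachments of scene objects to net concepts, using this quantity (aggregated over all arcs) as the organism's fitness.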
Usage and applications of Semantic Web techniques and technologies to support chemistry research
Borkum, Mark I; Frey, Jeremy G
2014-01-01
Background The drug discovery process is now highly dependent on the management, curation and integration of large amounts of potentially useful data. Semantics are necessary in order to interpret the information and derive knowledge. Advances in recent years have mitigated concerns that the lack of robust, usable tools has inhibited the adoption of methodologies based on semantics. Results This paper presents three examples of how Semantic Web techniques and technologies can be used to support chemistry research: a controlled vocabulary for quantities, units and symbols in physical chemistry; a controlled vocabulary for the classification and labelling of chemical substances and mixtures; and a database of chemical identifiers. This paper also presents a Web-based service that uses these datasets to assist with the completion of risk assessment forms, along with a discussion of the legal implications and the value proposition for the use of such a service. Conclusions We have introduced the Semantic Web concepts, technologies, and methodologies that can be used to support chemistry research, and have demonstrated the application of those techniques in three areas very relevant to modern chemistry research, generating three new datasets that we offer as exemplars of an extensible portfolio of advanced data integration facilities. We have thereby established the importance of Semantic Web techniques and technologies for meeting Wild's fourth "grand challenge". PMID:24855494
Li, Qiao; Mark, Roger G; Clifford, Gari D
2009-01-01
Background Within the intensive care unit (ICU), arterial blood pressure (ABP) is typically recorded at different (and sometimes uneven) sampling frequencies, and from different sensors, and is often corrupted by different artifacts and noise which are often non-Gaussian, nonlinear and nonstationary. Extracting robust parameters from such signals, and providing confidences in the estimates is therefore difficult and requires an adaptive filtering approach which accounts for artifact types. Methods Using a large ICU database, and over 6000 hours of simultaneously acquired electrocardiogram (ECG) and ABP waveforms sampled at 125 Hz from a 437 patient subset, we documented six general types of ABP artifact. We describe a new ABP signal quality index (SQI), based upon the combination of two previously reported signal quality measures weighted together. One index measures morphological normality, and the other degradation due to noise. After extracting a 6084-hour subset of clean data using our SQI, we evaluated a new robust tracking algorithm for estimating blood pressure and heart rate (HR) based upon a Kalman Filter (KF) with an update sequence modified by the KF innovation sequence and the value of the SQI. In order to do this, we have created six novel models of different categories of artifacts that we have identified in our ABP waveform data. These artifact models were then injected into clean ABP waveforms in a controlled manner. Clinical blood pressure (systolic, mean and diastolic) estimates were then made from the ABP waveforms for both clean and corrupted data. The mean absolute error for systolic, mean and diastolic blood pressure was then calculated for different levels of artifact pollution to provide estimates of expected errors given a single value of the SQI. Results Our artifact models demonstrate that artifact types have differing effects on systolic, diastolic and mean ABP estimates. 
We show that, for most artifact types, diastolic ABP estimates are less noise-sensitive than mean ABP estimates, which in turn are more robust than systolic ABP estimates. We also show that our SQI can provide error bounds for both HR and ABP estimates. Conclusion The KF/SQI-fusion method described in this article was shown to provide an accurate estimate of blood pressure and HR derived from the ABP waveform even in the presence of high levels of persistent noise and artifact, and during extreme bradycardia and tachycardia. Differences in error between artifact types, measurement sensors and the quality of the source signal can be factored into physiological estimation using an unbiased adaptive filter, signal innovation and signal quality measures. PMID:19586547
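The KF/SQI fusion idea above can be sketched with a scalar Kalman filter in which the signal quality index modulates the update: a low SQI inflates the effective measurement noise, so artifact-corrupted samples barely move the estimate. This is a minimal illustration of the fusion principle, not the authors' algorithm; the random-walk state model, noise variances, and the 1/SQI inflation rule are assumptions.

```python
import numpy as np

def sqi_kalman(measurements, sqi, q=0.1, r=4.0):
    """Scalar Kalman filter with SQI-modulated updates (sketch): the
    measurement noise r is inflated when the signal quality index
    (SQI, in [0, 1]) is low, down-weighting poor-quality samples."""
    x, p = measurements[0], 1.0
    out = []
    for z, s in zip(measurements, sqi):
        p = p + q                          # predict (random-walk state model)
        r_eff = r / max(s, 1e-3)           # low SQI -> large effective noise
        k = p / (p + r_eff)                # Kalman gain
        x = x + k * (z - x)                # update
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# HR estimates (bpm) where samples 3-4 are a motion artifact flagged by low SQI
hr_true = 70.0
z = np.array([70, 71, 69, 120, 118, 70, 71, 69.0])
sqi = np.array([1, 1, 1, 0.05, 0.05, 1, 1, 1.0])
est = sqi_kalman(z, sqi)
```

During the artifact the gain collapses and the estimate stays near 70 bpm; the same mechanism applies to systolic, mean, and diastolic ABP tracking in the article.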
Variable horizon in a peridynamic medium
Silling, Stewart A.; Littlewood, David J.; Seleson, Pablo
2015-12-10
Here, a notion of material homogeneity is proposed for peridynamic bodies with variable horizon but constant bulk properties. A relation is derived that scales the force state according to the position-dependent horizon while keeping the bulk properties unchanged. Using this scaling relation, if the horizon depends on position, artifacts called ghost forces may arise in a body under a homogeneous deformation. These artifacts depend on the second derivative of the horizon and can be reduced by employing a modified equilibrium equation using a new quantity called the partial stress. Bodies with piecewise constant horizon can be modeled without ghost forces by using a simpler technique called a splice. As a limiting case of zero horizon, both the partial stress and splice techniques can be used to achieve local-nonlocal coupling. Computational examples, including dynamic fracture in a one-dimensional model with local-nonlocal coupling, illustrate the methods.
Detecting Inconsistencies in Multi-View Models with Variability
NASA Astrophysics Data System (ADS)
Lopez-Herrejon, Roberto Erick; Egyed, Alexander
Multi-View Modeling (MVM) is a common modeling practice that advocates the use of multiple, different and yet related models to represent the needs of diverse stakeholders. Of crucial importance in MVM is consistency checking - the description and verification of semantic relationships amongst the views. Variability is the capacity of software artifacts to vary, and its effective management is a core tenet of the research in Software Product Lines (SPL). MVM has proven useful for developing one-of-a-kind systems; however, to reap the potential benefits of MVM in SPL it is vital to provide consistency checking mechanisms that cope with variability. In this paper we describe how to address this need by applying Safe Composition - the guarantee that all programs of a product line are type safe. We evaluate our approach with a case study.
ERIC Educational Resources Information Center
Hilchey, Christian Thomas
2014-01-01
This dissertation examines prefixation of simplex pairs. A simplex pair consists of an iterative imperfective and a semelfactive perfective verb. When prefixed, both of these verbs are perfective. The prefixed forms derived from semelfactives are labeled single act verbs, while the prefixed forms derived from iterative imperfective simplex verbs…
Improved UTE-based attenuation correction for cranial PET-MR using dynamic magnetic field monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aitken, A. P.; Giese, D.; Tsoumpas, C.
2014-01-15
Purpose: Ultrashort echo time (UTE) MRI has been proposed as a way to produce segmented attenuation maps for PET, as it provides contrast between bone, air, and soft tissue. However, UTE sequences require samples to be acquired during rapidly changing gradient fields, which makes the resulting images prone to eddy current artifacts. In this work it is demonstrated that this can lead to misclassification of tissues in segmented attenuation maps (AC maps) and that these effects can be corrected for by measuring the true k-space trajectories using a magnetic field camera. Methods: The k-space trajectories during a dual echo UTE sequence were measured using a dynamic magnetic field camera. UTE images were reconstructed using nominal trajectories and again using the measured trajectories. A numerical phantom was used to demonstrate the effect of reconstructing with incorrect trajectories. Images of an ovine leg phantom were reconstructed and segmented and the resulting attenuation maps were compared to a segmented map derived from a CT scan of the same phantom, using the Dice similarity measure. The feasibility of the proposed method was demonstrated in in vivo cranial imaging in five healthy volunteers. Simulated PET data were generated for one volunteer to show the impact of misclassifications on the PET reconstruction. Results: Images of the numerical phantom exhibited blurring and edge artifacts on the bone–tissue and air–tissue interfaces when nominal k-space trajectories were used, leading to misclassification of soft tissue as bone and misclassification of bone as air. Images of the tissue phantom and the in vivo cranial images exhibited the same artifacts. The artifacts were greatly reduced when the measured trajectories were used. For the tissue phantom, the Dice coefficient for bone in MR relative to CT was 0.616 using the nominal trajectories and 0.814 using the measured trajectories.
The Dice coefficients for soft tissue were 0.933 and 0.934 for the nominal and measured cases, respectively. For air the corresponding figures were 0.991 and 0.993. Compared to an unattenuated reference image, the mean error in simulated PET uptake in the brain was 9.16% when AC maps derived from nominal trajectories were used, with errors in the SUVmax for simulated lesions in the range of 7.17%–12.19%. Corresponding figures when AC maps derived from measured trajectories were used were 0.34% (mean error) and −0.21% to +1.81% (lesions). Conclusions: Eddy current artifacts in UTE imaging can be corrected for by measuring the true k-space trajectories during a calibration scan and using them in subsequent image reconstructions. This improves the accuracy of segmented PET attenuation maps derived from UTE sequences and of the subsequent PET reconstruction.
Depeursinge, Adrien; Kurtz, Camille; Beaulieu, Christopher F.; Napel, Sandy; Rubin, Daniel L.
2014-01-01
We describe a framework to model visual semantics of liver lesions in CT images in order to predict the visual semantic terms (VST) reported by radiologists in describing these lesions. Computational models of VST are learned from image data using high–order steerable Riesz wavelets and support vector machines (SVM). The organization of scales and directions that are specific to every VST are modeled as linear combinations of directional Riesz wavelets. The models obtained are steerable, which means that any orientation of the model can be synthesized from linear combinations of the basis filters. The latter property is leveraged to model VST independently from their local orientation. In a first step, these models are used to predict the presence of each semantic term that describes liver lesions. In a second step, the distances between all VST models are calculated to establish a non–hierarchical computationally–derived ontology of VST containing inter–term synonymy and complementarity. A preliminary evaluation of the proposed framework was carried out using 74 liver lesions annotated with a set of 18 VSTs from the RadLex ontology. A leave–one–patient–out cross–validation resulted in an average area under the ROC curve of 0.853 for predicting the presence of each VST when using SVMs in a feature space combining the magnitudes of the steered models with CT intensities. Likelihood maps are created for each VST, which enables high transparency of the information modeled. The computationally–derived ontology obtained from the VST models was found to be consistent with the underlying semantics of the visual terms. It was found to be complementary to the RadLex ontology, and constitutes a potential method to link the image content to visual semantics. 
The proposed framework is expected to foster human–computer synergies for the interpretation of radiological images while using rotation–covariant computational models of VSTs to (1) quantify their local likelihood and (2) explicitly link them with pixel–based image content in the context of a given imaging domain. PMID:24808406
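The per-term evaluation above reports an area under the ROC curve for presence prediction. A minimal sketch of that statistic, computed from the rank (Mann-Whitney) formulation rather than any particular library, with invented labels and scores:

```python
# ROC AUC for binary presence predictions, as used to evaluate the
# visual-semantic-term classifiers. Labels and scores are invented.

def roc_auc(labels, scores):
    """Probability that a random positive outscores a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(roc_auc(labels, scores))  # one positive is outscored by one negative
```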
Perceptual video quality assessment in H.264 video coding standard using objective modeling.
Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu
2014-01-01
Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur, and jerkiness), in contrast to the bitrate-only calculation defined in the ITU G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by VQEG.
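The pooling step of such a no-reference model can be sketched as a weighted combination of impairment scores. This is not the paper's fitted model; the weights, the linear form, and the sample scores are all invented for illustration:

```python
# Pooling no-reference impairment scores (blockiness, blur, jerkiness)
# into a single perceptual quality estimate. Weights and inputs are
# invented; a real model would fit them against subjective ratings.

def quality_score(blockiness, blur, jerkiness, weights=(0.4, 0.35, 0.25)):
    """Map impairments in [0, 1] (0 = none) to quality in [0, 1]."""
    wb, wu, wj = weights
    degradation = wb * blockiness + wu * blur + wj * jerkiness
    return max(0.0, 1.0 - degradation)

# mildly blocky, slightly blurred, no jerkiness
print(round(quality_score(0.2, 0.1, 0.0), 3))  # → 0.885
```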
Kistner, Kelly
2014-12-01
Between 1838 and 1863 the Grimm brothers led a collaborative research project to create a new kind of dictionary documenting the history of the German language. They imagined the work would present a scientific account of linguistic cohesiveness and strengthen German unity. However, their dictionary volumes (most of which were arranged and written by Jacob Grimm) would be variously criticized for their idiosyncratic character and ultimately seen as a poor, and even prejudicial, piece of scholarship. This paper argues that such criticisms may reflect a misunderstanding of the dictionary. I claim it can be best understood as an artifact of romanticist science and its epistemological privileging of subjective perception coupled with a deeply-held faith in inter-subjective congruence. Thus situated, it is a rare and detailed case of Romantic ideas and ideals applied to the scientific study of social artifacts. Moreover, the dictionary's organization, reception, and legacy provide insights into the changing landscape of scientific practice in Germany, showcasing the difficulties of implementing a romanticist vision of science amidst widening gaps between the public and professionals, generalists and specialists.
Ryali, S; Glover, GH; Chang, C; Menon, V
2009-01-01
EEG data acquired in an MRI scanner are heavily contaminated by gradient artifacts that can significantly compromise signal quality. We developed two new methods based on Independent Component Analysis (ICA) for reducing gradient artifacts from spiral in-out and echo-planar pulse sequences at 3T, and compared our algorithms with four other commonly used methods: average artifact subtraction (Allen et al. 2000), principal component analysis (Niazy et al. 2005), Taylor series (Wan et al. 2006) and a conventional temporal ICA algorithm. Models of gradient artifacts were derived from simulations as well as a water phantom and performance of each method was evaluated on datasets constructed using visual event-related potentials (ERPs) as well as resting EEG. Our new methods recovered ERPs and resting EEG below the beta band (< 12.5 Hz) with high signal-to-noise ratio (SNR > 4). Our algorithms outperformed all of these methods on resting EEG in the theta- and alpha-bands (SNR > 4); however, for all methods, signal recovery was modest (SNR ~ 1) in the beta-band and poor (SNR < 0.3) in the gamma-band and above. We found that the conventional ICA algorithm performed poorly with uniformly low SNR (< 0.1). Taken together, our new ICA-based methods offer a more robust technique for gradient artifact reduction when scanning at 3T using spiral in-out and echo-planar pulse sequences. We provide new insights into the strengths and weaknesses of each method using a unified subspace framework. PMID:19580873
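The per-band SNR figures above compare a recovered signal against a known reference. One simple definition, reference power over residual power, can be sketched with invented toy signals:

```python
# SNR of a recovered signal relative to a clean reference, in the
# spirit of the per-band figures reported above. Toy values only.

def snr(reference, recovered):
    """Ratio of reference power to residual (error) power."""
    signal_power = sum(x * x for x in reference)
    noise_power = sum((x - y) ** 2 for x, y in zip(reference, recovered))
    return signal_power / noise_power

ref = [1.0, -2.0, 3.0, -1.0]
rec = [0.5, -1.5, 2.5, -0.5]  # reference plus a constant 0.5 error
print(snr(ref, rec))  # → 15.0
```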
2002-01-01
The National Elevation Dataset (NED) is a new raster product assembled by the U.S. Geological Survey. NED is designed to provide national elevation data in a seamless form with a consistent datum, elevation unit, and projection. Data corrections were made in the NED assembly process to minimize artifacts, perform edge matching, and fill sliver areas of missing data. NED has a resolution of one arc-second (approximately 30 meters) for the conterminous United States, Hawaii, Puerto Rico, and the island territories and a resolution of two arc-seconds for Alaska. NED data sources have a variety of elevation units, horizontal datums, and map projections. In the NED assembly process the elevation values are converted to decimal meters as a consistent unit of measure, NAD83 is consistently used as the horizontal datum, and all the data are recast in a geographic projection. Older DEMs produced by methods that are now obsolete have been filtered during the NED assembly process to minimize artifacts that are commonly found in data produced by these methods. Artifact removal greatly improves the quality of the slope, shaded-relief, and synthetic drainage information that can be derived from the elevation data. Figure 2 illustrates the results of this artifact removal filtering. NED processing also includes steps to adjust values where adjacent DEMs do not match well, and to fill sliver areas of missing data between DEMs. These processing steps ensure that NED has no void areas and that artificial discontinuities have been minimized. The artifact removal filtering process does not eliminate all of the artifacts. In areas where the only available DEM was produced by older methods, "striping" may still occur.
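The unit-normalization step described above, converting source DEM elevations to decimal meters, can be sketched as follows. The conversion factor is the standard international foot; the sample values are invented:

```python
# Normalizing DEM elevation values to decimal meters, as done during
# NED assembly. Sample values are invented for illustration.

FEET_TO_METERS = 0.3048  # international foot, exact by definition

def normalize_elevations(values, unit):
    """Return elevations in decimal meters, rounded to millimeters."""
    if unit == "feet":
        return [round(v * FEET_TO_METERS, 3) for v in values]
    if unit == "meters":
        return list(values)
    raise ValueError("unknown elevation unit: %s" % unit)

print(normalize_elevations([100.0, 5280.0], "feet"))  # → [30.48, 1609.344]
```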
Intelligent artifact classification for ambulatory physiological signals.
Sweeney, Kevin T; Leamy, Darren J; Ward, Tomas E; McLoone, Sean
2010-01-01
Connected health represents an increasingly important model for health-care delivery. The concept is heavily reliant on technology and in particular on remote physiological monitoring. One of the principal challenges is the maintenance of high-quality data streams, which must be collected with minimally intrusive, inexpensive sensor systems operating in difficult conditions. Ambulatory monitoring is among the most demanding signal acquisition scenarios of all, in that data are collected as the patient engages in normal activities of everyday living. Data thus collected suffer considerable corruption from artifact, much of it motion-induced, and this has a bearing on their utility for diagnostic purposes. We propose a model for ambulatory signal recording in which the collected data are accompanied by labeling indicating the quality of the collected signal. As motion is such an important source of artifact, we demonstrate the concept in this case with a quality-of-signal measure derived from motion sensing technology, viz. accelerometers. We further demonstrate how different types of artifact might be tagged to inform artifact reduction signal processing elements during subsequent signal analysis. This is demonstrated through the use of multiple accelerometers, which allow the algorithm to distinguish between disturbance of the sensor relative to the underlying tissue and movement of this tissue. A brain monitoring experiment utilizing EEG and fNIRS is used to illustrate the concept.
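The labeling idea can be sketched as a simple threshold on co-located accelerometer energy: epochs with high motion energy are tagged as likely artifact. The threshold and data below are invented, and a real system would use richer features than a single RMS value:

```python
# Tagging signal-quality per epoch from co-located accelerometry.
# Threshold and RMS values are invented for illustration.

def tag_epochs(accel_rms, threshold=0.5):
    """Label each epoch from its accelerometer RMS motion energy."""
    return ["artifact" if rms > threshold else "clean"
            for rms in accel_rms]

rms_per_epoch = [0.1, 0.7, 0.2, 1.3]
print(tag_epochs(rms_per_epoch))
# → ['clean', 'artifact', 'clean', 'artifact']
```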
Automated reference-free detection of motion artifacts in magnetic resonance images.
Küstner, Thomas; Liebgott, Annika; Mauch, Lukas; Martirosian, Petros; Bamberg, Fabian; Nikolaou, Konstantin; Yang, Bin; Schick, Fritz; Gatidis, Sergios
2018-04-01
Our objectives were to provide an automated method for spatially resolved detection and quantification of motion artifacts in MR images of the head and abdomen as well as a quality control of the trained architecture. T1-weighted MR images of the head and the upper abdomen were acquired in 16 healthy volunteers under rest and under motion. Images were divided into overlapping patches of different sizes achieving spatial separation. Using these patches as input data, a convolutional neural network (CNN) was trained to derive probability maps for the presence of motion artifacts. A deep visualization offers a human-interpretable quality control of the trained CNN. Results were visually assessed on probability maps and as classification accuracy on a per-patch, per-slice and per-volunteer basis. On visual assessment, a clear difference of probability maps was observed between data sets with and without motion. The overall accuracy of motion detection on a per-patch/per-volunteer basis reached 97%/100% in the head and 75%/100% in the abdomen, respectively. Automated detection of motion artifacts in MRI is feasible with good accuracy in the head and abdomen. The proposed method provides quantification and localization of artifacts as well as a visualization of the learned content. It may be extended to other anatomic areas and used for quality assurance of MR images.
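The patch-division step above, splitting an image into overlapping patches so that artifact probabilities can be spatially resolved, can be sketched in 2-D. Patch size and stride here are invented parameters, not those of the study:

```python
# Dividing an image into overlapping patches, as done before feeding
# patches to the CNN. Patch size and stride are invented parameters.

def patch_starts(length, patch, stride):
    """Start indices of overlapping 1-D windows covering [0, length)."""
    starts = list(range(0, max(length - patch, 0) + 1, stride))
    if starts and starts[-1] + patch < length:
        starts.append(length - patch)  # final window flush with the edge
    return starts

def extract_patches(image, patch=4, stride=2):
    """Top-left (row, col) coordinates of every patch in a 2-D image."""
    rows = patch_starts(len(image), patch, stride)
    cols = patch_starts(len(image[0]), patch, stride)
    return [(r, c) for r in rows for c in cols]

img = [[0] * 6 for _ in range(6)]
print(extract_patches(img))  # → [(0, 0), (0, 2), (2, 0), (2, 2)]
```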
Frequency-Wavenumber (FK)-Based Data Selection in High-Frequency Passive Surface Wave Survey
NASA Astrophysics Data System (ADS)
Cheng, Feng; Xia, Jianghai; Xu, Zongbo; Hu, Yue; Mi, Binbin
2018-04-01
Passive surface wave methods have gained much attention from geophysical and civil engineering communities because of the limited application of traditional seismic surveys in highly populated urban areas. Considering that they can provide high-frequency phase velocity information up to several tens of Hz, the active surface wave survey could be omitted and the amount of field work dramatically reduced. However, the measured dispersion energy image in a passive surface wave survey is usually polluted by a type of "crossed" artifact at high frequencies. This is common in the bidirectional noise distribution case, with a linear receiver array deployed along roads or railways. We review several frequently used passive surface wave methods and derive the underlying physics for the existence of the "crossed" artifacts. We prove that the "crossed" artifacts cross the true surface wave energy at fixed points in the f-v domain and propose an FK-based data selection technique to attenuate the artifacts in order to retrieve the high-frequency information. Numerical tests further demonstrate the existence of the "crossed" artifacts and indicate that the well-known wave field separation method, the FK filter, does not work for the selection of directional noise data. Real-world applications demonstrate the feasibility of the proposed FK-based technique to improve passive surface wave methods by a priori data selection. Finally, we discuss the applicability of our approach.
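The physical basis for FK-based selection is that propagation direction maps to a quadrant pairing in the frequency-wavenumber plane. A minimal numpy sketch with a synthetic plane wave (all sizes and frequencies invented, chosen to land on integer FFT bins):

```python
# A plane wave's travel direction determines the sign pairing of its
# frequency and wavenumber peaks in the f-k domain, which is what an
# FK-based directional selection mask exploits. Synthetic toy data.

import numpy as np

nt, nx = 64, 32
t = np.arange(nt)[:, None]
x = np.arange(nx)[None, :]
# plane wave traveling in +x; f = 8/64 and k = 4/32 hit integer bins
u = np.cos(2 * np.pi * (8 * t / nt - 4 * x / nx))

U = np.abs(np.fft.fft2(u))                       # f-k amplitude spectrum
f_idx, k_idx = np.unravel_index(np.argmax(U), U.shape)
f = np.fft.fftfreq(nt)[f_idx]
k = np.fft.fftfreq(nx)[k_idx]
# under numpy's FFT sign convention, this travel direction puts the
# peaks at opposite-signed (f, k); the opposite direction flips the sign
print(f * k < 0)  # → True
```

A wave traveling in the opposite direction would place its peaks in the mirrored quadrants, so zeroing one half of the wavenumber axis selects one propagation direction before inverse transforming.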
Satellite Derived Volcanic Ash Product Inter-Comparison in Support to SCOPE-Nowcasting
NASA Astrophysics Data System (ADS)
Siddans, Richard; Thomas, Gareth; Pavolonis, Mike; Bojinski, Stephan
2016-04-01
In support of aeronautical meteorological services, WMO organized a satellite-based volcanic ash retrieval algorithm inter-comparison activity, to improve the consistency of quantitative volcanic ash products from satellites, under the Sustained, Coordinated Processing of Environmental Satellite Data for Nowcasting (SCOPE-Nowcasting) initiative (http://www.wmo.int/pages/prog/sat/scope-nowcasting_en.php). The aims of the intercomparison were as follows: 1. Select cases (Sarychev Peak 2009, Eyjafjallajökull 2010, Grímsvötn 2011, Puyehue-Cordón Caulle 2011, Kirishimayama 2011, Kelut 2014), and quantify the differences between satellite-derived volcanic ash cloud properties derived from different techniques and sensors; 2. Establish a basic validation protocol for satellite-derived volcanic ash cloud properties; 3. Document the strengths and weaknesses of different remote sensing approaches as a function of satellite sensor; 4. Standardize the units and quality flags associated with volcanic cloud geophysical parameters; 5. Provide recommendations to Volcanic Ash Advisory Centers (VAACs) and other users on how best to utilize quantitative satellite products in operations; 6. Create a "road map" for future volcanic ash related scientific developments and inter-comparison/validation activities that can also be applied to SO2 clouds and emergent volcanic clouds. Volcanic ash satellite remote sensing experts from operational and research organizations were encouraged to participate in the inter-comparison activity, to establish the plans for the inter-comparison and to submit data sets. RAL was contracted by EUMETSAT to perform a systematic inter-comparison of all submitted datasets, and results were reported at the WMO International Volcanic Ash Inter-comparison Meeting held on 29 June - 2 July 2015 in Madison, WI, USA (http://cimss.ssec.wisc.edu/meetings/vol_ash14). 
26 different data sets were submitted, from a range of passive imagers and spectrometers and these were inter-compared against each other and against validation data such as CALIPSO lidar, ground-based lidar and aircraft observations. Results of the comparison exercise will be presented together with the conclusions and recommendations arising from the activity.
NASA Astrophysics Data System (ADS)
Strohmeier, Dominik; Kunze, Kristina; Göbel, Klemens; Liebetrau, Judith
2013-01-01
Assessing audiovisual Quality of Experience (QoE) is a key element to ensure quality acceptance of today's multimedia products. The use of descriptive evaluation methods allows evaluating QoE preferences and the underlying QoE features jointly. From our previous evaluations on QoE for mobile 3D video we found that mainly one dimension, video quality, dominates the descriptive models. Large variations of the visual video quality in the tests may be the reason for these findings. A new study was conducted to investigate whether test sets of low QoE are described differently than those of high audiovisual QoE. Reanalysis of previous data sets seems to confirm this hypothesis. Our new study consists of a pre-test and a main test, using the Descriptive Sorted Napping method. Data sets of good-only and bad-only video quality were evaluated separately. The results show that the perception of bad QoE is mainly determined one-dimensionally by visual artifacts, whereas the perception of good quality shows multiple dimensions. Here, mainly semantic-related features of the content and affective descriptors are used by the naïve test participants. The results show that, with increasing QoE of audiovisual systems, content semantics and users' affective involvement will become important for assessing QoE differences.
Visual context modulates potentiation of grasp types during semantic object categorization.
Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J
2014-06-01
Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.
Frauscher, Birgit; Gabelia, David; Biermayr, Marlene; Stefani, Ambra; Hackner, Heinz; Mitterling, Thomas; Poewe, Werner; Högl, Birgit
2014-10-01
Rapid eye movement sleep without atonia (RWA) is the polysomnographic hallmark of REM sleep behavior disorder (RBD). To partially overcome the disadvantages of manual RWA scoring, which is time consuming but essential for the accurate diagnosis of RBD, we aimed to validate software specifically developed and integrated with polysomnography for RWA detection against the gold standard of manual RWA quantification. Polysomnographic recordings of 20 patients with RBD and 60 healthy volunteers, obtained in an academic referral center sleep laboratory, were analyzed. Motor activity during REM sleep was quantified both manually and computer-assisted (with and without artifact detection) according to the Sleep Innsbruck Barcelona (SINBAR) criteria for the mentalis ("any," phasic, tonic electromyographic [EMG] activity) and the flexor digitorum superficialis (FDS) muscle (phasic EMG activity). Computer-derived indices (with and without artifact correction) for "any," phasic, and tonic mentalis EMG activity, phasic FDS EMG activity, and the SINBAR index ("any" mentalis + phasic FDS) correlated well with the manually derived indices (all Spearman rhos 0.66-0.98). In contrast with computerized scoring alone, computerized scoring plus manual artifact correction (median duration 5.4 min) led to a significant reduction of false positives for "any" mentalis (40%), phasic mentalis (40.6%), and the SINBAR index (41.2%). Quantification of tonic mentalis and phasic FDS EMG activity was not influenced by artifact correction. The computer algorithm used here appears to be a promising tool for REM sleep behavior disorder detection in both research and clinical routine. A short check for plausibility of the automatic detection should be a basic prerequisite for this and all other available computer algorithms. © 2014 Associated Professional Sleep Societies, LLC.
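The validation statistic above is the Spearman rank correlation between manual and computer-derived indices. A pure-Python sketch with averaged ranks for ties; the index values below are invented, not study data:

```python
# Spearman rank correlation (Pearson correlation of ranks, with
# averaged ranks for ties), as used to compare manual and
# computer-derived EMG activity indices. Values are invented.

def ranks(values):
    """Ranks starting at 1; tied values get the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

manual = [12.0, 30.5, 7.2, 55.0, 20.1]
computed = [10.5, 33.0, 8.0, 22.5, 49.0]
print(round(spearman(manual, computed), 3))  # → 0.6
```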
Effects of diurnal adjustment on biases and trends derived from inter-sensor calibrated AMSU-A data
NASA Astrophysics Data System (ADS)
Chen, H.; Zou, X.; Qin, Z.
2018-03-01
Measurements of brightness temperatures from Advanced Microwave Sounding Unit-A (AMSU-A) temperature sounding instruments onboard NOAA Polar-orbiting Operational Environmental Satellites (POES) have been extensively used for studying atmospheric temperature trends over the past several decades. Inter-sensor biases, orbital drifts, and diurnal variations of atmospheric and surface temperatures must be considered before using a merged long-term time series of AMSU-A measurements from NOAA-15, -18, -19 and MetOp-A. We study the impacts of the orbital drift and orbital differences of local equator crossing times (LECTs) on temperature trends derivable from AMSU-A using near-nadir observations from NOAA-15, NOAA-18, NOAA-19, and MetOp-A during 1998-2014 over the Amazon rainforest. The double difference method is first applied to estimate inter-sensor biases between any two satellites during their overlapping time period. The inter-calibrated observations are then used to generate a monthly mean diurnal cycle of brightness temperature for each AMSU-A channel. A diurnal correction is finally applied to each channel to obtain AMSU-A data valid at the same local time. Impacts of the inter-sensor bias correction and the diurnal correction on the AMSU-A-derived long-term atmospheric temperature trends are separately quantified and compared with those derived from the original data. It is shown that the orbital drift and the differences of LECT among different POESs induce a large uncertainty in AMSU-A-derived long-term warming/cooling trends. After applying an inter-sensor bias correction and a diurnal correction, the warming trends at different local times, which are approximately the same, are about half the size of the trends derived without applying these corrections.
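One simplified realization of the double difference idea: subtracting a common reference from each satellite's observations cancels the shared geophysical signal, and differencing the results over the overlap period leaves the relative inter-sensor bias. All values below are invented toy brightness temperatures:

```python
# Double-difference estimate of the relative bias between two sensors
# over an overlap period. The common reference (e.g. a model or mean
# state) cancels the shared signal. Toy values, invented.

def double_difference(obs_a, obs_b, ref_a, ref_b):
    """Mean of (O_A - R_A) - (O_B - R_B) over matched observations."""
    single_a = [o - r for o, r in zip(obs_a, ref_a)]
    single_b = [o - r for o, r in zip(obs_b, ref_b)]
    diffs = [a - b for a, b in zip(single_a, single_b)]
    return sum(diffs) / len(diffs)

# toy brightness temperatures (K) during an overlap period
obs_sat_a = [250.2, 251.1, 249.8]
obs_sat_b = [250.6, 251.6, 250.2]
ref = [250.0, 251.0, 249.7]
print(round(double_difference(obs_sat_a, obs_sat_b, ref, ref), 2))  # → -0.43
```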
Johansson, Adam; Balter, James; Cao, Yue
2018-03-01
Respiratory motion can affect pharmacokinetic perfusion parameters quantified from liver dynamic contrast-enhanced MRI. Image registration can be used to align dynamic images after reconstruction. However, intra-image motion blur remains after alignment and can alter the shape of contrast-agent uptake curves. We introduce a method to correct for inter- and intra-image motion during image reconstruction. Sixteen liver dynamic contrast-enhanced MRI examinations of nine subjects were performed using a golden-angle stack-of-stars sequence. For each examination, an image time series with high temporal resolution but severe streak artifacts was reconstructed. Images were aligned using region-limited rigid image registration within a region of interest covering the liver. The transformations resulting from alignment were used to correct raw data for motion by modulating and rotating acquired lines in k-space. The corrected data were then reconstructed using view sharing. Portal-venous input functions extracted from motion-corrected images had significantly greater peak signal enhancements (mean increase: 16%, t-test, P < 0.001) than those from images aligned using image registration after reconstruction. In addition, portal-venous perfusion maps estimated from motion-corrected images showed fewer artifacts close to the edge of the liver. Motion-corrected image reconstruction restores uptake curves distorted by motion. Motion correction also reduces motion artifacts in estimated perfusion parameter maps. Magn Reson Med 79:1345-1353, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
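The "modulating acquired lines in k-space" step rests on the Fourier shift theorem: a rigid translation of the object corresponds to a linear phase ramp applied to its k-space samples. A minimal 1-D numpy sketch on an invented toy profile (the study's actual correction also handles rotations, which rotate k-space lines):

```python
# Fourier shift theorem: multiplying k-space by a linear phase ramp
# translates the reconstructed object, which is how acquired lines can
# be corrected for estimated rigid motion. Toy 1-D profile, invented.

import numpy as np

n = 8
profile = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0])

shift = 2                      # estimated motion, in samples
k = np.fft.fftfreq(n)          # spatial frequencies in cycles/sample
kspace = np.fft.fft(profile)
# the phase ramp exp(-2πi k s) shifts the profile by +s samples
corrected = np.real(np.fft.ifft(kspace * np.exp(-2j * np.pi * k * shift)))

print(np.round(corrected, 6))  # profile circularly shifted right by 2
```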
Safi, Yaser; Aghdasi, Mohammad Mehdi; Ezoddini-Ardakani, Fatemeh; Beiraghi, Samira; Vasegh, Zahra
2015-01-01
Vertical root fracture (VRF) is common in endodontically treated teeth. Conventional and digital radiographies have limitations for detection of VRFs. Cone-beam computed tomography (CBCT) offers greater detection accuracy of VRFs in comparison with conventional radiography. This study compared the effects of metal artifacts on detection of VRFs by using two CBCT systems. Eighty extracted premolars were selected and sectioned at the level of the cemento enamel junction (CEJ). After preparation, root canals were filled with gutta-percha. Subsequently, two thirds of the root fillings were removed for post space preparation and a custom-made post was cemented into each canal. The teeth were randomly divided into two groups (n=40). In the test group, root fracture was created with Instron universal testing machine. The control teeth remained intact. CBCT scans of all teeth were obtained with either New Tom VGI or Soredex Scanora 3D. Three observers analyzed the images for detection of VRF. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for VRF detection and percentage of probable cases were calculated for each imaging system and compared using non-parametric tests considering the non-normal distribution of data. The inter-observer reproducibility was calculated using the weighted kappa coefficient. There were no statistically significant differences in sensitivity, specificity, PPV and NPV between the two CBCT systems. The effect of metal artifacts on VRF detection was not significantly different between the two CBCT systems.
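The accuracy measures compared above follow directly from a confusion matrix of fracture calls against ground truth. A sketch with invented counts (not the study's data):

```python
# Sensitivity, specificity, PPV, and NPV from a confusion matrix of
# VRF detections vs. ground truth. Counts are invented for illustration.

def diagnostic_measures(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # fraction of fractures detected
        "specificity": tn / (tn + fp),  # fraction of intact teeth cleared
        "ppv": tp / (tp + fp),          # positive calls that are correct
        "npv": tn / (tn + fn),          # negative calls that are correct
    }

m = diagnostic_measures(tp=32, fp=6, fn=8, tn=34)
print({name: round(v, 3) for name, v in m.items()})
```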
Combined use of iterative reconstruction and monochromatic imaging in spinal fusion CT images.
Wang, Fengdan; Zhang, Yan; Xue, Huadan; Han, Wei; Yang, Xianda; Jin, Zhengyu; Zwar, Richard
2017-01-01
Spinal fusion surgery is an important procedure for treating spinal diseases and computed tomography (CT) is a critical tool for postoperative evaluation. However, CT image quality is considerably impaired by metal artifacts and image noise. To explore whether metal artifacts and image noise can be reduced by combining two technologies, adaptive statistical iterative reconstruction (ASIR) and monochromatic imaging generated by gemstone spectral imaging (GSI) dual-energy CT. A total of 51 patients with 318 spinal pedicle screws were prospectively scanned by dual-energy CT using fast kV-switching GSI between 80 and 140 kVp. Monochromatic GSI images at 110 keV were reconstructed either without or with various levels of ASIR (30%, 50%, 70%, and 100%). The quality of five sets of images was objectively and subjectively assessed. With objective image quality assessment, metal artifacts decreased when increasing levels of ASIR were applied (P < 0.001). Moreover, adding ASIR to GSI also decreased image noise (P < 0.001) and improved the signal-to-noise ratio (P < 0.001). The subjective image quality analysis showed good inter-reader concordance, with intra-class correlation coefficients between 0.89 and 0.99. The visualization of peri-implant soft tissue was improved at higher ASIR levels (P < 0.001). Combined use of ASIR and GSI decreased image noise and improved image quality in post-spinal fusion CT scans. Optimal results were achieved with ASIR levels ≥70%. © The Foundation Acta Radiologica 2016.
NASA Astrophysics Data System (ADS)
Alani, Harith; Szomszor, Martin; Cattuto, Ciro; van den Broeck, Wouter; Correndo, Gianluca; Barrat, Alain
Social interactions are one of the key factors to the success of conferences and similar community gatherings. This paper describes a novel application that integrates data from the semantic web, online social networks, and a real-world contact sensing platform. This application was successfully deployed at ESWC09, and actively used by 139 people. Personal profiles of the participants were automatically generated using several Web 2.0 systems and semantic academic data sources, and integrated in real-time with face-to-face contact networks derived from wearable sensors. Integration of all these heterogeneous data layers made it possible to offer various services to conference attendees to enhance their social experience such as visualisation of contact data, and a site to explore and connect with other participants. This paper describes the architecture of the application, the services we provided, and the results we achieved in this deployment.
NASA Technical Reports Server (NTRS)
Kwadrat, Carl F.; Horne, William D.; Edwards, Bernard L.
2002-01-01
In order to avoid selecting inadequate inter-spacecraft cross-link communications standards for Distributed Spacecraft System (DSS) missions, it is first necessary to identify cross-link communications strategies and requirements common to a cross-section of proposed missions. This paper addresses the cross-link communication strategies and requirements derived from a survey of 39 DSS mission descriptions that are projected for potential launch within the next 20 years. The inter-spacecraft communications strategies presented are derived from the topological and communications constraints from the DSS missions surveyed. Basic functional requirements are derived from an analysis of the fundamental activities that must be undertaken to establish and maintain a cross-link between two DSS spacecraft. Cross-link bandwidth requirements are derived from high-level assessments of mission science objectives and operations concepts. Finally, a preliminary assessment of possible cross-link standards is presented within the context of the basic operational and interoperability requirements.
Investigation of the halo-artifact in 68Ga-PSMA-11-PET/MRI.
Heußer, Thorsten; Mann, Philipp; Rank, Christopher M; Schäfer, Martin; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Hadaschik, Boris A; Kopka, Klaus; Bachert, Peter; Kachelrieß, Marc; Freitag, Martin T
2017-01-01
Combined positron emission tomography (PET) and magnetic resonance imaging (MRI) targeting the prostate-specific membrane antigen (PSMA) with a 68Ga-labelled PSMA-analog (68Ga-PSMA-11) is discussed as a promising diagnostic method for patients with suspicion or history of prostate cancer. One potential drawback of this method is severe photopenic (halo-) artifacts surrounding the bladder and the kidneys in the scatter-corrected PET images, which have been reported to occur frequently in clinical practice. The goal of this work was, first, to investigate the occurrence and impact of these artifacts and, second, to evaluate variants of the standard scatter correction method with regard to halo-artifact suppression. Experiments using a dedicated pelvis phantom were conducted to investigate whether the halo-artifact is modality-, tracer-, and/or concentration-dependent. Furthermore, 31 patients with a history of prostate cancer were selected from an ongoing 68Ga-PSMA-11-PET/MRI study. For each patient, PET raw data were reconstructed employing six different variants of PET scatter correction: absolute scatter scaling, relative scatter scaling, and relative scatter scaling combined with prompt gamma correction, each of which was combined with a maximum scatter fraction (MaxSF) of MaxSF = 75% or MaxSF = 40%. Evaluation of the reconstructed images with regard to halo-artifact suppression was performed both quantitatively using statistical analysis and qualitatively by two independent readers. The phantom experiments did not reveal any modality-dependency (PET/MRI vs. PET/CT) or tracer-dependency (68Ga vs. 18F-FDG). Patient- and phantom-based data indicated that halo-artifacts derive from high organ-to-background activity ratios (OBR) between bladder/kidneys and surrounding soft tissue, with a positive correlation between OBR and halo size.
Among the different variants of scatter correction compared, reducing the maximum scatter fraction from the default value MaxSF = 75% to MaxSF = 40% was found to efficiently suppress halo-artifacts in both phantom and patient data. In 1 of 31 patients, reducing the maximum scatter fraction provided new PET-based information changing the patient's diagnosis. Halo-artifacts are particularly observed for 68Ga-PSMA-11-PET/MRI due to 1) the biodistribution of the PSMA-11-tracer, which results in large OBRs for bladder and kidneys, and 2) the inaccurate scatter correction methods currently used in clinical routine, which tend to overestimate the scatter contribution. If not compensated for, 68Ga-PSMA-11 uptake pathologies may be masked by halo-artifacts, leading to false-negative diagnoses. Reducing the maximum scatter fraction was found to efficiently suppress halo-artifacts.
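The intuition behind a maximum scatter fraction can be sketched schematically: an estimated scatter contribution is prevented from exceeding a fixed fraction of the measured prompt counts, which limits the over-subtraction that produces photopenic halos. The per-bin cap below is a simplified illustration of that idea only, not the scanner vendor's actual scatter-scaling algorithm, and all values are hypothetical.

```python
def cap_scatter(prompts, scatter_est, max_sf):
    """Schematic scatter capping: per sinogram bin, clip the scatter
    estimate so it never exceeds max_sf * prompts. This illustrates the
    role of a maximum scatter fraction, not a vendor implementation."""
    return [min(s, max_sf * p) for p, s in zip(prompts, scatter_est)]

# Hypothetical bins: an overestimated scatter value (90 of 100 prompts)
# is clipped at MaxSF = 40%, while a plausible one (30) is left alone.
print(cap_scatter([100, 100], [90, 30], 0.4))
```

A lower cap (MaxSF = 40% rather than 75%) constrains overestimated scatter more aggressively, which is consistent with the halo suppression reported above.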
Carlesimo, Giovanni A; Bonanni, Rita; Caltagirone, Carlo
2003-05-01
This study investigated the hypothesis that brain-damaged patients with memory disorder are poorer at remembering the semantic than the perceptual attributes of information. Eight patients with memory impairment of different etiology and 24 patients with chronic consequences of severe closed-head injury were compared to similarly sized age- and literacy-matched normal control groups on recognition tests for the physical aspect and the semantic identity of word and picture lists. In order to avoid interpretative problems deriving from different absolute levels of performance, study conditions were manipulated across subjects to obtain comparable accuracy on the perceptual recognition tests in the memory-disordered and control groups. The results of the Picture Recognition test were consistent with the hypothesis. Indeed, when given more time for stimulus encoding, the two memory-disordered groups performed at the same level as the normal subjects on the perceptual test but significantly lower on the semantic test. On the Word Recognition test, by contrast, following study condition manipulation, patients and controls performed similarly on both the perceptual and the semantic tests. These data only partially support the hypothesis of the study; rather, they suggest that in memory-disordered patients there is a reduction of the advantage, exhibited by normal controls, of retrieving pictures over words (the picture superiority effect).
The semantics of prosody: acoustic and perceptual evidence of prosodic correlates to word meaning.
Nygaard, Lynne C; Herold, Debora S; Namy, Laura L
2009-01-01
This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners either attempted to identify the novel words with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language. Copyright © 2009 Cognitive Science Society, Inc.
Pantazatos, Spiro P.; Li, Jianrong; Pavlidis, Paul; Lussier, Yves A.
2009-01-01
An approach towards heterogeneous neuroscience dataset integration is proposed that uses Natural Language Processing (NLP) and a knowledge-based phenotype organizer system (PhenOS) to link ontology-anchored terms to underlying data from each database, and then maps these terms based on a computable model of disease (SNOMED CT®). The approach was implemented using sample datasets from fMRIDC, GEO, The Whole Brain Atlas and Neuronames, and allowed for complex queries such as “List all disorders with a finding site of brain region X, and then find the semantically related references in all participating databases based on the ontological model of the disease or its anatomical and morphological attributes”. Precision of the NLP-derived coding of the unstructured phenotypes in each dataset was 88% (n = 50), and precision of the semantic mapping between these terms across datasets was 98% (n = 100). To our knowledge, this is the first example of the use of both semantic decomposition of disease relationships and hierarchical information found in ontologies to integrate heterogeneous phenotypes across clinical and molecular datasets. PMID:20495688
MESUR: USAGE-BASED METRICS OF SCHOLARLY IMPACT
DOE Office of Scientific and Technical Information (OSTI.GOV)
BOLLEN, JOHAN; RODRIGUEZ, MARKO A.; VAN DE SOMPEL, HERBERT
2007-01-30
The evaluation of scholarly communication items is now largely a matter of expert opinion or metrics derived from citation data. Both approaches can fail to take into account the myriad of factors that shape scholarly impact. Usage data has emerged as a promising complement to existing methods of assessment, but the formal groundwork to reliably and validly apply usage-based metrics of scholarly impact is lacking. The Andrew W. Mellon Foundation-funded MESUR project constitutes a systematic effort to define, validate and cross-validate a range of usage-based metrics of scholarly impact by creating a semantic model of the scholarly communication process. The constructed model will serve as the basis for creating a large-scale semantic network that seamlessly relates citation, bibliographic and usage data from a variety of sources. A subsequent program that uses the established semantic network as a reference data set will determine the characteristics and semantics of a variety of usage-based metrics of scholarly impact. This paper outlines the architecture and methodology adopted by the MESUR project and its future direction.
Wang, Hsueh-Cheng; Hsu, Li-Chuan; Tien, Yi-Min; Pomplun, Marc
2013-01-01
The morphological constituents of English compounds (e.g., “butter” and “fly” for “butterfly”) and two-character Chinese compounds may differ in meaning from the whole word. Subjective differences and ambiguity of transparency make the judgments difficult, and a computational alternative based on a general model may be a way to average across subjective differences. The current study proposes two approaches based on Latent Semantic Analysis (Landauer & Dumais, 1997): Model 1 compares the semantic similarity between a compound word and each of its constituents, and Model 2 derives the dominant meaning of a constituent based on a clustering analysis of morphological family members (e.g., “butterfingers” or “buttermilk” for “butter”). The proposed models successfully predicted participants’ transparency ratings, and we recommend that experimenters use Model 1 for English compounds and Model 2 for Chinese compounds, due to raters’ morphological processing in different writing systems. The dominance of lexical meaning, semantic transparency, and the average similarity between all pairs within a morphological family are provided, and practical applications for future studies are discussed. PMID:23784009
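Model 1's comparison of a compound word with each of its constituents reduces, in an LSA framework, to cosine similarity between word vectors. The sketch below uses hypothetical three-dimensional vectors purely for illustration (real LSA spaces typically have hundreds of dimensions, and these toy numbers are not from the study).

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical low-dimensional "LSA" vectors for illustration only
vec = {
    "butterfly": [0.9, 0.1, 0.2],
    "butter":    [0.1, 0.9, 0.1],
    "fly":       [0.8, 0.2, 0.3],
}

# Model-1-style transparency proxy: compound vs. each constituent
sim_butter = cosine(vec["butterfly"], vec["butter"])
sim_fly = cosine(vec["butterfly"], vec["fly"])
print(sim_butter, sim_fly)
```

Under such a scheme, a constituent that is semantically opaque within the compound (as "butter" is in "butterfly") would receive a lower compound-constituent cosine than a transparent one.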
Lexical Semantics and Irregular Inflection
Huang, Yi Ting; Pinker, Steven
2010-01-01
Whether a word has an irregular inflection does not depend on its sound alone: compare lie-lay (recline) and lie-lied (prevaricate). Theories of morphology, particularly connectionist and symbolic models, disagree on which nonphonological factors are responsible. We test four possibilities: (1) Lexical effects, in which two lemmas differ in whether they specify an irregular form; (2) Semantic effects, in which the semantic features of a word become associated with regular or irregular forms; (3) Morphological structure effects, in which a word with a headless structure (e.g., a verb derived from a noun) blocks access to a stored irregular form; (4) Compositionality effects, in which the stored combination of an irregular word’s meaning (e.g., the verb’s inherent aspect) with the meaning of the inflection (e.g., pastness) doesn’t readily transfer to new senses with different combinations of such meanings. In four experiments, speakers were presented with existing and novel verbs and asked to rate their past-tense forms, semantic similarities, grammatical structure, and aspectual similarities. We found (1) an interaction between semantic and phonological similarity, coinciding with reported strategies of analogizing to known verbs and implicating lexical effects; (2) weak and inconsistent effects of semantic similarity; (3) robust effects of morphological structure, and (4) robust effects of aspectual compositionality. Results are consistent with theories of language that invoke lexical entries and morphological structure, and which differentiate the mode of storage of regular and irregular verbs. They also suggest how psycholinguistic processes have shaped vocabulary structure over history. PMID:21151703
Commer, Michael; Doetsch, Joseph; Dafflon, Baptiste; ...
2016-06-01
In this study, we advance the understanding of three-dimensional (3-D) electrical resistivity tomography (ERT) for monitoring long-term CO2 storage by analyzing two previously published field time-lapse data sets. We address two important aspects of ERT inversion: the issue of resolution decay, a general impediment to the ERT method, and the issue of potentially misleading imaging artifacts due to 2-D model assumptions. The first study analyzes data from a shallow dissolved-CO2 injection experiment near Escatawpa (Mississippi), where ERT data were collected in a 3-D crosswell configuration. Here, we apply a focusing approach designed for crosswell configurations to counteract resolution loss in the inter-wellbore area, with synthetic studies demonstrating its effectiveness. The 3-D field data analysis reveals an initially southwards-trending flow path development and a dispersing plume development in the downgradient inter-well region. The second data set was collected during a deep (over 3 km) injection of supercritical CO2 near Cranfield (Mississippi). Comparative 2-D and 3-D inversions reveal the projection of off-planar anomalies onto the cross-section, a typical artifact introduced by 2-D model assumptions. Conforming 3-D images from two different algorithms support earlier hydrological investigations, indicating a conduit system where flow velocity variations lead to a circumvention of a close observation well and an onset of increased CO2 saturation downgradient from this well. We relate lateral permeability variations indicated by an independently obtained hydrological analysis to this consistently observed pattern in the CO2 spatial plume evolution.
ERIC Educational Resources Information Center
Redouane, Rabia
2007-01-01
This study investigates L2 learners' use of French derivational processes and their strategies as they form agent nouns. It also attempts to find out which of the acquisitional principles (conventionality, semantic transparency, formal simplicity, and productivity) advanced by Clark (1993, 2003) for various L1s acquisition of word formation…
Measuring behavior in mice with chronic stress depression paradigm.
Strekalova, Tatyana; Steinbusch, Harry W M
2010-03-17
Many studies with chronic stress, a common depression paradigm, lead to inconsistent behavioral results. We introduce a new model of stress-induced anhedonia, which provides more reproducible induction and behavioral measurement of a depressive-like phenotype in mice. First, a 4-week stress procedure induces anhedonia, defined by decreased sucrose preference, in the majority of but not all C57BL/6 mice. The remaining 30-50% non-anhedonic animals are used as an internal control for stress effects that are unrelated to anhedonia. Next, a modified sucrose test enables the detection of inter-individual differences in mice. Moreover, testing under dimmed lighting precludes behavioral artifacts caused by hyperlocomotion, a major confounding factor in stressed mice. Finally, moderation of the stress load increases the reproducibility of anhedonia induction, which otherwise is difficult to achieve because of inter-batch variability in laboratory mice. We believe that our new mouse model overcomes some major difficulties in measuring behavior with chronic stress depression models. Copyright 2009 Elsevier Inc. All rights reserved.
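The anhedonia criterion above is based on sucrose preference, conventionally computed as sucrose intake divided by total fluid intake. The sketch below uses an illustrative 65% cut-off to classify animals; that threshold is an assumption for the example and may differ from the paper's exact criterion.

```python
def sucrose_preference(sucrose_ml, water_ml):
    """Percent sucrose preference: sucrose intake / total fluid intake * 100."""
    return 100.0 * sucrose_ml / (sucrose_ml + water_ml)

def is_anhedonic(preference_pct, threshold=65.0):
    """Classify a mouse as anhedonic below a preference threshold.
    65% is an illustrative cut-off, not necessarily the paper's value."""
    return preference_pct < threshold

# Hypothetical intakes: 3 ml sucrose solution vs. 2 ml water
pref = sucrose_preference(3.0, 2.0)
print(pref, is_anhedonic(pref))
```

Splitting a stressed cohort by such a threshold yields the anhedonic group and the non-anhedonic internal-control group described in the abstract.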
Hurren, A; Hildreth, A J; Carding, P N
2009-12-01
To investigate the inter- and intra-rater reliability of raters (in relation to both profession and expertise) when judging two alaryngeal voice parameters: 'Overall Grade' and 'Neoglottal Tonicity'. Reliable perceptual assessment is essential for surgical and therapeutic outcome measurement but has been minimally researched to date. Test of inter- and intra-rater agreement from audio recordings of 55 tracheoesophageal speakers. Cancer Unit. Twelve speech and language therapists and ten Ear, Nose and Throat surgeons. Perceptual voice parameters of 'Overall Grade' rated with a 0-3 equally appearing interval scale and 'Neoglottal Tonicity' with an 11-point bipolar semantic scale. All raters achieved 'good' agreement for 'Overall Grade', with mean weighted kappa coefficients of 0.78 for intra- and 0.70 for inter-rater agreement. All raters achieved 'good' intra-rater agreement for 'Neoglottal Tonicity' (0.64), but inter-rater agreement was only 'moderate' (0.40). However, the expert speech and language therapist sub-group attained 'good' inter-rater agreement with this parameter (0.63). The effect of 'Neoglottal Tonicity' on 'Overall Grade' was examined utilising only the expert speech and language therapists' data. Linear regression analysis resulted in an r-squared coefficient of 0.67. Analysis of the perceptual impression of hypotonicity and hypertonicity in relation to mean 'Overall Grade' score demonstrated that neither tone was linked to a more favourable grade (P = 0.42). Expert speech and language therapist raters may be the optimal judges for tracheoesophageal voice assessment. Tonicity appears to be a good predictor of 'Overall Grade'. These scales have clinical applicability to investigate techniques that facilitate optotonic neoglottal voice quality.
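The agreement figures above are weighted kappa coefficients, which credit near-misses on an ordinal scale more than distant disagreements. A minimal sketch of a linearly weighted kappa for two raters follows; the abstract does not specify the study's exact weighting scheme, so the linear weights here are an assumption for illustration.

```python
def weighted_kappa(r1, r2, n_cats):
    """Linearly weighted kappa for two raters on an ordinal 0..n_cats-1 scale.
    Disagreement between categories i and j is weighted by |i - j|/(n_cats - 1);
    kappa = 1 - observed weighted disagreement / chance-expected disagreement."""
    n = len(r1)
    w = lambda i, j: abs(i - j) / (n_cats - 1)
    obs = sum(w(a, b) for a, b in zip(r1, r2)) / n
    # Marginal category proportions for each rater (chance model)
    p1 = [r1.count(k) / n for k in range(n_cats)]
    p2 = [r2.count(k) / n for k in range(n_cats)]
    exp = sum(p1[i] * p2[j] * w(i, j)
              for i in range(n_cats) for j in range(n_cats))
    return 1.0 - obs / exp

# Hypothetical 0-3 'Overall Grade' ratings from two raters
print(weighted_kappa([0, 1, 2, 3, 1, 2], [0, 1, 2, 3, 2, 2], 4))
```

Perfect agreement yields kappa = 1, chance-level agreement yields 0, and a single one-step disagreement lowers kappa only modestly, which is the point of weighting.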
Reliability of fMRI for Studies of Language in Post-Stroke Aphasia Subjects
Eaton, Kenneth P.; Szaflarski, Jerzy P.; Altaye, Mekibib; Ball, Angel L.; Kissela, Brett M.; Banks, Christi; Holland, Scott K.
2008-01-01
Quantifying change in brain activation patterns associated with post-stroke recovery and reorganization of language function over time requires accurate understanding of inter-scan and inter-subject variability. Here we report inter-scan variability measures for fMRI activation patterns associated with verb generation (VG) and semantic decision/tone decision (SDTD) tasks in 4 healthy controls and 4 aphasic left middle cerebral artery (LMCA) stroke subjects. A series of 10 fMRI scans was completed on a 4T Varian scanner for each task for each subject, except for one stroke subject who completed 5 and 6 scans for SDTD and VG, thus yielding 35 and 36 total stroke subject scans for SDTD and VG, respectively. Group composite and intraclass correlation coefficient (ICC) maps were computed across all subjects and trials for each task. The patterns of reliable activation for the VG and SDTD tasks correspond well to those regions typically activated by these tasks in healthy and aphasic subjects. ICCs for activation were consistently high (R0.05 ≈ 0.8) for individual tasks among both control and aphasic subjects. These voxel-wise measures of reliability highlight regions of low inter-scan variability within language circuitry for control and post-recovery stroke subjects. ICCs computed from the combination of the SDTD/VG data were markedly reduced for both control and aphasic subjects as compared with the ICCs for the individual tasks. These quantitative measures of inter-scan variability support the proposed use of these fMRI paradigms for longitudinal mapping of neural reorganization of language processing following left hemispheric insult. PMID:18411061
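The ICC maps above quantify how consistent activation is across repeated scans. As an illustration of the statistic's logic (not necessarily the exact ICC variant used in the study), a one-way random-effects ICC(1,1) can be computed from repeated measurements per subject: it contrasts between-subject variance with within-subject (scan-to-scan) variance.

```python
from statistics import mean

def icc_oneway(data):
    """One-way random-effects ICC(1,1). `data` is a list of subjects,
    each a list of k repeated measurements (e.g., one value per scan)."""
    n = len(data)
    k = len(data[0])
    grand = mean(v for row in data for v in row)
    row_means = [mean(row) for row in data]
    # Between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(data, row_means) for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical per-scan activation values for three subjects, two scans each
print(icc_oneway([[1.0, 1.2], [2.0, 2.1], [3.0, 2.9]]))
```

When repeated scans of a subject agree perfectly while subjects differ, the ICC approaches 1, matching the high (≈0.8) values reported for the individual tasks.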
Morphological processing with deficient phonological short-term memory.
Kavé, Gitit; Ze'ev, Hagit Bar; Lev, Anita
2007-07-01
This paper investigates the processing of Hebrew derivational morphology in an individual (S.E.) with deficient phonological short-term memory. In comparison to 10 age- and education-matched men, S.E. was impaired on digit span tasks and demonstrated no recency effect in word list recall. S.E. had low word retention span, but he exhibited phonological similarity and word length effects. His ability to make lexical decisions was intact. In a paired-associate test S.E. successfully learned semantically and morphologically related pairs but not phonologically related pairs, and his learning of nonwords was facilitated by the presence of Hebrew consonant roots. Semantic and morphological similarity enhanced immediate word recall. Results show that S.E. is capable of conducting morphological decomposition of Hebrew-derived words despite his phonological deficit, suggesting that transient maintenance of morphological constituents is independent of temporary storage and rehearsal of phonological codes, and that each is processed separately within short-term memory.
NASA Astrophysics Data System (ADS)
Martin, E. R.; Dou, S.; Lindsey, N.; Chang, J. P.; Biondi, B. C.; Ajo Franklin, J. B.; Wagner, A. M.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Williams, E. F.
2016-12-01
Localized strong sources of noise in an array have been shown to cause artifacts in Green's function estimates obtained via cross-correlation. Their effect is often reduced through the use of cross-coherence. Beyond independent localized sources, temporally or spatially correlated sources of noise frequently occur in practice but violate basic assumptions of much of the theory behind ambient noise Green's function retrieval. These correlated noise sources can occur in urban environments due to transportation infrastructure, or in areas around industrial operations like pumps running at CO2 sequestration sites or oil and gas drilling sites. Better understanding of these artifacts should help us develop and justify methods for their automatic removal from Green's function estimates. We derive expected artifacts in cross-correlations from several distributions of correlated noise sources including point sources that are exact time-lagged repeats of each other and Gaussian-distributed in space and time with covariance that exponentially decays. Assuming the noise distribution stays stationary over time, the artifacts become more coherent as more ambient noise is included in the Green's function estimates. We support our results with simple computational models. We observed these artifacts in Green's function estimates from a 2015 ambient noise study in Fairbanks, AK where a trenched distributed acoustic sensing (DAS) array was deployed to collect ambient noise alongside a road with the goal of developing a permafrost thaw monitoring system. We found that joints in the road repeatedly being hit by cars travelling at roughly the speed limit led to artifacts similar to those expected when several points are time-lagged copies of each other. We also show test results of attenuating the effects of these sources during time-lapse monitoring of an active thaw test in the same location with noise detected by a 2D trenched DAS array.
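The contrast drawn above between cross-correlation and cross-coherence comes down to spectral normalization: coherence divides the cross-spectrum by the amplitude spectra of both traces, whitening the contribution of strong localized sources. A minimal frequency-domain sketch with a naive DFT and toy signals follows (real workflows use FFTs and long noise records).

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for tiny illustrative signals)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def cross_correlation_spectrum(a, b):
    """Frequency-domain cross-correlation: A(f) * conj(B(f)).
    Strong sources dominate because amplitudes multiply."""
    return [fa * fb.conjugate() for fa, fb in zip(dft(a), dft(b))]

def cross_coherence_spectrum(a, b, eps=1e-12):
    """Cross-coherence: the cross-spectrum normalized by both amplitude
    spectra, so every frequency bin has magnitude at most 1 (whitening)."""
    return [fa * fb.conjugate() / (abs(fa) * abs(fb) + eps)
            for fa, fb in zip(dft(a), dft(b))]

a, b = [1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]
print([abs(z) for z in cross_coherence_spectrum(a, b)])
```

Because the coherence magnitude is bounded by 1 in every bin, a loud repeating source (such as the road-joint impacts described above) cannot dominate the estimate the way it does in raw cross-correlation, though correlated sources still violate the uncorrelated-noise assumption either way.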
CHAMPION: Intelligent Hierarchical Reasoning Agents for Enhanced Decision Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohimer, Ryan E.; Greitzer, Frank L.; Noonan, Christine F.
2011-11-15
We describe the design and development of an advanced reasoning framework employing semantic technologies, organized within a hierarchy of computational reasoning agents that interpret domain-specific information. Designed around a metaphor inspired by the pattern-recognition functions performed by the human neocortex, the CHAMPION reasoning framework represents a new computational modeling approach that derives invariant knowledge representations through memory-prediction belief propagation processes driven by formal ontological language specification and semantic technologies. The CHAMPION framework shows promise for enhancing complex decision making in diverse problem domains including cyber security, nonproliferation and energy consumption analysis.
Utilizing semantic networks to database and retrieve generalized stochastic colored Petri nets
NASA Technical Reports Server (NTRS)
Farah, Jeffrey J.; Kelley, Robert B.
1992-01-01
Previous work introduced the Planning Coordinator (PCOORD), a coordinator functioning within the hierarchy of the Intelligent Machine Model. Within the structure of the Planning Coordinator resides the Primitive Structure Database (PSDB), which provides the primitive structures utilized by the Planning Coordinator in establishing error recovery or on-line path plans. This report further explores the Primitive Structure Database and establishes the potential of utilizing semantic networks as a means of efficiently storing and retrieving the Generalized Stochastic Colored Petri Nets from which the error recovery plans are derived.
Supervised pixel classification using a feature space derived from an artificial visual system
NASA Technical Reports Server (NTRS)
Baxter, Lisa C.; Coggins, James M.
1991-01-01
Image segmentation involves labelling pixels according to their membership in image regions. This requires an understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. The multiscale structure of regions is investigated, and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not on image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.
Deriving semantic structure from category fluency: clustering techniques and their pitfalls
Voorspoels, Wouter; Storms, Gert; Longenecker, Julia; Verheyen, Steven; Weinberger, Daniel R.; Elvevåg, Brita
2013-01-01
Assessing verbal output in category fluency tasks provides a sensitive indicator of cortical dysfunction. The most common metrics are the overall number of words produced and the number of errors. Two main observations have been made about the structure of the output, first that there is a temporal component to it with words being generated in spurts, and second that the clustering pattern may reflect a search for meanings such that the ‘clustering’ is attributable to the activation of a specific semantic field in memory. A number of sophisticated approaches to examining the structure of this clustering have been developed, and a core theme is that the similarity relations between category members will reveal the mental semantic structure of the category underlying an individual’s responses, which can then be visualized by a number of algorithms, such as MDS, hierarchical clustering, ADDTREE, ADCLUS or SVD. Such approaches have been applied to a variety of neurological and psychiatric populations, and the general conclusion has been that the clinical condition systematically distorts the semantic structure in the patients, as compared to the healthy controls. In the present paper we explore this approach to understanding semantic structure using category fluency data. On the basis of a large pool of patients with schizophrenia (n=204) and healthy control participants (n=204), we find that the methods are problematic and unreliable to the extent that it is not possible to conclude that any putative difference reflects a systematic difference between the semantic representations in patients and controls. Moreover, taking into account the unreliability of the methods, we find that the most probable conclusion to be made is that no difference in underlying semantic representation exists. The consequences of these findings to understanding semantic structure, and the use of category fluency data, in cortical dysfunction are discussed. PMID:24275165
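The approaches critiqued above begin by deriving a word-by-word similarity matrix from fluency output, often by counting how frequently two words are generated within a small window of each other, before feeding it to MDS or clustering. The sketch below shows one common construction of such a matrix; it is an illustrative convention, not the paper's exact metric, and the word lists are hypothetical.

```python
def proximity_similarity(lists, window=2):
    """Derive pairwise word similarity from category-fluency output:
    two words generated within `window` positions of each other in a
    participant's sequence count as one co-occurrence."""
    sim = {}
    for seq in lists:
        for i, w1 in enumerate(seq):
            for j in range(i + 1, min(i + 1 + window, len(seq))):
                pair = frozenset((w1, seq[j]))
                if len(pair) == 2:  # ignore repetitions of the same word
                    sim[pair] = sim.get(pair, 0) + 1
    return sim

# Hypothetical fluency sequences from two participants
lists = [["cat", "dog", "horse"], ["dog", "cat", "cow"]]
print(proximity_similarity(lists))
```

The paper's point is that matrices built this way from realistic amounts of fluency data are noisy enough that downstream MDS or clustering solutions become unreliable, so group differences in the derived "semantic structure" should be interpreted with caution.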
Structure at every scale: A semantic network account of the similarities between unrelated concepts.
De Deyne, Simon; Navarro, Daniel J; Perfors, Amy; Storms, Gert
2016-09-01
Similarity plays an important role in organizing the semantic system. However, given that similarity cannot be defined on purely logical grounds, it is important to understand how people perceive similarities between different entities. Despite this, the vast majority of studies focus on measuring similarity between very closely related items. When considering concepts that are very weakly related, little is known. In this article, we present 4 experiments showing that there are reliable and systematic patterns in how people evaluate the similarities between very dissimilar entities. We present a semantic network account of these similarities, showing that a spreading activation mechanism defined over a word association network naturally makes correct predictions about weak similarities, whereas simpler models based on direct neighbors between word pairs, derived using the same network, cannot. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
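The spreading-activation mechanism can be sketched directly: activation injected at a source word diffuses over a weighted association graph, so even words with no direct link to the source receive small but systematic activation, which is what lets such a model predict weak similarities. The toy network, weights, and decay parameter below are hypothetical, chosen only to illustrate the mechanism.

```python
def spread_activation(graph, source, steps=2, decay=0.5):
    """Spread activation from `source` over a weighted word-association
    graph: at each step, every active node passes decayed activation
    to its associates (activation accumulates additively)."""
    act = {source: 1.0}
    for _ in range(steps):
        nxt = dict(act)
        for node, a in act.items():
            for nbr, w in graph.get(node, {}).items():
                nxt[nbr] = nxt.get(nbr, 0.0) + decay * a * w
        act = nxt
    return act

# Hypothetical association network with illustrative weights
graph = {
    "teacher": {"school": 0.8, "student": 0.6},
    "school": {"student": 0.5, "bus": 0.3},
    "student": {"book": 0.4},
}
print(spread_activation(graph, "teacher"))
```

After two steps, "book" receives activation from "teacher" despite the absence of a direct edge; a direct-neighbor model would assign that pair zero similarity, which is the contrast the article draws.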
Entropy is more resistant to artifacts than bispectral index in brain-dead organ donors.
Wennervirta, Johanna; Salmi, Tapani; Hynynen, Markku; Yli-Hankala, Arvi; Koivusalo, Anna-Maria; Van Gils, Mark; Pöyhiä, Reino; Vakkuri, Anne
2007-01-01
To evaluate the usefulness of entropy and the bispectral index (BIS) in brain-dead subjects. A prospective, open, nonselective, observational study in a university hospital. 16 brain-dead organ donors. Time-domain electroencephalography (EEG), spectral entropy of the EEG, and BIS were recorded during solid organ harvest. State entropy differed significantly from 0 (isoelectric EEG) for 28% of the total recorded time, response entropy for 29%, and BIS for 68%. The median values during the operation were state entropy 0.0, response entropy 0.0, and BIS 3.0. In four of the 16 organ donors studied the EEG was not isoelectric, and nonreactive rhythmic activity was noted in time-domain EEG. After excluding the results from subjects with persistent residual EEG activity, state entropy, response entropy, and BIS values differed from zero for 17%, 18%, and 62% of the recorded time, respectively. Median values were 0.0, 0.0, and 2.0 for state entropy, response entropy, and BIS, respectively. The highest index values in entropy and BIS monitoring were recorded without neuromuscular blockade. The main sources of artifacts were electrocauterization, 50-Hz artifact, handling of the donor, ballistocardiography, electromyography, and electrocardiography. Both entropy and BIS showed nonzero values due to artifacts after brain death diagnosis. BIS was more liable to artifacts than entropy. Neither of these indices is a diagnostic tool, and care should be taken when interpreting EEG and EEG-derived indices in the evaluation of brain death.
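Both monitored indices are, at their core, derived from the spectral entropy of the EEG: broadband, noise-like activity yields a high value and a single dominant rhythm a low one. The function below is a simplified normalized spectral entropy; the commercial state/response entropy and BIS indices involve proprietary windowing, band selection, and scaling not reproduced here.

```python
import numpy as np

def spectral_entropy(signal, fs, fmin=0.8, fmax=32.0):
    """Normalized Shannon entropy of the power spectrum within a band.

    Returns ~1.0 for a flat (noise-like) spectrum and ~0.0 when power is
    concentrated in a single frequency bin.
    """
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= fmin) & (freqs <= fmax)
    p = psd[band]
    p = p / p.sum()
    h = -np.sum(p * np.log(p + 1e-12))
    return h / np.log(len(p))            # normalize to [0, 1]
```

The frequency band limits above are illustrative assumptions, not the monitors' actual specifications.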
Scalable and expressive medical terminologies.
Mays, E; Weida, R; Dionne, R; Laker, M; White, B; Liang, C; Oles, F J
1996-01-01
The K-Rep system, based on description logic, is used to represent and reason with large and expressive controlled medical terminologies. Expressive concept descriptions incorporate semantically precise definitions composed using logical operators, together with important non-semantic information such as synonyms and codes. Examples are drawn from our experience with K-Rep in modeling the InterMed laboratory terminology and also in developing a large clinical terminology now in production use at Kaiser Permanente. System-level scalability of performance is achieved through an object-oriented database system which efficiently maps persistent memory to virtual memory. Equally important is conceptual scalability: the ability to support collaborative development, organization, and visualization of a substantial terminology as it evolves over time. K-Rep addresses this need by logically completing concept definitions and automatically classifying concepts in a taxonomy via subsumption inferences. The K-Rep system includes a general-purpose GUI environment for terminology development and browsing, a custom interface for formulary term maintenance, a C++ application programming interface, and a distributed client-server mode which provides lightweight clients with efficient run-time access to K-Rep by means of a scripting language.
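The automatic classification K-Rep performs rests on subsumption between concept definitions. The following is a deliberately minimal structural-subsumption sketch for purely conjunctive definitions; K-Rep's actual description-logic reasoner handles far richer constructors, and the role and concept names here are invented for illustration.

```python
def subsumes(general, specific):
    """Structural subsumption for conjunctive concepts: a concept described by
    a set of (role, filler) restrictions subsumes any concept whose
    description contains at least those restrictions."""
    return set(general) <= set(specific)

def classify(taxonomy, name, description):
    """Add a concept to the taxonomy and return the names of every existing
    concept that subsumes it (its inferred ancestors)."""
    parents = [existing for existing, desc in taxonomy.items()
               if subsumes(desc, description)]
    taxonomy[name] = set(description)
    return parents
```

Classifying "Antibiotic" (a substance that treats infection) under a previously defined "Drug" (a substance) illustrates the inference: the more general definition's restrictions are a subset of the more specific one's.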
For a new look at 'lexical errors': evidence from semantic approximations with verbs in aphasia.
Duvignau, Karine; Tran, Thi Mai; Manchon, Mélanie
2013-08-01
The ability to understand the similarity between two phenomena is fundamental for humans. Designated by the term analogy in psychology, this ability plays a role in the categorization of phenomena in the world and in the organisation of the linguistic system. The use of analogy in language often results in non-standard utterances, particularly in speakers with aphasia. These non-standard utterances are almost always studied in a nominal context and considered as errors. We propose a study of the verbal lexicon and present findings that measure, by means of an action-video naming task, the importance of verb-based non-standard utterances made by 17 speakers with aphasia ("la dame déshabille l'orange"/the lady undresses the orange, "elle casse la tomate"/she breaks the tomato). The first results we have obtained allow us to consider this type of utterance from a new perspective: we propose to eliminate the label of "error", suggesting that such utterances may be viewed as semantic approximations based on a relationship of inter-domain synonymy, anchored at the heart of the lexical system.
Biochemical transformation of lignin for deriving valued commodities from lignocellulose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gall, Daniel L.; Ralph, John; Donohue, Timothy J.
2017-03-24
The biochemical properties of lignin present major obstacles to deriving societally beneficial entities from lignocellulosic biomass, an abundant and renewable feedstock. Similar to other biopolymers such as polysaccharides, polypeptides, and ribonucleic acids, lignin polymers are derived from multiple types of monomeric units. However, lignin’s renowned recalcitrance is largely attributable to its racemic nature and the variety of covalent inter-unit linkages through which its aromatic monomers are linked. Indeed, unlike other biopolymers whose monomers are consistently inter-linked by a single type of covalent bond, the monomeric units in lignin are linked via non-enzymatic, combinatorial radical coupling reactions that give rise to a variety of inter-unit covalent bonds in mildly branched racemic polymers. Yet, despite the chemical complexity and stability of lignin, significant strides have been made in recent years to identify routes through which valued commodities can be derived from it. This paper discusses emerging biological and biochemical means through which degradation of lignin to aromatic monomers can lead to the derivation of commercially valuable products.
ERIC Educational Resources Information Center
Harmon, Ronald M.
1994-01-01
Examines the process through which modern Portuguese borrows from other languages, mainly French and English. Portuguese adapts these derivatives to conform to its own rules of phonology, morphology, and semantics.
Methods for calculating the electrode position Jacobian for impedance imaging.
Boyle, A; Crabb, M G; Jehl, M; Lionheart, W R B; Adler, A
2017-03-01
Electrical impedance tomography (EIT) and electrical resistivity tomography (ERT) apply currents and measure voltages at the boundary of a domain through electrodes. The movement or incorrect placement of electrodes may lead to modelling errors that result in significant reconstructed image artifacts. These errors may be accounted for by allowing for electrode position estimates in the model. Movement may be reconstructed through a first-order approximation, the electrode position Jacobian. A reconstruction that incorporates electrode position estimates along with conductivity can significantly reduce image artifacts. Conversely, if electrode position is ignored it can be difficult to distinguish true conductivity changes from reconstruction artifacts, which may increase the risk of a flawed interpretation. In this work, we aim to determine the fastest, most accurate approach for estimating the electrode position Jacobian. Four methods of calculating the electrode position Jacobian were evaluated on a homogeneous halfspace. Results show that Fréchet derivative and rank-one update methods are competitive in computational efficiency but achieve different solutions for certain values of contact impedance and mesh density.
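The baseline against which faster methods are compared is the perturbation approach: perturb one electrode coordinate, re-run the forward model, and difference the measurements. A generic finite-difference sketch, with a toy forward model standing in for a real EIT solver, might look like:

```python
import numpy as np

def position_jacobian(forward, positions, delta=1e-6):
    """Finite-difference Jacobian of measurements w.r.t. electrode coordinates.

    forward: maps an (n_elec, 2) electrode position array to a measurement
    vector. Returns an (n_meas, n_elec * 2) matrix. This is the slow
    perturbation baseline; Frechet-derivative and rank-one-update methods
    are the faster alternatives studied in the paper (not shown here).
    """
    base = forward(positions)
    flat = positions.ravel()
    J = np.zeros((base.size, flat.size))
    for j in range(flat.size):
        pert = flat.copy()
        pert[j] += delta
        J[:, j] = (forward(pert.reshape(positions.shape)) - base) / delta
    return J
```

Each column requires a full forward solve, which is why the cheaper analytic approximations matter in practice.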
Reynoso, Exequiel; Capunay, Carlos; Rasumoff, Alejandro; Vallejos, Javier; Carpio, Jimena; Lago, Karen; Carrascosa, Patricia
2016-01-01
The aim of this study was to explore the usefulness of combined virtual monochromatic imaging and metal artifact reduction software (MARS) for the evaluation of musculoskeletal periprosthetic tissue. Measurements were performed in periprosthetic and remote regions in 80 patients using a high-definition scanner. Polychromatic images with and without MARS and virtual monochromatic images were obtained. Periprosthetic polychromatic imaging (PI) showed significant differences compared with remote areas among the 3 tissues explored (P < 0.0001). No significant differences were observed between periprosthetic and remote tissues using monochromatic imaging with MARS (P = 0.053 bone, P = 0.32 soft tissue, and P = 0.13 fat). However, such differences were significant using PI with MARS among bone (P = 0.005) and fat (P = 0.02) tissues. All periprosthetic areas were noninterpretable using PI, compared with 11 (9%) using monochromatic imaging. The combined use of virtual monochromatic imaging and MARS reduced periprosthetic artifacts, achieving attenuation levels comparable to implant-free tissue.
Howard, Jeffrey L; Olszewska, Dorota
2011-03-01
An urban soil chronosequence in downtown Detroit, MI was studied to determine the effects of time on pedogenesis and heavy metal sequestration. The soils developed in fill derived from mixed sandy and clayey diamicton parent materials on a level late Pleistocene lakebed plain under grass vegetation in a humid-temperate (mesic) climate. The chronosequence is comprised of soils in vacant lots (12 and 44 years old) and parks (96 and 120 years old), all located within 100 m of a roadway. An A-horizon 16 cm thick with 2% organic matter has developed after only 12 years of pedogenesis. The 12 year-old soil shows accelerated weathering of iron (e.g. nails) and cement artifacts attributed to corrosion by excess soluble salts of uncertain origin. Carbonate and Fe-oxide are immobilizing agents for heavy metals, hence it is recommended that drywall, plaster, cement and iron artifacts be left in soils at brownfield sites for their ameliorating effects. Copyright © 2010 Elsevier Ltd. All rights reserved.
The emotion potential of words and passages in reading Harry Potter--an fMRI study.
Hsu, Chun-Ting; Jacobs, Arthur M; Citron, Francesca M M; Conrad, Markus
2015-03-01
Previous studies suggested that the emotional connotation of single words automatically recruits attention. We investigated the potential of words to induce emotional engagement when reading texts. In an fMRI experiment, we presented 120 text passages from the Harry Potter book series. Results showed significant correlations between affective word (lexical) ratings and passage ratings. Furthermore, affective lexical ratings correlated with activity in regions associated with emotion, situation model building, multi-modal semantic integration, and Theory of Mind. We distinguished differential influences of affective lexical, inter-lexical, and supra-lexical variables: differential effects of lexical valence were significant in the left amygdala, while effects of arousal-span (the dynamic range of arousal across a passage) were significant in the left amygdala and insula. However, we found no differential effect of passage ratings in emotion-associated regions. Our results support the hypothesis that the emotion potential of short texts can be predicted by lexical and inter-lexical affective variables. Copyright © 2015 Elsevier Inc. All rights reserved.
The agent-based spatial information semantic grid
NASA Astrophysics Data System (ADS)
Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren
2006-10-01
Analyzing the characteristics of multi-agent systems and geographic ontology, we define the concept of the Agent-based Spatial Information Semantic Grid (ASISG) and present its architecture. The ASISG is composed of multi-agent systems and a geographic ontology. The multi-agent systems comprise User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, Task Execution Agents, and Monitor Agents. The architecture of the ASISG has three layers: the fabric layer, the grid management layer, and the application layer. The fabric layer, which is composed of Data Access Agents, Resource Agents, and Geo-Agents, encapsulates the data of spatial information systems and exposes a conceptual interface to the grid management layer. The grid management layer, which is composed of the General Ontology Agent, Task Execution Agents, Monitor Agents, and Data Analysis Agents, uses a hybrid method to manage all resources registered with the General Ontology Agent, which is described by a general ontology system. The hybrid method combines resource dissemination and resource discovery: dissemination pushes resources from Local Ontology Agents to the General Ontology Agent, while discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a specific domain and describes the semantic information of a local GIS. Local Ontology Agents can be filtered to construct a virtual organization that provides a global schema. The virtual organization lightens the burden on users because they need not search information site by site manually. The application layer, which is composed of User Agents, Geo-Agents, and Task Execution Agents, provides a corresponding interface to a domain user.
The functions that the ASISG should provide are: 1) Integration of different spatial information systems at the semantic level. The grid management layer establishes a virtual environment that seamlessly integrates all GIS nodes. 2) When the resource management system searches data across different spatial information systems, it translates the meaning of different Local Ontology Agents rather than accessing data directly, so search and query operate at the semantic level. 3) The data access procedure is transparent to users: they can access information from a remote site as if from a local disk, because the General Ontology Agent automatically links data through the Data Agents that connect ontology concepts to GIS data. 4) The capability of processing massive spatial data: storing, accessing, and managing spatial data from terabytes to petabytes; efficiently analyzing and processing spatial data to produce models, information, and knowledge; and providing 3D and multimedia visualization services. 5) The capability of high-performance computing and processing of spatial information: solving spatial problems with high precision, high quality, and on a large scale, and processing spatial information in real time or on time, with high speed and high efficiency. 6) The capability of sharing spatial resources. Distributed heterogeneous spatial information resources are shared, integrated, and made interoperable at the semantic level, so as to make the best use of spatial information resources such as computing resources, storage devices, spatial data (integrated from GIS, RS, and GPS), spatial applications and services, and GIS platforms. 7) The capability of integrating legacy GIS systems. The ASISG can not only be used to construct new advanced spatial application systems but can also integrate legacy GIS systems, preserving extensibility and inheritance and protecting users' investment. 8) The capability of collaboration.
Large-scale spatial information applications and services always involve different departments in different geographic places, so remote and uniform services are needed. 9) The capability of supporting the integration of heterogeneous systems. Large-scale spatial information systems are usually synthesized applications, so the ASISG should provide interoperation and consistency by adopting open and applied technology standards. 10) The capability of adapting to dynamic change. Business requirements, application patterns, management strategies, and IT products change endlessly in any organization, so the ASISG should be self-adaptive. Two examples are provided in this paper; they show in detail how to design a semantic grid based on multi-agent systems and ontology. In conclusion, a semantic grid for spatial information systems could improve the integration and interoperability of a spatial information grid.
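The dissemination/discovery pattern of the grid management layer can be sketched as a simple registry: local agents push resource descriptions to the General Ontology Agent, and consumers pull resources whose semantic tags match a query. The class and method names below are illustrative assumptions, not the paper's API.

```python
class GeneralOntologyAgent:
    """Central registry for the hybrid dissemination/discovery method:
    local agents push (disseminate) resource descriptions, and consumers
    pull (discover) resources matching a set of semantic tags."""

    def __init__(self):
        self.registry = {}               # resource name -> set of semantic tags

    def disseminate(self, name, tags):
        """Push: register a resource under its semantic description."""
        self.registry[name] = set(tags)

    def discover(self, required_tags):
        """Pull: return resources whose tags cover the query."""
        required = set(required_tags)
        return sorted(name for name, tags in self.registry.items()
                      if required <= tags)
```

Users thus query by meaning (tags) rather than by site, which is the sense in which search operates "at the semantic level".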
Semantic integration to identify overlapping functional modules in protein interaction networks
Cho, Young-Rae; Hwang, Woochang; Ramanathan, Murali; Zhang, Aidong
2007-01-01
Background: The systematic analysis of protein-protein interactions can enable a better understanding of cellular organization, processes and functions. Functional modules can be identified from the protein interaction networks derived from experimental data sets. However, these analyses are challenging because of the presence of unreliable interactions and the complex connectivity of the network. The integration of protein-protein interactions with the data from other sources can be leveraged for improving the effectiveness of functional module detection algorithms. Results: We have developed novel metrics, called semantic similarity and semantic interactivity, which use Gene Ontology (GO) annotations to measure the reliability of protein-protein interactions. The protein interaction networks can be converted into a weighted graph representation by assigning the reliability values to each interaction as a weight. We presented a flow-based modularization algorithm to efficiently identify overlapping modules in the weighted interaction networks. The experimental results show that the semantic similarity and semantic interactivity of interacting pairs were positively correlated with functional co-occurrence. The effectiveness of the algorithm for identifying modules was evaluated using functional categories from the MIPS database. We demonstrated that our algorithm had higher accuracy compared to other competing approaches. Conclusion: The integration of protein interaction networks with GO annotation data and the capability of detecting overlapping modules substantially improve the accuracy of module identification. PMID:17650343
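As a simplified stand-in for the reliability weighting step, the sketch below weights each interaction by the Jaccard overlap of the two proteins' GO annotation sets. The paper's actual semantic similarity and interactivity metrics additionally exploit the GO hierarchy and network structure; this flat-set version only illustrates the weighted-graph conversion.

```python
def go_jaccard(annot_a, annot_b):
    """Jaccard overlap of two proteins' GO term sets (simplified reliability)."""
    a, b = set(annot_a), set(annot_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def weight_network(edges, annotations):
    """Convert an interaction list into a weighted graph: each edge gets a
    GO-derived reliability value as its weight."""
    return {(u, v): go_jaccard(annotations.get(u, ()), annotations.get(v, ()))
            for u, v in edges}
```

Interactions between functionally unannotated or unrelated proteins receive weight 0, so downstream module detection can discount them as unreliable.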
Deutsch, Avital
2016-02-01
In the present study we investigated to what extent the morphological facilitation effect induced by the derivational root morpheme in Hebrew is independent of semantic meaning and grammatical information of the part of speech involved. Using the picture-word interference paradigm with auditorily presented distractors, Experiment 1 compared the facilitation effect induced by semantically transparent versus semantically opaque morphologically related distractor words (i.e., a shared root) on the production latency of bare nouns. The results revealed almost the same amount of facilitation for both relatedness conditions. These findings accord with the results of the few studies that have addressed this issue in production in Indo-European languages, as well as previous studies in written word perception in Hebrew. Experiment 2 compared the root's facilitation effect, induced by morphologically related nominal versus verbal distractors, on the production latency of bare nouns. The results revealed a facilitation effect of similar size induced by the shared root, regardless of the distractor's part of speech. It is suggested that the principle that governs lexical organization at the level of morphology, at least for Hebrew roots, is form-driven and independent of semantic meaning. This principle of organization crosses the linguistic domains of production and written word perception, as well as grammatical organization according to part of speech.
Model of Image Artifacts from Dust Particles
NASA Technical Reports Server (NTRS)
Willson, Reg
2008-01-01
A mathematical model of image artifacts produced by dust particles on lenses has been derived. Machine-vision systems often have to work with camera lenses that become dusty during use. Dust particles on the front surface of a lens produce image artifacts that can potentially affect the performance of a machine-vision algorithm. The present model satisfies a need for a means of synthesizing dust image artifacts for testing machine-vision algorithms for robustness (or the lack thereof) in the presence of dust on lenses. A dust particle can absorb light or scatter light out of some pixels, thereby giving rise to a dark dust artifact. It can also scatter light into other pixels, thereby giving rise to a bright dust artifact. For the sake of simplicity, this model deals only with dark dust artifacts. The model effectively represents dark dust artifacts as an attenuation image consisting of an array of diffuse darkened spots centered at image locations corresponding to the locations of dust particles. The dust artifacts are computationally incorporated into a given test image by simply multiplying the brightness value of each pixel by a transmission factor that incorporates the factor of attenuation, by dust particles, of the light incident on that pixel. With respect to computation of the attenuation and transmission factors, the model is based on a first-order geometric (ray)-optics treatment of the shadows cast by dust particles on the image detector. In this model, the light collected by a pixel is deemed to be confined to a pair of cones defined by the location of the pixel's image in object space, the entrance pupil of the lens, and the location of the pixel in the image plane (see Figure 1). For simplicity, it is assumed that the size of a dust particle is somewhat less than the diameter, at the front surface of the lens, of any collection cone containing all or part of that dust particle.
Under this assumption, the shape of any individual dust particle artifact is the shape (typically, circular) of the aperture, and the contribution of the particle to the attenuation factor for a given pixel is the fraction of the cross-sectional area of the collection cone occupied by the particle. Assuming that dust particles do not overlap, the net transmission factor for a given pixel is calculated as one minus the sum of attenuation factors contributed by all dust particles affecting that pixel. In a test, the model was used to synthesize attenuation images for random distributions of dust particles on the front surface of a lens at various relative aperture (F-number) settings. As shown in Figure 2, the attenuation images resembled dust artifacts in real test images recorded while the lens was aimed at a white target.
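A minimal sketch of the attenuation-image computation described above: each pixel's brightness is multiplied by a transmission factor equal to one minus the summed attenuation of all dust spots covering it. A Gaussian falloff stands in here for the diffuse spot profile; the model itself derives the spot shape from the aperture geometry, which is not reproduced.

```python
import numpy as np

def apply_dust_artifacts(image, centers, radii, alphas):
    """Darken an image with diffuse circular dust shadows.

    centers: (row, col) spot centers; radii: spot sizes in pixels;
    alphas: peak attenuation per spot. Transmission is clipped to [0, 1],
    consistent with the model's assumption that particles do not overlap.
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    attenuation = np.zeros((h, w))
    for (cy, cx), r, alpha in zip(centers, radii, alphas):
        # Gaussian falloff approximates the diffuse, defocused spot shape
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        attenuation += alpha * np.exp(-d2 / (2.0 * r ** 2))
    transmission = np.clip(1.0 - attenuation, 0.0, 1.0)
    return image * transmission
```

Synthesizing such attenuation images over random particle distributions reproduces the kind of test described in the final paragraph.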
Gold, Carl A; Marchant, Natalie L; Koutstaal, Wilma; Schacter, Daniel L; Budson, Andrew E
2007-09-20
The presence or absence of conceptual information in pictorial stimuli may explain the mixed findings of previous studies of false recognition in patients with mild Alzheimer's disease (AD). To test this hypothesis, 48 patients with AD were compared to 48 healthy older adults on a recognition task first described by Koutstaal et al. [Koutstaal, W., Reddy, C., Jackson, E. M., Prince, S., Cendan, D. L., & Schacter D. L. (2003). False recognition of abstract versus common objects in older and younger adults: Testing the semantic categorization account. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 499-510]. Participants studied and were tested on their memory for categorized ambiguous pictures of common objects. The presence of conceptual information at study and/or test was manipulated by providing or withholding disambiguating semantic labels. Analyses focused on testing two competing theories. The semantic encoding hypothesis, which posits that the inter-item perceptual details are not encoded by AD patients when conceptual information is present in the stimuli, was not supported by the findings. In contrast, the conceptual fluency hypothesis was supported. Enhanced conceptual fluency at test dramatically shifted AD patients to a more liberal response bias, raising their false recognition. These results suggest that patients with AD rely on the fluency of test items in making recognition memory decisions. We speculate that AD patients' over-reliance upon fluency may be attributable to (1) dysfunction of the hippocampus, disrupting recollection, and/or (2) dysfunction of prefrontal cortex, disrupting post-retrieval processes.
SU30. Long-Term Memory Deficits in Schizophrenia: Are All Things Equal?
Rossell, Susan
2017-01-01
Background: Kraepelin and Bleuler noted over a century ago that patients with schizophrenia had significant cognitive deficits; however, their observations with regard to long-term memory have not been borne out by empirical studies. They reported that episodic memory was intact but indicated that the organization of memories, or semantic memory, was disordered. This study aimed to synthesize a century of research on the 2 long-term memory processes of episodic and semantic memory across the psychosis continuum: chronic patients, first-episode patients, high risk for psychosis cohorts, and persons with high schizotypy. Methods: A systematic review and meta-analysis was completed within the 2 domains of long-term memory across the psychosis continuum. Search terms included long-term memory, episodic, semantic, and derivations of these terms. The data were synthesized independently for episodic and semantic memory. Four independent populations were investigated: chronic patients, first-episode patients, high risk for psychosis cohorts, and persons with high schizotypy. Our approach followed the PRISMA guidelines. Thus, pooled mean effect sizes are reported for 8 analyses; each effect size compares a case cohort with a healthy control cohort. Results: The results were as follows, for episodic memory: chronic patients d = 1.12, first-episode patients d = 1.12, high risk d = 1.14, and high schizotypy d = 0.13, establishing that there is little evidence of episodic memory deficits in persons with high schizotypy. For semantic memory, the literature showed a different pattern: chronic patients d = 1.2, first-episode patients d = 1.08, high risk d = 1.16, and high schizotypy d = 0.95. Thus, there is a consistent degree of semantic memory deficit across the continuum.
Conclusion: The literature suggests a dissociated pattern of long-term memory deficits, whereby semantic memory abnormalities are more likely to be considered endophenotypes or cognitive markers for schizophrenia than episodic memory deficits. Differential patterns of semantic memory organization are argued to be present prior to the onset of the disorder. There is additional evidence to suggest that idiosyncratic storage of semantic material underlies the development of the unusual beliefs and speech patterns present in the form of delusions and formal thought disorder. Consequently, semantic memory might be a useful target for cognitive remediation.
Optimization-Based Image Reconstruction with Artifact Reduction in C-Arm CBCT
Xia, Dan; Langan, David A.; Solomon, Stephen B.; Zhang, Zheng; Chen, Buxin; Lai, Hao; Sidky, Emil Y.; Pan, Xiaochuan
2016-01-01
We investigate an optimization-based reconstruction, with an emphasis on image-artifact reduction, from data collected in C-arm cone-beam computed tomography (CBCT) employed in image-guided interventional procedures. In the study, an image to be reconstructed is formulated as a solution to a convex optimization program in which a weighted data divergence is minimized subject to a constraint on the image total variation (TV); a data-derivative fidelity is introduced in the program specifically for effectively suppressing dominant, low-frequency data artifact caused by, e.g., data truncation; and the Chambolle-Pock (CP) algorithm is tailored to reconstruct an image through solving the program. Like any other reconstructions, the optimization-based reconstruction considered depends upon numerous parameters. We elucidate the parameters, illustrate their determination, and demonstrate their impact on the reconstruction. The optimization-based reconstruction, when applied to data collected from swine and patient subjects, yields images with visibly reduced artifacts in contrast to the reference reconstruction, and it also appears to exhibit a high degree of robustness against distinctively different anatomies of imaged subjects and scanning conditions of clinical significance. Knowledge and insights gained in the study may be exploited for aiding in the design of practical reconstructions of truly clinical-application utility. PMID:27694700
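As a rough illustration of why a total-variation term suppresses artifacts while preserving edges, the toy sketch below minimizes a penalized (not constrained) smoothed-TV objective on a 1-D signal with plain gradient descent. The paper's actual program uses a weighted data divergence, a TV constraint, a data-derivative fidelity term, and the Chambolle-Pock algorithm, none of which are reproduced here.

```python
import numpy as np

def tv_denoise_1d(b, lam=0.2, eps=1e-2, iters=800, step=0.1):
    """Gradient descent on 0.5*||x - b||^2 + lam * sum sqrt((dx)^2 + eps).

    A toy stand-in: identity forward operator, smoothed TV, and plain
    gradient descent instead of the tailored Chambolle-Pock algorithm.
    """
    x = b.copy()
    for _ in range(iters):
        dx = np.diff(x)
        w = dx / np.sqrt(dx ** 2 + eps)    # derivative of smoothed |dx|
        div = np.zeros_like(x)
        div[:-1] -= w                      # adjoint of the forward difference
        div[1:] += w
        grad = (x - b) + lam * div
        x -= step * grad
    return x
```

On a noisy step signal, the result is smoothed within flat regions while the edge survives, which is the qualitative behavior that motivates TV-based regularization in the reconstruction.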
Varga, Nicole L.; Stewart, Rebekah A.; Bauer, Patricia J.
2016-01-01
Semantic memory, defined as our store of knowledge about the world, provides representational support for all of our higher order cognitive functions. As such, it is crucial that the contents of semantic memory remain accessible over time. Although memory for knowledge learned through direct observation has been previously investigated, we know very little about the retention of knowledge derived through integration of information acquired across separate learning episodes. The present research investigated cross-episode integration in 4-year-old children. Participants were presented with novel facts via distinct story episodes and tested for knowledge extension through cross-episode integration, as well as for retention of the information over a 1-week delay. In Experiment 1, children retained the self-derived knowledge over the delay, though performance was primarily evidenced in a forced-choice format. In Experiment 2, we sought to facilitate the accessibility and robustness of self-derived knowledge by providing a verbal reminder after the delay. The accessibility of self-derived knowledge increased, irrespective of whether participants successfully demonstrated knowledge of the integration facts during the first visit. The results suggest knowledge extended through integration remains accessible after delays, even in a population in which this learning process is less robust. The findings also demonstrate the facilitative effect of reminders on the accessibility and further extension of knowledge over extended time periods. PMID:26774259
Treatment of category generation and retrieval in aphasia: Effect of typicality of category items.
Kiran, Swathi; Sandberg, Chaleece; Sebastian, Rajani
2011-01-01
Purpose: Kiran and colleagues (Kiran, 2007, 2008; Kiran & Johnson, 2008; Kiran & Thompson, 2003) have previously suggested that training atypical examples within a semantic category is a more efficient treatment approach to facilitating generalization within the category than training typical examples. The present study extended our previous work examining the notion of semantic complexity within goal-derived (ad-hoc) categories in individuals with aphasia. Methods: Six individuals with fluent aphasia (range = 39-84 years) and varying degrees of naming deficits and semantic impairments were involved. Thirty typical and atypical items each from two categories were selected after an extensive stimulus norming task. Generative naming for the two categories was tested during baseline and treatment. Results: As predicted, training atypical examples in the category resulted in generalization to untrained typical examples in five out of the five patient-treatment conditions. In contrast, training typical examples (which was examined in three conditions) produced mixed results. One patient showed generalization to untrained atypical examples, whereas two patients did not show generalization to untrained atypical examples. Conclusions: Results of the present study supplement our existing data on the effect of a semantically based treatment for lexical retrieval by manipulating the typicality of category exemplars. PMID:21173393
NASA Astrophysics Data System (ADS)
Lewis, Adam D.; Katta, Nitesh; McElroy, Austin; Milner, Thomas; Fish, Scott; Beaman, Joseph
2018-04-01
Optical coherence tomography (OCT) has shown promise as a process sensor in selective laser sintering (SLS) due to its ability to yield depth-resolved data not attainable with conventional sensors. However, OCT images of nylon 12 powder and nylon 12 components fabricated via SLS contain artifacts that have not been previously investigated in the literature. A better understanding of light interactions with SLS powder and components is foundational for further research expanding the utility of OCT imaging in SLS and other additive manufacturing (AM) sensing applications. Specifically, in this work, nylon powder and sintered parts were imaged in air and in an index matching liquid. Subsequent image analysis revealed the cause of "signal-tail" OCT image artifacts to be a combination of both inter- and intraparticle multiple-scattering and reflections. Then, the OCT imaging depth of nylon 12 powder and the contrast-to-noise ratio of a sintered part were improved through the use of an index matching liquid. Finally, polymer crystals were identified as the main source of intraparticle scattering in nylon 12 powder. Implications of these results for future research utilizing OCT in SLS are also given.
NASA Astrophysics Data System (ADS)
Reuer, Matthew K.; Boyle, Edward A.; Cole, Julia E.
2003-05-01
The Cariaco Basin is an important archive of past climate variability given its response to inter- and extratropical climate forcing and the accumulation of annually laminated sediments within an anoxic water column. This study presents high-resolution surface coral trace element records ( Montastrea annularis and Siderastrea siderea) from Isla Tortuga, Venezuela, located within the upwelling center of this region. A two-fold reduction in Cd/Ca ratios (3.5-1.7 nmol/mol) is observed from 1946 to 1952 with no concurrent shift in Ba/Ca ratios. This reduction agrees with the hydrographic distribution of dissolved cadmium and barium and their expected response to upwelling. Significant anthropogenic variability is also observed in the Pb/Ca analysis, which reveals three lead maxima since 1920. Kinetic control of trace element ratios is inferred from an interspecies comparison of Cd/Ca and Ba/Ca ratios (consistent with the Sr/Ca kinetic artifact), but these artifacts are smaller than the environmental signal and do not explain the Cd/Ca transition. The trace element records agree with historical climate data and differ from sedimentary faunal abundance records, suggesting a linear response to North Atlantic extratropical forcing cannot account for the observed historical variability in this region.
NASA Astrophysics Data System (ADS)
Li, Jia; Tian, Yonghong; Gao, Wen
2008-01-01
In recent years, the amount of streaming video has grown rapidly on the Web. Often, retrieving these streaming videos offers the challenge of indexing and analyzing the media in real time because the streams must be treated as effectively infinite in length, thus precluding offline processing. Generally speaking, captions are important semantic clues for video indexing and retrieval. However, existing caption detection methods often have difficulty performing real-time detection for streaming video, and few of them address the differentiation of captions from scene texts and scrolling texts. In general, these texts have different roles in streaming video retrieval. To overcome these difficulties, this paper proposes a novel approach which explores inter-frame correlation analysis and wavelet-domain modeling for real-time caption detection in streaming video. In our approach, the inter-frame correlation information is used to distinguish caption texts from scene texts and scrolling texts. Moreover, wavelet-domain Generalized Gaussian Models (GGMs) are utilized to automatically remove non-text regions from each frame and keep only caption regions for further processing. Experimental results show that our approach is able to offer real-time caption detection with high recall and a low false alarm rate, and can also effectively discern caption texts from the other texts even at low resolutions.
NASA Astrophysics Data System (ADS)
Roskin, Joel; Taxel, Itamar
2017-04-01
This study reveals an attempt to condition agriculture in coastal aeolian sand holding a high water table. Twenty-six small sites, clustering in topographic lows of the Yavneh dunefield, southern Israeli coastal plain, yield surficial Early Islamic finds, and eroded 1-2 m high berms built of grey sand partially covered by parabolic and transverse dunes. Small winter ponds develop by some of the sites. A clay loam 2.5 m beneath the surface retains the water table at a depth of 2.2 m. Between the berms, a 10-50 cm thick grey sand unit dating by OSL to 0.9 ka (11th-12th century AD) underlies a loose aeolian sand cover and overlies sand whose upper parts date to 1.1 ka (9th-10th century AD). The grey unit displays slightly improved fertility (phosphate, potassium, nitrogen and calcium carbonate) in relation to the underlying sand, suggesting an anthropogenic enrichment of ash and refuse. Particle size is similar to the sand. Organic carbon (0.4-0.8%) and magnetic susceptibility (0-5 SI) values are quite low for both units. The artifact assemblage is mixed and comprised of small (<10 cm) pottery sherds, ceramic roof tiles, glass, marble and granite fragments, mosaic tesserae, pottery production waste, iron slag, animal bones, seashells, and coins dated between the 8th and 10th century. The artifacts pre-date the OSL age of the underlying grey sand. The pottery shares many characteristics with the rich ceramic assemblage of nearby inland Yavneh. The establishment of the sites may have been executed by the inhabitants of either Yavneh (or another major inland settlement) or the seashore Muslim military stronghold of Yavneh-Yam (Taxel, 2013). The density of the sites is remarkable compared with the paucity of Byzantine sites in the same region, indicating a distinct spatial pattern that served a specific purpose. The lack of buried artifacts and structures suggests that the sites did not serve for permanent/intensive occupation.
The widespread utilization of the rich assortment of Early Islamic artifacts, together with the relatively younger OSL ages of the underlying grey sand and the absence of older Byzantine pottery, suggests that the artifacts were rapidly dispersed upon the surface, probably from an abandoned and possibly partly pedogenized town dump dating to the 8th-10th century. The sites are interpreted to be part of an extensive agroecosystem comprised of berm-bordered agricultural plots in lows that allowed easy manual or root access to the high water table. The sites' character and ages closely resemble the well-preserved crisscross berms and inter-berm depressions south of ancient Caesarea that date to 0.86 ka (Roskin et al., 2015). The agricultural activity probably lasted no more than several decades to one century, but its utility remains a question. The study documents a challenging attempt to utilize uncultivated sand sheets in a Mediterranean environment for agroecosystem expansion, income, control and "greening" of the terrain. This effort is partly reminiscent of other Early Islamic agricultural water systems (e.g. qanats) in arid regions. It demonstrates that spatial agroecosystems can be developed in times that are not necessarily characterized by socio-political stability.
McAdam, C John; Hanton, Lyall R; Moratti, Stephen C; Simpson, Jim
2015-12-01
The isomeric derivatives 1,2-bis(iodomethyl)benzene, (I), and 1,3-bis(iodomethyl)benzene (II), both C8H8I2, were prepared by metathesis from their dibromo analogues. The ortho-derivative, (I), lies about a crystallographic twofold axis that bisects the C-C bond between the two iodomethyl substituents. The packing in (I) relies solely on C-H⋯I hydrogen bonds supported by weak parallel slipped π-π stacking interactions [intercentroid distance = 4.0569 (11) Å, interplanar distance = 3.3789 (8) Å and slippage = 2.245 Å]. While C-H⋯I hydrogen bonds are also found in the packing of (II), type II, I⋯I halogen bonds [I⋯I = 3.8662 (2) Å] and C-H⋯π contacts feature prominently in stabilizing the three-dimensional structure.
Naive Physics, Event Perception, Lexical Semantics, and Language Acquisition
1993-04-01
settings within a framework of universal grammar. His central claim is that children use primarily unembedded material as evidence for the parameter...differentiate embedded from unembedded material. Deriving such structural information requires that the learner determine constituent order prior to other
Akama, Hiroyuki; Miyake, Maki; Jung, Jaeyoung; Murphy, Brian
2015-01-01
In this study, we introduce an original distance definition for graphs, called the Markov-inverse-F measure (MiF). This measure enables the integration of classical graph theory indices with new knowledge pertaining to structural feature extraction from semantic networks. MiF improves the conventional Jaccard and/or Simpson indices, and reconciles both the geodesic information (random walk) and co-occurrence adjustment (degree balance and distribution). We measure the effectiveness of graph-based coefficients through the application of linguistic graph information to neural activity recorded during conceptual processing in the human brain. Specifically, the MiF distance is computed between each of the nouns used in a previous neural experiment and each of the in-between words in a subgraph derived from the Edinburgh Word Association Thesaurus of English. From the MiF-based information matrix, a machine learning model can accurately obtain a scalar parameter that specifies the degree to which each voxel in (the MRI image of) the brain is activated by each word or each principal component of the intermediate semantic features. Furthermore, by correlating the voxel information with the MiF-based principal components, a new computational neurolinguistics model with a network connectivity paradigm is created. This allows two dimensions of context space to be incorporated with both semantic and neural distributional representations.
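The conventional Jaccard and Simpson indices that MiF builds on can be sketched over a toy word-association subgraph (hypothetical data, not the Edinburgh thesaurus):

```python
# Toy association subgraph: each word maps to its set of associates.
graph = {
    "dog": {"cat", "bone", "pet", "fur"},
    "cat": {"dog", "mouse", "pet", "fur"},
}

def jaccard(a, b):
    """Shared associates over all associates: |A ∩ B| / |A ∪ B|."""
    na, nb = graph[a], graph[b]
    return len(na & nb) / len(na | nb)

def simpson(a, b):
    """Shared associates over the smaller set: |A ∩ B| / min(|A|, |B|)."""
    na, nb = graph[a], graph[b]
    return len(na & nb) / min(len(na), len(nb))

print(jaccard("dog", "cat"))  # {pet, fur} shared of 6 distinct -> 0.333...
print(simpson("dog", "cat"))  # 2 shared of min(4, 4) -> 0.5
```

Both indices see only the immediate neighbor sets; MiF's contribution, per the abstract, is to fold in geodesic (random-walk) information and degree adjustments that these set-overlap measures ignore.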
Coane, Jennifer H.; Sánchez-Gutiérrez, Claudia; Stillman, Chelsea M.; Corriveau, Jennifer A.
2014-01-01
Idiomatic expressions can be interpreted literally or figuratively. These two meanings are often processed in parallel or very rapidly, as evidenced by online measures of idiomatic processing. Because in many cases the figurative meaning cannot be derived from the component lexical elements and because of the speed with which this meaning is accessed, it is assumed such meanings are stored in semantic memory. In the present study, we examined how literal equivalents and intact idiomatic expressions are stored in memory and whether episodic memory traces interact or interfere with semantic-level representations and vice versa. To examine age-invariance, younger and older adults studied lists of idioms and literal equivalents. On a recognition test, some studied items were presented in the alternative form (e.g., if the idiom was studied, its literal equivalent was tested). False alarms to these critical items suggested that studying literal equivalents activates the idiom from which they are derived, presumably due to spreading activation in lexical/semantic networks, and results in high rates of errors. Importantly, however, the converse (false alarms to literal equivalents after studying the idiom) were significantly lower, suggesting an advantage in storage for idioms. The results are consistent with idiom processing models that suggest obligatory access to figurative meanings and that this access can also occur indirectly, through literal equivalents. PMID:25101030
NASA Astrophysics Data System (ADS)
Abu Anas, Emran Mohammad; Kim, Jae Gon; Lee, Soo Yeol; Kamrul Hasan, Md
2011-10-01
The use of an x-ray flat panel detector is increasingly becoming popular in 3D cone beam volume CT machines. Due to deficiencies in the semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image, and as a result, the diagnostic image quality is degraded. In this paper, a novel technique is presented for the correction of error in the 2D cone beam projections due to abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties, and then an effective non-causal derivative-based detection algorithm in 2D space is presented for the detection of defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for the estimation of responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using the normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images, and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other reported techniques in the literature.
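The detector abnormalities described here appear as vertical stripes in the sinogram (and hence rings after reconstruction). A much cruder classical baseline than the paper's template/inpainting method, shown only to illustrate the stripe structure, is to subtract each column's deviation from a locally smoothed column-mean profile (all data here are toy values):

```python
def destripe(sino):
    """Crude sinogram destriping: remove each detector column's deviation
    of its mean from a 3-tap smoothed mean profile. Classical baseline
    only -- not the template/inpainting method of the paper."""
    rows, cols = len(sino), len(sino[0])
    col_mean = [sum(sino[r][c] for r in range(rows)) / rows
                for c in range(cols)]
    # 3-tap moving average of the column-mean profile (shorter at edges)
    smooth = [sum(col_mean[max(0, c - 1):c + 2]) /
              len(col_mean[max(0, c - 1):c + 2])
              for c in range(cols)]
    return [[sino[r][c] - (col_mean[c] - smooth[c]) for c in range(cols)]
            for r in range(rows)]

# A constant scene with one mis-calibrated detector column (column 2):
sino = [[1.0, 1.0, 5.0, 1.0, 1.0] for _ in range(4)]
out = destripe(sino)
print(out[0])  # the stripe at column 2 is pulled toward its neighbors
```

Mean-based destriping of this kind removes only offsets that are constant down a column, which is why the paper needs separate handling for defective versus mis-calibrated elements.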
Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.
Tang, Shaojie; Tang, Xiangyang
2016-09-01
The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone-beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane, determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. The solution is an integration of three-dimensional (3-D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer simulated Forbild head and thoracic phantoms that are rigorous in inspecting the reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing in the images reconstructed by the 3-D weighted axial CB-BPF/DBPF algorithm located at off-central planes. Integrated with orthogonal butterfly filtering, the 3-D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3-D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. The proposed 3-D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications.
Provenance in Data Interoperability for Multi-Sensor Intercomparison
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Leptoukh, Greg; Berrick, Steve; Shen, Suhung; Prados, Ana; Fox, Peter; Yang, Wenli; Min, Min; Holloway, Dan; Enloe, Yonsook
2008-01-01
As our inventory of Earth science data sets grows, the ability to compare, merge and fuse multiple datasets grows in importance. This requires a deeper data interoperability than we have now. Efforts such as Open Geospatial Consortium and OPeNDAP (Open-source Project for a Network Data Access Protocol) have broken down format barriers to interoperability; the next challenge is the semantic aspects of the data. Consider the issues when satellite data are merged, cross-calibrated, validated, inter-compared and fused. We must match up data sets that are related, yet different in significant ways: the phenomenon being measured, measurement technique, location in space-time or quality of the measurements. If subtle distinctions between similar measurements are not clear to the user, results can be meaningless or lead to an incorrect interpretation of the data. Most of these distinctions trace to how the data came to be: sensors, processing and quality assessment. For example, monthly averages of satellite-based aerosol measurements often show significant discrepancies, which might be due to differences in spatio-temporal aggregation, sampling issues, sensor biases, algorithm differences or calibration issues. Provenance information must be captured in a semantic framework that allows data inter-use tools to incorporate it and aid in the interpretation of comparison or merged products. Semantic web technology allows us to encode our knowledge of measurement characteristics, phenomena measured, space-time representation, and data quality attributes in a well-structured, machine-readable ontology and rulesets. An analysis tool can use this knowledge to show users the provenance-related distinctions between two variables, advising on options for further data processing and analysis. An additional problem for workflows distributed across heterogeneous systems is retrieval and transport of provenance.
Provenance may be either embedded within the data payload, or transmitted from server to client in an out-of-band mechanism. The out-of-band mechanism is more flexible in the richness of provenance information that can be accommodated, but it relies on a persistent framework and can be difficult for legacy clients to use. We are prototyping the embedded model, incorporating provenance within metadata objects in the data payload. Thus, it always remains with the data. The downside is a limit to the size of provenance metadata that we can include, an issue that will eventually need resolution to encompass the richness of provenance information required for data intercomparison and merging.
Lies, K H; Hartung, A; Postulka, A; Gring, H; Schulze, J
1986-01-01
For particulate emissions, standards were established by the US EPA in February 1980. Regulations limiting particulates from new light duty diesel vehicles are valid by model year 1982. The corresponding standards on a pure mass basis do not take into account any chemical character of the diesel particulate matter. Our investigation of the material composition shows that diesel particulates consist mainly of soot (up to 80% by weight) and adsorptively bound organics including polycyclic aromatic hydrocarbons (PAH). The qualitative and quantitative nature of hydrocarbon compounds associated with the particulates is dependent not only on the combustion parameters of the engine but also to an important degree on the sampling conditions when the particulates are collected (dilution ratio, temperature, filter material, sampling time etc.). Various methods for the analyses of PAH and their oxy- and nitro-derivatives are described including sampling, extraction, fractionation and chemical analysis. Quantitative comparisons of PAH, nitro-PAH and oxy-PAH from different engines are given. For assessing mutagenicity of particulate matter, short-term biological tests are widely used. These biological tests often need a great amount of particulate matter, requiring prolonged filter sampling times. Since it is well known that facile PAH oxidation can take place under the conditions used for sampling and analysis, the question arises whether these PAH derivatives found in particle extracts are partly or totally produced during sampling (artifacts). Various results concerning nitro- and oxy-PAH are presented characterizing artifact formation as a minor problem under the conditions of the Federal Test Procedure. But results show that under other sampling conditions, e.g. electrostatic precipitation, higher NO2-concentrations and longer sampling times, artifact formation can become a bigger problem.
The more stringent particulate standard of 0.2 g/mi for model years 1986 and 1987 respectively requires particulate trap technology. Preliminary investigations of the efficiency of ceramic filters used reveal that the reduction of the adsorptively bound organics is lower than the decrease of the solid carbonaceous fractions.
Mimvec: a deep learning approach for analyzing the human phenome.
Gan, Mingxin; Li, Wenran; Zeng, Wanwen; Wang, Xiaojian; Jiang, Rui
2017-09-21
The human phenome has been widely used with a variety of genomic data sources in the inference of disease genes. However, most existing methods thus far derive phenotype similarity based on the analysis of biomedical databases by using the traditional term frequency-inverse document frequency (TF-IDF) formulation. This framework, though intuitive, not only ignores semantic relationships between words but also tends to produce high-dimensional vectors, and hence lacks the ability to precisely capture intrinsic semantic characteristics of biomedical documents. To overcome these limitations, we propose a framework called mimvec to analyze the human phenome by making use of the state-of-the-art deep learning technique in natural language processing. We converted 24,061 records in the Online Mendelian Inheritance in Man (OMIM) database to low-dimensional vectors using our method. We demonstrated that the vector representation not only effectively enabled classification of phenotype records against gene ones, but also succeeded in discriminating diseases of different inheritance styles and different mechanisms. We further derived pairwise phenotype similarities between 7988 human inherited diseases using their vector representations. With a joint analysis of this phenome with multiple genomic data, we showed that phenotype overlap indeed implied genotype overlap. We finally used the derived phenotype similarities with genomic data to prioritize candidate genes and demonstrated advantages of this method over existing ones. Our method is capable of not only capturing semantic relationships between words in biomedical records but also alleviating the dimensional disaster accompanying the traditional TF-IDF framework. As precision medicine approaches, abundant electronic medical and health records will await deep analysis, and we expect to see a wide spectrum of applications borrowing the idea of our method in the near future.
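The TF-IDF formulation whose limitations motivate mimvec can be sketched in a few lines (toy documents with hypothetical disease terms, not OMIM records):

```python
import math
from collections import Counter

docs = [
    "retinitis pigmentosa night blindness retinal degeneration",
    "retinal degeneration macular dystrophy",
    "cystic fibrosis lung infection",
]

tokenized = [d.split() for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})
n = len(docs)

def tfidf(doc):
    """One weight per vocabulary word: term frequency times
    inverse document frequency."""
    counts = Counter(doc)
    vec = []
    for w in vocab:
        tf = counts[w] / len(doc)
        df = sum(1 for d in tokenized if w in d)
        vec.append(tf * math.log(n / df))
    return vec

v = tfidf(tokenized[0])
# Dimensionality equals vocabulary size, so it grows with the corpus,
# and synonymous words occupy disjoint dimensions -- the two weaknesses
# the abstract attributes to TF-IDF.
print(len(v), len(vocab))
```

Dense embedding methods such as the one mimvec adopts replace these sparse, vocabulary-sized vectors with fixed low-dimensional ones in which related words can land near each other.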
Semantic Indexing of Medical Learning Objects: Medical Students' Usage of a Semantic Network
Gießler, Paul; Ohnesorge-Radtke, Ursula; Spreckelsen, Cord
2015-01-01
Background The Semantically Annotated Media (SAM) project aims to provide a flexible platform for searching, browsing, and indexing medical learning objects (MLOs) based on a semantic network derived from established classification systems. Primarily, SAM supports the Aachen emedia skills lab, but SAM is ready for indexing distributed content and the Simple Knowledge Organizing System standard provides a means for easily upgrading or even exchanging SAM’s semantic network. There is a lack of research addressing the usability of MLO indexes or search portals like SAM and the user behavior with such platforms. Objective The purpose of this study was to assess the usability of SAM by investigating characteristic user behavior of medical students accessing MLOs via SAM. Methods In this study, we chose a mixed-methods approach. Lean usability testing was combined with usability inspection by having the participants complete four typical usage scenarios before filling out a questionnaire. The questionnaire was based on the IsoMetrics usability inventory. Direct user interaction with SAM (mouse clicks and pages accessed) was logged. Results The study analyzed the typical usage patterns and habits of students using a semantic network for accessing MLOs. Four scenarios capturing characteristics of typical tasks to be solved by using SAM yielded high ratings of usability items and showed good results concerning the consistency of indexing by different users. Long-tail phenomena emerge as they are typical for a collaborative Web 2.0 platform. Suitable but nonetheless rarely used keywords were assigned to MLOs by some users. Conclusions It is possible to develop a Web-based tool with high usability and acceptance for indexing and retrieval of MLOs. SAM can be applied to indexing multicentered repositories of MLOs collaboratively. PMID:27731860
Knowledge-based understanding of aerial surveillance video
NASA Astrophysics Data System (ADS)
Cheng, Hui; Butler, Darren
2006-05-01
Aerial surveillance has long been used by the military to locate, monitor and track the enemy. Recently, its scope has expanded to include law enforcement activities, disaster management and commercial applications. With the ever-growing amount of aerial surveillance video acquired daily, there is an urgent need for extracting actionable intelligence in a timely manner. Furthermore, to support high-level video understanding, this analysis needs to go beyond current approaches and consider the relationships, motivations and intentions of the objects in the scene. In this paper we propose a system for interpreting aerial surveillance videos that automatically generates a succinct but meaningful description of the observed regions, objects and events. For a given video, the semantics of important regions and objects, and the relationships between them, are summarised into a semantic concept graph. From this, a textual description is derived that provides new search and indexing options for aerial video and enables the fusion of aerial video with other information modalities, such as human intelligence, reports and signal intelligence. Using a Mixture-of-Experts video segmentation algorithm an aerial video is first decomposed into regions and objects with predefined semantic meanings. The objects are then tracked and coerced into a semantic concept graph and the graph is summarized spatially, temporally and semantically using ontology guided sub-graph matching and re-writing. The system exploits domain specific knowledge and uses a reasoning engine to verify and correct the classes, identities and semantic relationships between the objects. This approach is advantageous because misclassifications lead to knowledge contradictions and hence they can be easily detected and intelligently corrected. In addition, the graph representation highlights events and anomalies that a low-level analysis would overlook.
Sollmann, Nico; Tanigawa, Noriko; Tussis, Lorena; Hauck, Theresa; Ille, Sebastian; Maurer, Stefanie; Negwer, Chiara; Zimmer, Claus; Ringel, Florian; Meyer, Bernhard; Krieg, Sandro M
2015-04-01
Knowledge about the cortical representation of semantic processing is mainly derived from functional magnetic resonance imaging (fMRI) or direct cortical stimulation (DCS) studies. Because DCS is regarded as the gold standard in terms of language mapping but can only be used during awake surgery due to its invasive character, repetitive navigated transcranial magnetic stimulation (rTMS)—a non-invasive modality that uses a technique similar to DCS—seems highly feasible for use in the investigation of semantic processing in the healthy human brain. A total of 100 (50 left-hemispheric and 50 right-hemispheric) rTMS-based language mappings were performed in 50 purely right-handed, healthy volunteers during an object-naming task. All rTMS-induced semantic naming errors were then counted and evaluated systematically. Furthermore, since the distribution of stimulations within both hemispheres varied between individuals and cortical regions stimulated, all elicited errors were standardized and subsequently related to their cortical sites by projecting the mapping results into the cortical parcellation system (CPS). Overall, the most left-hemispheric semantic errors were observed after targeting rTMS to the posterior middle frontal gyrus (pMFG; standardized error rate: 7.3‰), anterior supramarginal gyrus (aSMG; 5.6‰), and ventral postcentral gyrus (vPoG; 5.0‰). In contrast, the highest right-hemispheric error rates occurred after stimulation of the posterior superior temporal gyrus (pSTG; 12.4‰), middle superior temporal gyrus (mSTG; 6.2‰), and anterior supramarginal gyrus (aSMG; 6.2‰). Although error rates were low, the rTMS-based approach of investigating semantic processing during object naming shows convincing results compared to the current literature. Therefore, rTMS seems to be a valuable, safe, and reliable tool for the investigation of semantic processing within the healthy human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.
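The standardization described above relates elicited naming errors to the number of stimulations delivered to each cortical region, reported in per-mille (‰). A minimal sketch under that reading, with hypothetical counts (not the study's raw data):

```python
def standardized_error_rate(errors, stimulations):
    """Errors per 1000 stimulations (per-mille), used to compare
    cortical regions that received unequal numbers of stimulations."""
    if stimulations == 0:
        raise ValueError("no stimulations delivered to this region")
    return 1000.0 * errors / stimulations

# Hypothetical per-region counts (illustrative only):
regions = {"pMFG": (11, 1500), "aSMG": (7, 1250)}
rates = {r: standardized_error_rate(e, s) for r, (e, s) in regions.items()}
```

Dividing by the region-specific stimulation count is what makes error rates comparable across regions and hemispheres with different mapping coverage.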
Huebner, Philip A.; Willits, Jon A.
2018-01-01
Previous research has suggested that distributional learning mechanisms may contribute to the acquisition of semantic knowledge. However, distributional learning mechanisms, statistical learning, and contemporary “deep learning” approaches have been criticized for being incapable of learning the kind of abstract and structured knowledge that many think is required for acquisition of semantic knowledge. In this paper, we show that recurrent neural networks, trained on noisy naturalistic speech to children, do in fact learn what appears to be abstract and structured knowledge. We trained two types of recurrent neural networks (Simple Recurrent Network, and Long Short-Term Memory) to predict word sequences in a 5-million-word corpus of speech directed to children ages 0–3 years old, and assessed what semantic knowledge they acquired. We found that learned internal representations are encoding various abstract grammatical and semantic features that are useful for predicting word sequences. Assessing the organization of semantic knowledge in terms of the similarity structure, we found evidence of emergent categorical and hierarchical structure in both models. We found that the Long Short-term Memory (LSTM) and SRN are both learning very similar kinds of representations, but the LSTM achieved higher levels of performance on a quantitative evaluation. We also trained a non-recurrent neural network, Skip-gram, on the same input to compare our results to the state-of-the-art in machine learning. We found that Skip-gram achieves relatively similar performance to the LSTM, but is representing words more in terms of thematic compared to taxonomic relations, and we provide reasons why this might be the case. Our findings show that a learning system that derives abstract, distributed representations for the purpose of predicting sequential dependencies in naturalistic language may provide insight into emergence of many properties of the developing semantic system. PMID:29520243
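The similarity structure assessed above can be quantified with cosine similarity over learned word vectors. A toy illustration with made-up three-dimensional embeddings (the models in the paper learn far higher-dimensional representations):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy embeddings (illustrative, not the paper's learned vectors):
emb = {
    "dog":  [0.9, 0.1, 0.0],
    "cat":  [0.8, 0.2, 0.1],
    "sofa": [0.1, 0.9, 0.3],
}
# Taxonomic neighbours should be closer than cross-category pairs:
assert cosine(emb["dog"], emb["cat"]) > cosine(emb["dog"], emb["sofa"])
```

Hierarchical clustering of such pairwise similarities is one common way to probe the emergent categorical structure the authors report.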
Technical Note: On GAFChromic EBT-XD film and the lateral response artifact.
Lewis, David F; Chan, Maria F
2016-02-01
The new radiochromic film, GAFChromic EBT-XD, contains the same active material, lithium-10,12-pentacosadiynoate, as GAFChromic EBT3, but the crystalline form is different. This work investigates the effect of this change on the well-known lateral response artifact when EBT-XD film is digitized on a flatbed scanner. The dose response of a single production lot of EBT-XD was characterized by scanning an unexposed film plus a set of films exposed to doses between 2.5 and 50 Gy using 6 MV photons. To characterize the lateral response artifact, the authors used the unexposed film plus a subset of samples exposed to doses between 20 and 50 Gy. Digital images of these films were acquired at seven discrete lateral locations perpendicular to the scan direction on three Epson 10000XL scanners. Using measurements at the discrete lateral positions, the scanner responses were determined as a function of the lateral position of the film. From the data for each scanner, a set of coefficients were derived whereby measured response values could be corrected to remove the effects of the lateral response artifact. The EBT-XD data were analyzed as in their previous work and compared to results reported for EBT3 in that paper. For films scanned in the same orientation and having equal responses, the authors found that the lateral response artifact for EBT-XD and EBT3 films was remarkably similar. For both films, the artifact increases with increased net response. However, as EBT-XD is less sensitive than EBT3, a greater exposure dose is required to reach the same net response. On this basis, the lower sensitivity of EBT-XD relative to EBT3 results in less net response change for equal exposure and a reduction in the impact of the lateral response artifact. The shape of the crystalline active component in EBT-XD and EBT3 does not affect the fundamental existence of the lateral response artifact when the films are digitized on flatbed scanners. 
Owing to its lower sensitivity, EBT-XD film requires a higher dose to reach the same response as EBT3, resulting in a smaller impact of the lateral response artifact. For doses >10 Gy, the slopes of the EBT-XD red and green channel dose response curves are greater than the corresponding ones for EBT3. For these two reasons, the authors prefer EBT-XD for doses exceeding about 10 Gy.
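The correction described above maps response values measured at a given lateral offset back to their centreline equivalents. A minimal sketch of such a per-scanner correction, assuming a simple quadratic-in-position model with hypothetical coefficients (the paper derives its own coefficients per scanner and response level):

```python
def correct_lateral_response(measured, position_cm, coeffs):
    """Remove the lateral response artifact with a hypothetical
    quadratic-in-position model:
        corrected = measured / (1 + a*x + b*x**2)
    where x is the lateral offset from the scanner centreline in cm."""
    a, b = coeffs
    return measured / (1.0 + a * position_cm + b * position_cm ** 2)

# Hypothetical coefficients for one scanner; at the centreline (x = 0)
# the correction reduces to the identity.
coeffs = (-1.2e-3, 4.0e-4)
centre = correct_lateral_response(30000, 0.0, coeffs)
edge = correct_lateral_response(30000, 10.0, coeffs)
```

The key property, consistent with the paper's findings, is that the size of the correction grows with net response, so a less sensitive film (EBT-XD) needs less correction at equal dose.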
Construction of an annotated corpus to support biomedical information extraction
Thompson, Paul; Iqbal, Syed A; McNaught, John; Ananiadou, Sophia
2009-01-01
Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). 
Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes. PMID:19852798
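Inter-annotator agreement of the kind reported (66%-90% across facets of the task) can, in its simplest form, be computed as the fraction of identically labelled items. A minimal sketch with invented role assignments drawn from the scheme's vocabulary (the paper's evaluation is considerably more fine-grained):

```python
def agreement_rate(ann_a, ann_b):
    """Simple inter-annotator agreement: the fraction of items (e.g.
    event arguments) to which both annotators assigned the same
    semantic role."""
    assert len(ann_a) == len(ann_b)
    matches = sum(1 for x, y in zip(ann_a, ann_b) if x == y)
    return matches / len(ann_a)

# Invented role labels for five event arguments (illustrative only):
a = ["Agent", "Theme", "Location", "Manner", "Theme"]
b = ["Agent", "Theme", "Location", "Temporal", "Theme"]
rate = agreement_rate(a, b)  # 4 of 5 assignments match
```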
WebGIS based on semantic grid model and web services
NASA Astrophysics Data System (ADS)
Zhang, WangFei; Yue, CaiRong; Gao, JianGuo
2009-10-01
As the meeting point of network technology and GIS technology, WebGIS has developed rapidly in recent years. However, constrained by the Web on one side and by the characteristics of GIS on the other, traditional WebGIS suffers from prominent problems: it cannot achieve interoperability across heterogeneous spatial databases, nor cross-platform data access. The emergence of Web Services and Grid technology has brought great change to the WebGIS field. A Web Service provides an interface that gives different sites the ability to share data and intercommunicate. The goal of Grid technology is to turn the Internet into one large supercomputer that efficiently implements the overall sharing of computing resources, storage resources, data resources, information resources, knowledge resources and expert resources. For WebGIS, however, this achieves only the physical connection of data and information, which is far from enough. Because experts in different fields understand the world differently, follow different professional regulations and policies, and have different habits, they reach different conclusions when observing the same geographic phenomenon, and semantic heterogeneity arises: the same concept can differ widely between fields. A WebGIS that ignores this semantic heterogeneity will answer users' questions wrongly, or fail to answer them at all. To solve this problem, this paper proposes and tests an effective method of combining a semantic grid with Web Services technology to develop WebGIS.
In this paper, we study how to construct the ontology and how to combine Grid technology with Web Services, and, based on a detailed analysis of the computing characteristics and application model of distributed data, we design an ontology-driven WebGIS query system based on Grid technology and Web Services.
Partial volume correction and image analysis methods for intersubject comparison of FDG-PET studies
NASA Astrophysics Data System (ADS)
Yang, Jun
2000-12-01
Partial volume effect is an artifact mainly due to the limited imaging sensor resolution. It creates bias in the measured activity in small structures and around tissue boundaries. In brain FDG-PET studies, especially for Alzheimer's disease studies where there is serious gray matter atrophy, accurate estimation of the cerebral metabolic rate of glucose is even more problematic due to the large partial volume effect. In this dissertation, we developed a framework enabling inter-subject comparison of partial volume corrected brain FDG-PET studies. The framework is composed of the following image processing steps: (1) MRI segmentation, (2) MR-PET registration, (3) MR-based PVE correction, and (4) MR 3D inter-subject elastic mapping. Through simulation studies, we showed that the newly developed partial volume correction methods, either pixel-based or ROI-based, performed better than previous methods. By applying this framework to a real Alzheimer's disease study, we demonstrated that the partial volume corrected glucose rates vary significantly among the control, at-risk and diseased patient groups, and that this framework is a promising tool for assisting early identification of Alzheimer's patients.
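ROI-based partial volume correction is commonly formulated as division of the observed PET activity by a recovery coefficient derived from the segmented MR anatomy and the scanner's point-spread function. A sketch of that standard formulation (the dissertation's own pixel- and ROI-based methods are more elaborate):

```python
def pvc_recovery(observed_activity, recovery_coefficient):
    """ROI-based partial volume correction: divide the PET-measured
    activity by the structure's recovery coefficient (0 < RC <= 1),
    which is estimated from the MR segmentation and the scanner
    point-spread function. Small or atrophied structures have small
    RCs and therefore large corrections."""
    if not 0.0 < recovery_coefficient <= 1.0:
        raise ValueError("recovery coefficient must lie in (0, 1]")
    return observed_activity / recovery_coefficient

# A small gray-matter ROI measured at 6.0 (arbitrary units) with an
# RC of 0.75 has a corrected value of 8.0:
corrected = pvc_recovery(6.0, 0.75)
```

This is exactly why atrophy matters: as gray matter shrinks, RC falls, and uncorrected values increasingly underestimate the true metabolic rate.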
Anderson, Andrew James; Bruni, Elia; Lopopolo, Alessandro; Poesio, Massimo; Baroni, Marco
2015-10-15
Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations. Copyright © 2015 Elsevier Inc. All rights reserved.
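Representational similarity analysis, the core comparison in the study, correlates the pairwise-similarity structure of a model with that of fMRI activity patterns. A compact sketch using Pearson correlation over the upper triangles of two similarity matrices:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def upper_triangle(m):
    """Flatten the upper triangle (excluding the diagonal) of a square
    similarity matrix: the unit of comparison in RSA."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def rsa(model_sim, brain_sim):
    """Correlate a model's pairwise-similarity structure with the
    similarity structure of fMRI activity patterns."""
    return pearson(upper_triangle(model_sim), upper_triangle(brain_sim))
```

Comparing such correlations across regions (image models versus text models) is what licenses the paper's claim about visual-specificity of ventral-temporal representations.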
ERIC Educational Resources Information Center
Dolby, James L.
1984-01-01
Suggests structure based on two sets of principles for deriving meaning from data: Shannon's measure of entropy, which provides means of measuring amount of information in message; and Ranganathan's faceted classification scheme, which provides means of determining number of meaningful data. Syntax, meaning, and semantics of data are discussed.…
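Shannon's entropy, the first of the two principles Dolby invokes as a measure of the amount of information in a message, can be stated in a few lines:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: the average information content of a
    message drawn from the given symbol probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries one bit per outcome; a certain outcome carries none.
fair = shannon_entropy([0.5, 0.5])
certain = shannon_entropy([1.0])
```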
An interactive dynamic analysis and decision support software for MR mammography.
Ertaş, Gökhan; Gülçür, H Ozcan; Tunaci, Mehtap
2008-06-01
Fully automated software is introduced to facilitate MR mammography (MRM) examinations and overcome subjectivity in diagnosis using normalized maximum intensity-time ratio (nMITR) maps. These maps inherently suppress enhancements due to normal parenchyma and blood vessels that surround lesions and have natural tolerance to small field inhomogeneities and motion artifacts. The classifier embedded within the software is trained with the normalized complexity and maximum nMITR of 22 lesions and tested with the features of the remaining 22 lesions. The achieved diagnostic performance is 92% sensitivity, 90% specificity, 91% accuracy, 92% positive predictive value and 90% negative predictive value. DynaMammoAnalyst shortens evaluation time considerably and reduces inter- and intra-observer variability by providing decision support.
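The reported performance figures follow directly from a confusion matrix over the 22 test lesions. One set of counts consistent with the reported percentages (illustrative; the paper reports only the percentages):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy, PPV and NPV from the
    confusion counts of a binary lesion classifier."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for the 22 test lesions, chosen to match the
# reported 92/90/91/92/90 percentages:
m = diagnostic_metrics(tp=11, fp=1, tn=9, fn=1)
```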
Semantic features of 'stepped' versus 'continuous' contours in German intonation.
Dombrowski, Ernst
2013-01-01
This study analyses the meaning spaces of German pitch contours using two modes of melodic movement: continuous or in steps of sustained pitch. Both the continuous and stepped movements are represented by a set of five basic patterns, the latter being derived from the former. Thirty-six German native speakers judged the pattern sets on a 12-scale semantic differential. The semantic profiles confirm that stepped contours can be conceived of as stylized intonation, in a formal as well as in a functional sense. On the one hand, continuous (non-stylized) and stepped (stylized) contours are assigned different overall meanings (especially on the scales astonished - commonplace and interested - not interested). On the other hand, listeners organize the two contour sets in a similar fashion, which speaks in favour of parallel pattern inventories of continuous and stepped movement, respectively. However, the meaning space of the stylized patterns is affected by formal restrictions, for instance in the step transformation of continuous rises. © 2014 S. Karger AG, Basel.
Bratsas, Charalampos; Koutkias, Vassilis; Kaimakamis, Evangelos; Bamidis, Panagiotis; Maglaveras, Nicos
2007-01-01
Medical Computational Problem (MCP) solving is related to medical problems and their computerized algorithmic solutions. In this paper, an extension of an ontology-based model to fuzzy logic is presented, as a means to enhance the information retrieval (IR) procedure in semantic management of MCPs. We present herein the methodology followed for the fuzzy expansion of the ontology model, the fuzzy query expansion procedure, as well as an appropriate ontology-based Vector Space Model (VSM) that was constructed for efficient mapping of user-defined MCP search criteria and MCP acquired knowledge. The relevant fuzzy thesaurus is constructed by calculating the simultaneous occurrences of terms and the term-to-term similarities derived from the ontology that utilizes UMLS (Unified Medical Language System) concepts by using Concept Unique Identifiers (CUI), synonyms, semantic types, and broader-narrower relationships for fuzzy query expansion. The current approach constitutes a sophisticated advance for effective, semantics-based MCP-related IR.
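Fuzzy query expansion of the kind described adds thesaurus terms whose ontology-derived similarity to a query term passes a threshold, keeping the membership degree as a term weight. A minimal sketch with a hypothetical UMLS-style thesaurus entry (term names and similarities are invented):

```python
def expand_query(terms, thesaurus, threshold=0.6):
    """Fuzzy query expansion: add thesaurus terms whose similarity to
    an original query term meets the threshold, keeping the membership
    degree as the expanded term's weight."""
    expanded = {t: 1.0 for t in terms}
    for t in terms:
        for related, sim in thesaurus.get(t, {}).items():
            if sim >= threshold and related not in expanded:
                expanded[related] = sim
    return expanded

# Hypothetical similarity entries (synonymy, broader-narrower links):
thesaurus = {"myocardial infarction": {"heart attack": 0.95,
                                       "chest pain": 0.4}}
q = expand_query(["myocardial infarction"], thesaurus)
```

The weighted terms then feed an ontology-based vector space model, so close synonyms influence retrieval almost as strongly as the original term, while weakly related terms are excluded.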
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Regulski, Krzysztof
2016-08-01
We present a process of semantic meta-model development for data management in an adaptable multiscale modeling framework. The main problems in ontology design are discussed, and a solution achieved as a result of the research is presented. The main concepts concerning the application and data management background for multiscale modeling were derived from the AM3 approach—object-oriented Agile multiscale modeling methodology. The ontological description of multiscale models enables validation of semantic correctness of data interchange between submodels. We also present a possibility of using the ontological model as a supervisor in conjunction with a multiscale model controller and a knowledge base system. Multiscale modeling formal ontology (MMFO), designed for describing multiscale models' data and structures, is presented. A need for applying meta-ontology in the MMFO development process is discussed. Examples of MMFO application in describing thermo-mechanical treatment of metal alloys are discussed. Present and future applications of MMFO are described.
Incremental Query Rewriting with Resolution
NASA Astrophysics Data System (ADS)
Riazanov, Alexandre; Aragão, Marcelo A. T.
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a resolution-based first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent translation of these schematic answers to SQL queries which are evaluated using a conventional relational DBMS. We call our method incremental query rewriting, because an original semantic query is rewritten into a (potentially infinite) series of SQL queries. In this chapter, we outline the main idea of our technique - using abstractions of databases and constrained clauses for deriving schematic answers, and provide completeness and soundness proofs to justify the applicability of this technique to the case of resolution for FOL without equality. The proposed method can be directly used with regular RDBs, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
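The final step, translating a schematic answer into a concrete SQL query, can be illustrated with a toy renderer (the table and column names are invented, and the chapter's constrained clauses are far more expressive than simple equality constraints):

```python
def schematic_answer_to_sql(table, columns, constraints):
    """Render one schematic answer, viewed here as a conjunction of
    equality constraints over a database abstraction, as an SQL query
    that a conventional relational DBMS can evaluate."""
    where = " AND ".join(f"{c} = {v!r}" for c, v in constraints.items())
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    return f"{sql} WHERE {where}" if where else sql

# Hypothetical schema, for illustration only:
sql = schematic_answer_to_sql("employee", ["name"], {"dept": "sales"})
```

Because the reasoner emits a (potentially infinite) stream of such schematic answers, each one becomes a separate SQL query, which is what makes the rewriting "incremental".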
Hulse, Nathan C; Long, Jie; Tao, Cui
2013-01-01
Infobuttons have been established to be an effective resource for addressing information needs at the point of care, as evidenced by recent research and their inclusion in government-based electronic health record incentive programs in the United States. Yet their utility has been limited to wide success for only a specific set of domains (lab data, medication orders, and problem lists) and only for discrete, singular concepts that are already documented in the electronic medical record. In this manuscript, we present an effort to broaden their utility by connecting a semantic web-based phenotyping engine with an infobutton framework in order to identify and address broader issues in patient data, derived from multiple data sources. We have tested these patterns by defining and testing semantic definitions of pre-diabetes and metabolic syndrome. We intend to carry forward relevant information to the infobutton framework to present timely, relevant education resources to patients and providers.
Target volume and artifact evaluation of a new data-driven 4D CT.
Martin, Rachael; Pan, Tinsu
Four-dimensional computed tomography (4D CT) is often used to define the internal gross target volume (IGTV) for radiation therapy of lung cancer. Traditionally, this technique requires the use of an external motion surrogate; however, a new, image-data-driven 4D CT has become available. This study aims to describe this data-driven 4D CT and compare target contours created with it to those created using standard 4D CT. Cine CT data of 35 patients undergoing stereotactic body radiation therapy were collected and sorted into phases using standard and data-driven 4D CT. IGTV contours were drawn using a semiautomated method on maximum intensity projection images of both 4D CT methods. Errors resulting from reproducibility of the method were characterized. A comparison of phase image artifacts was made using a normalized cross-correlation method that assigned a score from +1 (data-driven "better") to -1 (standard "better"). The volume difference between the data-driven and standard IGTVs was not significant (data driven was 2.1 ± 1.0% smaller, P = .08). The Dice similarity coefficient showed good similarity between the contours (0.949 ± 0.006). The mean surface separation was 0.4 ± 0.1 mm and the Hausdorff distance was 3.1 ± 0.4 mm. An average artifact score of +0.37 indicated that the data-driven method had significantly fewer and/or less severe artifacts than the standard method (P = 1.5 × 10⁻⁵ for difference from 0). On average, the difference between IGTVs derived from data-driven and standard 4D CT was not clinically relevant or statistically significant, suggesting data-driven 4D CT can be used in place of standard 4D CT without adjustments to IGTVs. The relatively large differences in some patients were usually attributed to limitations in automatic contouring or differences in artifacts. Artifact reduction and setup simplicity suggest a clinical advantage to data-driven 4D CT. Published by Elsevier Inc.
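The Dice similarity coefficient used to compare the IGTV contours is defined over voxel sets as 2|A ∩ B| / (|A| + |B|):

```python
def dice(voxels_a, voxels_b):
    """Dice similarity coefficient between two contours, each given as
    a set of voxel indices; 1.0 means identical contours."""
    if not voxels_a and not voxels_b:
        return 1.0
    return 2.0 * len(voxels_a & voxels_b) / (len(voxels_a) + len(voxels_b))

# Two toy contours sharing three of four voxels each:
a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(a, b)  # 2*3 / (4+4) = 0.75
```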
Miksys, N; Xu, C; Beaulieu, L; Thomson, R M
2015-08-07
This work investigates and compares CT image metallic artifact reduction (MAR) methods and tissue assignment schemes (TAS) for the development of virtual patient models for permanent implant brachytherapy Monte Carlo (MC) dose calculations. Four MAR techniques are investigated to mitigate seed artifacts from post-implant CT images of a homogeneous phantom and eight prostate patients: a raw sinogram approach using the original CT scanner data and three methods (simple threshold replacement (STR), 3D median filter, and virtual sinogram) requiring only the reconstructed CT image. Virtual patient models are developed using six TAS ranging from the AAPM-ESTRO-ABG TG-186 basic approach of assigning uniform density tissues (resulting in a model not dependent on MAR) to more complex models assigning prostate, calcification, and mixtures of prostate and calcification using CT-derived densities. The EGSnrc user-code BrachyDose is employed to calculate dose distributions. All four MAR methods eliminate bright seed spot artifacts, and the image-based methods provide comparable mitigation of artifacts compared with the raw sinogram approach. However, each MAR technique has limitations: STR is unable to mitigate low CT number artifacts, the median filter blurs the image which challenges the preservation of tissue heterogeneities, and both sinogram approaches introduce new streaks. Large local dose differences are generally due to differences in voxel tissue-type rather than mass density. The largest differences in target dose metrics (D90, V100, V150), over 50% lower compared to the other models, are when uncorrected CT images are used with TAS that consider calcifications. Metrics found using models which include calcifications are generally a few percent lower than prostate-only models. Generally, metrics from any MAR method and any TAS which considers calcifications agree within 6%. 
Overall, the studied MAR methods and TAS show promise for further retrospective MC dose calculation studies for various permanent implant brachytherapy treatments.
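Of the image-based MAR methods compared, simple threshold replacement (STR) is the easiest to sketch: voxels whose CT number exceeds a threshold (bright seed artifacts) are replaced with a nominal tissue value, which is also why STR cannot mitigate low CT-number artifacts. A toy version on a 2D list-of-lists image (threshold and fill value are illustrative):

```python
def simple_threshold_replacement(image, threshold, fill_value):
    """Simple threshold replacement (STR) metallic artifact reduction:
    replace voxels above the CT-number threshold with a nominal tissue
    value. Dark streaks below the threshold are left untouched."""
    return [[fill_value if v > threshold else v for v in row]
            for row in image]

# Toy CT numbers with two bright seed artifacts:
ct = [[40, 3000, 35], [30, 45, 2500]]
cleaned = simple_threshold_replacement(ct, threshold=1000, fill_value=40)
```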
Matsumura, Kenta; Rolfe, Peter; Lee, Jihyoung; Yamakoshi, Takehiro
2014-01-01
Recent progress in information and communication technologies has made it possible to measure heart rate (HR) and normalized pulse volume (NPV), which are important physiological indices, using only a smartphone. This has been achieved with reflection mode photoplethysmography (PPG), by using a smartphone’s embedded flash as a light source and the camera as a light sensor. Despite its widespread use, the method of PPG is susceptible to motion artifacts as physical displacements influence photon propagation phenomena and, thereby, the effective optical path length. Further, it is known that the wavelength of light used for PPG influences the photon penetration depth and we therefore hypothesized that influences of motion artifact could be wavelength-dependant. To test this hypothesis, we made measurements in 12 healthy volunteers of HR and NPV derived from reflection mode plethysmograms recorded simultaneously at three different spectral regions (red, green and blue) at the same physical location with a smartphone. We then assessed the accuracy of the HR and NPV measurements under the influence of motion artifacts. The analyses revealed that the accuracy of HR was acceptably high with all three wavelengths (all rs > 0.996, fixed biases: −0.12 to 0.10 beats per minute, proportional biases: r = −0.29 to 0.03), but that of NPV was the best with green light (r = 0.791, fixed biases: −0.01 arbitrary units, proportional bias: r = 0.11). Moreover, the signal-to-noise ratio obtained with green and blue light PPG was higher than that of red light PPG. These findings suggest that green is the most suitable color for measuring HR and NPV from the reflection mode photoplethysmogram under motion artifact conditions. We conclude that the use of green light PPG could be of particular benefit in ambulatory monitoring where motion artifacts are a significant issue. PMID:24618594
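Heart rate follows from the spacing of pulse peaks in the photoplethysmogram. A deliberately simple peak-counting sketch on a synthetic waveform (real PPG processing requires filtering and artifact rejection, which is precisely the paper's concern):

```python
import math

def heart_rate_bpm(signal, fs):
    """Estimate heart rate (beats/min) from a PPG waveform by locating
    local maxima above the signal mean and averaging the peak spacing.
    A toy detector, not the study's algorithm."""
    mean = sum(signal) / len(signal)
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > mean
             and signal[i] > signal[i - 1]
             and signal[i] >= signal[i + 1]]
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic 1.2 Hz (72 beats/min) pulse sampled at 50 Hz for 5 s:
sig = [math.sin(2 * math.pi * 1.2 * t / 50) for t in range(250)]
```

NPV, by contrast, requires the pulsatile (AC) and steady (DC) components of the light intensity, which is where the choice of wavelength studied in the paper matters most.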
A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction.
Kang, Eunhee; Min, Junhong; Ye, Jong Chul
2017-10-01
Due to the potential risk of inducing cancer, radiation exposure by X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, a high-quality reconstruction method from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we want to develop a new low-dose X-ray CT algorithm based on a deep-learning approach. We propose an algorithm which uses a deep convolutional neural network (CNN) which is applied to the wavelet transform coefficients of low-dose CT images. More specifically, using a directional wavelet transform to extract the directional component of artifacts and exploit the intra- and inter- band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient when used to remove noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place at the 2016 "Low-Dose CT Grand Challenge." To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction which has been rigorously evaluated and proven to be effective. 
In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from large data sets. Therefore, we believe that the proposed algorithm opens a new direction in the area of low-dose CT research. © 2017 American Association of Physicists in Medicine.
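The wavelet-band decomposition on which such denoisers operate can be illustrated, in heavily simplified 1-D form, with a single level of the orthonormal Haar transform (the paper uses a directional 2-D wavelet, not Haar):

```python
def haar_1d(x):
    """One level of the 1-D orthonormal Haar wavelet transform:
    pairwise approximation (low-pass) and detail (high-pass)
    coefficients. Input length must be even."""
    s = 2 ** 0.5
    approx = [(a + b) / s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_1d_inverse(approx, detail):
    """Invert one level of the Haar transform exactly."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out
```

A wavelet-domain denoiser modifies the detail coefficients (here, a CNN does so; classically, thresholding) and then inverts the transform, exploiting the fact that noise and structure separate better across bands than in the raw image.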
Dollfus, Sonia; Razafimandimby, Annick; Maiza, Olivier; Lebain, Pierrick; Brazo, Perrine; Beaucousin, Virginie; Lecardeur, Laurent; Delamillieure, Pascal; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie
2008-02-01
We and others have observed that patients with schizophrenia commonly present reduced left recruitment in semantic language brain regions. However, most studies include patients with both leftward and rightward lateralization for language. We investigated whether a cohort composed purely of patients with typical (leftward) lateralization presented reduced left recruitment in semantic regions during a language comprehension task. The goal was to reduce inter-subject variability and thus improve the resolution for studying functional abnormalities in the language network. Twenty-three patients with schizophrenia (DSM-IV) were matched with healthy subjects in age, sex, level of education, and handedness. All patients exhibited leftward lateralization for language. Functional MRI was performed as subjects listened to a story comprising characters and social interactions. Functional MRI signal variations were analyzed individually and compared between groups. Although no differences were observed in the recruitment of the semantic language network, patients with schizophrenia presented significantly lower signal variations than controls in the medial part of the left superior frontal gyrus (MF1) (x=-6, y=58, z=20; Z(score)=5.6; p<0.001 uncorrected). This region corresponds to the Theory of Mind (ToM) network. Only 5 of the 23 patients (21.7%), versus 21 of the 23 control subjects (91.3%), demonstrated a positive signal variation in this area. A left functional deficit was thus observed in a core region of the ToM network in patients with schizophrenia and typical lateralization for language. This functional defect could represent a neural basis for impaired social interaction and communication in patients with schizophrenia.
Building a comprehensive syntactic and semantic corpus of Chinese clinical texts.
He, Bin; Dong, Bin; Guan, Yi; Yang, Jinfeng; Jiang, Zhipeng; Yu, Qiubin; Cheng, Jianyi; Qu, Chunyan
2017-05-01
To build a comprehensive corpus covering syntactic and semantic annotations of Chinese clinical texts, with corresponding annotation guidelines and methods, and to develop tools trained on the annotated corpus, which supply baselines for research on Chinese texts in the clinical domain. An iterative annotation method was proposed to train annotators and to develop annotation guidelines. Then, by using annotation quality assurance measures, a comprehensive corpus was built, containing annotations of part-of-speech (POS) tags, syntactic tags, entities, assertions, and relations. Inter-annotator agreement (IAA) was calculated to evaluate the annotation quality, and a Chinese clinical text processing and information extraction system (CCTPIES) was developed based on our annotated corpus. The syntactic corpus consists of 138 Chinese clinical documents with 47,426 tokens and 2612 full parsing trees, while the semantic corpus includes 992 documents in which 39,511 entities, with their assertions, and 7693 relations were annotated. IAA evaluation shows that this comprehensive corpus is of good quality and that the system modules are effective. The annotated corpus makes a considerable contribution to natural language processing (NLP) research into Chinese texts in the clinical domain. However, the corpus has a number of limitations: additional types of clinical text should be introduced to improve corpus coverage, and active learning methods should be utilized to improve annotation efficiency. In this study, several annotation guidelines and an annotation method for Chinese clinical texts were proposed, and a comprehensive corpus and its NLP modules were constructed, providing a foundation for further study of applying NLP techniques to Chinese texts in the clinical domain. Copyright © 2017. Published by Elsevier Inc.
Zhu, Feifei; Zhang, Qinglin; Qiu, Jiang
2013-01-01
Creativity can be defined as the capacity of an individual to produce something original and useful. An important measurable component of creativity is divergent thinking. Despite existing studies on the cerebral structural basis of creativity, no study had used a large sample to investigate the relationship between individual verbal creativity and regional gray matter volumes (GMVs) and white matter volumes (WMVs). In the present work, optimized voxel-based morphometry (VBM) was employed to identify the structures whose volume correlates with verbal creativity (measured by the verbal form of the Torrance Tests of Creative Thinking) across the brain in young healthy subjects. Verbal creativity was found to be significantly positively correlated with regional GMV in the left inferior frontal gyrus (IFG), which is believed to be responsible for language production and comprehension, new semantic representation, and memory retrieval, and in the right IFG, which may be involved in inhibitory control and attention switching. A relationship between verbal creativity and regional WMV in the left and right IFG was also observed. Overall, a highly verbally creative individual with superior verbal skills may demonstrate greater computational efficiency in the brain areas involved in high-level cognitive processes, including language production, semantic representation, and cognitive control. PMID:24223921
The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition
NASA Astrophysics Data System (ADS)
Fong, Joseph; Cheung, San Kuen
In the present database market, the XML database model is a main structure for forthcoming database systems in the Internet environment. As a conceptual schema of an XML database, the XML model has limitations in presenting its data semantics, and system analysts have had no toolset for modeling and analyzing XML systems. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze the structure of an XML database. It is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents the inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; an explanation of the terms in the figure is given in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and XSD. The source language, called XSD-Source, mainly provides a user-friendly environment for writing an XSD. The source language is then translated by the XSD-Translator, whose output is an XSD, our target, called the object language.
Franca, Carolina da; Colares, Viviane
2010-06-01
The objective of this article is to translate, adapt, and validate the National College Health Risk Behavior Survey for application to Brazilian college students. A total of 208 college students from the Federal University of Pernambuco (UFPE) and the University of Pernambuco (UPE) participated in the study. The validation was carried out in five stages: (1) translation; (2) back-translation; (3) correction and semantic adaptation (cultural adaptation); (4) face validation; (5) test-retest. Adaptations were made to resolve semantic disagreements found between the translation and back-translation. After face validation, the questionnaire was reduced from 96 to 52 questions. Of the 11 items analyzed, the majority presented good to perfect Kappa: security and violence (Kappa=0.89); suicide (Kappa=1.00); tobacco use (Kappa=0.90); alcohol consumption (Kappa=0.78); cocaine and other drug consumption (Kappa=0.70); sexual behavior (Kappa=0.88); and body weight (Kappa=0.89). Only the item on diet presented weak inter-examiner Kappa (Kappa=0.26), and the topic on health information presented moderate Kappa (Kappa=0.56). The average Kappa over all items was good (0.76). The instrument may be considered validated in the Portuguese language in Brazil with acceptable reproducibility.
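The inter-examiner agreement statistic reported above (Cohen's kappa) corrects observed agreement between two raters for the agreement expected by chance; a minimal sketch:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the chance agreement from the marginals."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in labels) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Identical ratings yield kappa = 1.0, while agreement at exactly the chance level yields 0.0, which is why values near 0.26 (as for the diet item) indicate weak agreement despite nonzero raw agreement.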
"Leading Clocks Lag" and the de Broglie Wavelength
ERIC Educational Resources Information Center
Shuler, Robert L., Jr.
2016-01-01
The forgotten history of de Broglie waves, which are themselves artifacts of a Lorentz transform rather than physical lengths and frequencies to be transformed, causes confusion for students and others. In this paper the de Broglie wavelength is derived, and the dependence of the de Broglie frequency on velocity is explained in terms of an Einstein-synchronized reference frame…
Business Performer-Centered Design of User Interfaces
NASA Astrophysics Data System (ADS)
Sousa, Kênia; Vanderdonckt, Jean
Business Performer-Centered Design of User Interfaces is a new design methodology that adopts business process (BP) definition and a business performer perspective for managing the life cycle of user interfaces of enterprise systems. In this methodology, when the organization has a business process culture, the business processes of the organization are first defined according to a traditional methodology for this kind of artifact. These business processes are then transformed into a series of task models that represent the interactive parts of the business processes and that will ultimately lead to interactive systems. When the organization has its enterprise systems but not yet its business processes modeled, the user interfaces of the systems help derive task models, which are then used to derive the business processes. The double linking between a business process and a task model, and between a task model and a user interface model, makes it possible to ensure traceability of the artifacts along multiple paths and enables more active participation of business performers in analyzing the resulting user interfaces. In this paper, we outline how this human perspective is tied to a model-driven perspective.
Perceptually Guided Photo Retargeting.
Xia, Yingjie; Zhang, Luming; Hong, Richang; Nie, Liqiang; Yan, Yan; Shao, Ling
2016-04-22
We propose perceptually guided photo retargeting, which shrinks a photo by simulating a human's process of sequentially perceiving visually and semantically important regions in it. In particular, we first project the local features (graphlets in this paper) onto a semantic space, wherein visual cues such as global spatial layout and rough geometric context are exploited. Thereafter, a sparsity-constrained learning algorithm is derived to select semantically representative graphlets of a photo, and the selection process can be interpreted as a path that simulates how a human actively perceives semantics in a photo. Furthermore, we learn the prior distribution of such active graphlet paths (AGPs) from training photos marked as esthetically pleasing by multiple users. The learned priors enforce the corresponding AGP of a retargeted photo to be maximally similar to those from the training photos. On top of the retargeting model, we further design an online learning scheme to incrementally update the model with new photos that are esthetically pleasing. The online update module makes the algorithm less dependent on the number and contents of the initial training data. Experimental results show that: 1) the proposed AGP is over 90% consistent with human gaze-shifting paths, as verified by eye-tracking data, and 2) the retargeting algorithm significantly outperforms its competitors, as the AGP is more indicative of photo esthetics than conventional saliency maps.
Enhancing clinical concept extraction with distributional semantics
Cohen, Trevor; Wu, Stephen; Gonzalez, Graciela
2011-01-01
Extracting concepts (such as drugs, symptoms, and diagnoses) from clinical narratives constitutes a basic enabling technology to unlock the knowledge within and support more advanced reasoning applications such as diagnosis explanation, disease progression modeling, and intelligent analysis of the effectiveness of treatment. The recent release of annotated training sets of de-identified clinical narratives has contributed to the development and refinement of concept extraction methods. However, as the annotation process is labor-intensive, training data are necessarily limited in the concepts and concept patterns covered, which impacts the performance of supervised machine learning applications trained with these data. This paper proposes an approach to minimize this limitation by combining supervised machine learning with empirical learning of semantic relatedness from the distribution of the relevant words in additional unannotated text. The approach uses a sequential discriminative classifier (Conditional Random Fields) to extract the mentions of medical problems, treatments and tests from clinical narratives. It takes advantage of all Medline abstracts indexed as being of the publication type “clinical trials” to estimate the relatedness between words in the i2b2/VA training and testing corpora. In addition to the traditional features such as dictionary matching, pattern matching and part-of-speech tags, we also used as a feature words that appear in similar contexts to the word in question (that is, words that have a similar vector representation measured with the commonly used cosine metric, where vector representations are derived using methods of distributional semantics). To the best of our knowledge, this is the first effort exploring the use of distributional semantics, the semantics derived empirically from unannotated text often using vector space models, for a sequence classification task such as concept extraction. 
Therefore, we first experimented with different sliding window models and found the model with parameters that led to best performance in a preliminary sequence labeling task. The evaluation of this approach, performed against the i2b2/VA concept extraction corpus, showed that incorporating features based on the distribution of words across a large unannotated corpus significantly aids concept extraction. Compared to a supervised-only approach as a baseline, the micro-averaged f-measure for exact match increased from 80.3% to 82.3% and the micro-averaged f-measure based on inexact match increased from 89.7% to 91.3%. These improvements are highly significant according to the bootstrap resampling method and also considering the performance of other systems. Thus, distributional semantic features significantly improve the performance of concept extraction from clinical narratives by taking advantage of word distribution information obtained from unannotated data. PMID:22085698
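The distributional feature described above rests on comparing words by the cosine similarity of their co-occurrence vectors. A minimal sketch, assuming a simple symmetric context window over tokenized sentences (the `context_vectors` and `cosine` helpers are illustrative, not the paper's pipeline):

```python
import math
from collections import Counter, defaultdict

def context_vectors(sentences, window=2):
    """Represent each word by the counts of words appearing within
    +/-window positions of it across all sentences."""
    vecs = defaultdict(Counter)
    for tokens in sentences:
        for i, w in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vecs[w][tokens[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Words that occur in similar contexts (e.g., two drug names appearing in the same sentence frames) end up with high cosine similarity even if they never co-occur with each other, which is what lets unannotated text supply features for words missing from the training corpus.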
Topography of the Lunar Poles and Application to Geodesy with the Lunar Reconnaissance Orbiter
NASA Technical Reports Server (NTRS)
Mazarico, Erwan; Neumann, Gregory A.; Rowlands, David D.; Smith, David E.; Zuber, Maria T.
2012-01-01
The Lunar Orbiter Laser Altimeter (LOLA) [1] onboard the Lunar Reconnaissance Orbiter (LRO) [2] has been operating continuously since July 2009 [3], accumulating approximately 5.4 billion measurements from 2 billion on-orbit laser shots. LRO's near-polar orbit results in very high data density in the immediate vicinity of the lunar poles, which are each sampled every 2 h. With more than 10,000 orbits, high-resolution maps can be constructed [4] and studied [5]. However, this requires careful processing of the raw data, as subtle errors in the spacecraft position and pointing can lead to visible artifacts in the final map. In other locations on the Moon, ground tracks are subparallel and longitudinal separations are typically a few hundred meters. Near the poles, the track intersection angles can be large and the inter-track spacing is small (above 80° latitude, the effective resolution is better than 50 m). Precision Orbit Determination (POD) of the LRO spacecraft [6] was performed to satisfy the LOLA and LRO mission requirements, which led to a significant improvement in orbit position knowledge over the short-release navigation products. However, with pixel resolutions of 10 to 25 meters, artifacts due to orbit reconstruction still exist. Here, we show how the complete LOLA dataset at both poles can be adjusted geometrically to produce high-accuracy, high-resolution maps with minimal track artifacts. We also describe how those maps can then feed back into the POD work by providing topographic base maps with which individual LOLA altimetric measurements can contribute to orbit corrections. These direct altimetry constraints improve accuracy and can be used more simply than the altimetric crossovers [6].
Hosseini, Zahra; Liu, Junmin; Solovey, Igor; Menon, Ravi S; Drangova, Maria
2017-04-01
To implement and optimize a new approach for susceptibility-weighted image (SWI) generation from multi-echo multi-channel image data and to compare its performance against optimized traditional SWI pipelines. Five healthy volunteers were imaged at 7 Tesla. The inter-echo-variance (IEV) channel combination, which uses the variance of the local frequency shift at multiple echo times as a weighting factor during channel combination, was used to calculate multi-echo local phase shift maps. Linear phase masks were combined with the magnitude to generate IEV-SWI. The performance of the IEV-SWI pipeline was compared with that of two accepted SWI pipelines: channel combination followed by (i) homodyne filtering (HPH-SWI) and (ii) unwrapping and high-pass filtering (SVD-SWI). The filtering steps of each pipeline were optimized. Contrast-to-noise ratio was used as the comparison metric. Qualitative assessment of artifact and vessel conspicuity was performed, and the processing time of the pipelines was evaluated. The optimized IEV-SWI pipeline (σ = 7 mm) resulted in continuous vessel visibility throughout the brain. IEV-SWI had significantly higher contrast compared with HPH-SWI and SVD-SWI (P < 0.001, Friedman nonparametric test). Residual background fields and phase wraps in HPH-SWI and SVD-SWI corrupted the vessel signal and/or generated vessel-mimicking artifact. The optimized implementation of the IEV-SWI pipeline processed a six-echo 16-channel dataset in under 10 min. IEV-SWI benefits from channel-by-channel processing of phase data and results in high-contrast images with an optimal balance between contrast and background noise removal, thereby presenting evidence of the importance of the order in which postprocessing techniques are applied for multi-channel SWI generation. J. Magn. Reson. Imaging 2017;45:1113-1124. © 2016 International Society for Magnetic Resonance in Medicine.
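The IEV weighting idea can be sketched as inverse-variance weighting across channels. This is a simplified stand-in for the published method, which weights by the variance of local frequency shift estimates across echoes after phase processing; here the variance is taken directly over the echo axis of a phase array:

```python
import numpy as np

def iev_combine(phase, eps=1e-12):
    """Inverse-variance channel combination: `phase` has shape
    (n_channels, n_echoes, ...). Channels whose values are stable
    across echoes (low variance) receive higher weight."""
    var = phase.var(axis=1)                  # variance over echoes, per channel
    w = 1.0 / (var + eps)                    # inverse-variance weights
    w = w / w.sum(axis=0, keepdims=True)     # normalize over channels
    return (w[:, None] * phase).sum(axis=0)  # weighted combination -> (n_echoes, ...)
```

A channel that is perfectly consistent across echoes dominates the combination, which is the intended behavior: unreliable channels (e.g., with poor SNR in a region) are suppressed locally rather than globally.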
Dermoscopic Image Segmentation via Multistage Fully Convolutional Networks.
Bi, Lei; Kim, Jinman; Ahn, Euijoon; Kumar, Ashnil; Fulham, Michael; Feng, Dagan
2017-09-01
Segmentation of skin lesions is an important step in the automated computer-aided diagnosis of melanoma. However, existing segmentation methods have a tendency to over- or under-segment the lesions and perform poorly when the lesions have fuzzy boundaries, low contrast with the background, inhomogeneous textures, or contain artifacts. Furthermore, the performance of these methods is heavily reliant on the appropriate tuning of a large number of parameters as well as the use of effective preprocessing techniques, such as illumination correction and hair removal. We propose to leverage fully convolutional networks (FCNs) to automatically segment the skin lesions. FCNs are a neural network architecture that achieves object detection by hierarchically combining low-level appearance information with high-level semantic information. We address the issue of FCNs producing coarse segmentation boundaries for challenging skin lesions (e.g., those with fuzzy boundaries and/or low difference in the textures between the foreground and the background) through a multistage segmentation approach in which multiple FCNs learn complementary visual characteristics of different skin lesions; early-stage FCNs learn coarse appearance and localization information while late-stage FCNs learn the subtle characteristics of the lesion boundaries. We also introduce a new parallel integration method to combine the complementary information derived from individual segmentation stages to achieve a final segmentation result that has accurate localization and well-defined lesion boundaries, even for the most challenging skin lesions. We achieved an average Dice coefficient of 91.18% on the ISBI 2016 Skin Lesion Challenge dataset and 90.66% on the PH2 dataset. Our extensive experimental results on two well-established public benchmark datasets demonstrate that our method is more effective than other state-of-the-art methods for skin lesion segmentation.
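The Dice coefficient reported above measures the overlap between a predicted binary mask and the ground-truth mask; a minimal sketch:

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks given as flat 0/1
    sequences: Dice = 2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2.0 * inter / size if size else 1.0
```

A score of 1.0 means perfect overlap and 0.0 means none; unlike plain pixel accuracy, Dice is not inflated by the large background region that dominates most dermoscopic images.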
PCPPI: a comprehensive database for the prediction of Penicillium-crop protein-protein interactions.
Yue, Junyang; Zhang, Danfeng; Ban, Rongjun; Ma, Xiaojing; Chen, Danyang; Li, Guangwei; Liu, Jia; Wisniewski, Michael; Droby, Samir; Liu, Yongsheng
2017-01-01
Penicillium expansum, the causal agent of blue mold, is one of the most prevalent post-harvest pathogens, infecting a wide range of crops after harvest. In response, crops have evolved various defense systems to protect themselves against this and other pathogens. The Penicillium-crop interaction is a multifaceted process mediated by pathogen- and host-derived proteins. Identification and characterization of the inter-species protein-protein interactions (PPIs) are fundamental to elucidating the molecular mechanisms underlying the infection processes between P. expansum and plant crops. Here, we have developed PCPPI, the Penicillium-Crop Protein-Protein Interactions database, which is constructed from experimentally determined orthologous interactions in pathogen-plant systems and the available domain-domain interactions (DDIs) in each PPI. Thus far, it stores information on 9911 proteins, 439,904 interactions, and seven host species: apple, kiwifruit, maize, pear, rice, strawberry, and tomato. Further analysis through gene ontology (GO) annotation indicated that proteins with more interacting partners tend to execute essential functions. Significantly, semantic statistics of the GO terms also provided strong support for the accuracy of the predicted interactions in PCPPI. We believe that the PCPPI datasets will facilitate the study of pathogen-crop interactions; they are freely available to the research community at http://bdg.hfut.edu.cn/pcppi/index.html. © The Author(s) 2017. Published by Oxford University Press.
Kramer, Harald; Michaely, Henrik J; Matschl, Volker; Schmitt, Peter; Reiser, Maximilian F; Schoenberg, Stefan O
2007-06-01
Recent developments in hardware and software help to significantly increase the image quality of magnetic resonance angiography (MRA). Parallel acquisition techniques (PAT) help to increase spatial resolution and to decrease acquisition time, but suffer from a decrease in signal-to-noise ratio (SNR). The move to higher field strengths and the use of dedicated angiography coils can further increase spatial resolution while decreasing acquisition times at the same SNR as contemporary exams. The goal of our study was to compare the image quality of MRA datasets acquired with a standard matrix coil against MRA datasets acquired with a dedicated peripheral angiography matrix coil and higher parallel imaging factors. Before the first volunteer examination, unaccelerated phantom measurements were performed with the different coils. After institutional review board approval, 15 healthy volunteers underwent MRA of the lower extremity on a 32-channel 3.0 Tesla MR system. In 5 of them, MRA of the calves was performed with a PAT acceleration factor of 2 and a standard body-matrix surface coil placed at the legs. Ten volunteers underwent MRA of the calves with a dedicated 36-element angiography matrix coil: 5 with a PAT acceleration factor of 3 and 5 with a PAT acceleration factor of 4. The acquired volume and acquisition time were approximately the same in all examinations; only the spatial resolution increased with the acceleration factor. The acquisition time per voxel was calculated. Image quality was rated independently by 2 readers in terms of vessel conspicuity, venous overlay, and occurrence of artifacts. Inter-reader agreement was calculated by kappa statistics. SNR and contrast-to-noise ratios from the different examinations were evaluated. All 15 volunteers completed the examination, and no adverse events occurred. None of the examinations showed venous overlay; 70% of the examinations showed excellent vessel conspicuity, whereas artifacts occurred in 50% of the examinations. All of these artifacts were judged as non-disturbing. Inter-reader agreement was good, with kappa values ranging between 0.65 and 0.74. SNR and contrast-to-noise ratios did not show significant differences. Implementation of a dedicated coil for peripheral MRA at 3.0 Tesla helps to increase spatial resolution and to decrease acquisition time while keeping image quality equal. Venous overlay can be effectively avoided despite the use of high-resolution scans.
Axial Cone Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering
Tang, Shaojie; Tang, Xiangyang
2016-01-01
Goal: The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical reconstruction from cone beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. Methods: The solution is an integration of the three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms, which are rigorous in inspecting reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Results: Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts in the images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm at off-central planes. Conclusion: Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. Significance: The proposed 3D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications. PMID:26660512
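The common algorithmic feature of both methods, Hilbert filtering, can be sketched in one dimension via the frequency domain (a generic illustration of the discrete Hilbert transform, not the paper's reconstruction code):

```python
import numpy as np

def hilbert_filter(signal):
    """Discrete 1D Hilbert filtering via the DFT: multiply positive
    frequencies by -1j and negative frequencies by +1j (DC -> 0)."""
    n = len(signal)
    freqs = np.fft.fftfreq(n)
    h = -1j * np.sign(freqs)          # np.sign(0) = 0, so DC is zeroed
    return np.real(np.fft.ifft(np.fft.fft(signal) * h))
```

With this sign convention, the Hilbert filter maps a cosine tone to the corresponding sine, i.e., it applies a 90° phase shift to every frequency component, which is the operation BPF/DBPF apply along the filtering direction.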
The Evolution of Social and Semantic Networks in Epistemic Communities
ERIC Educational Resources Information Center
Margolin, Drew Berkley
2012-01-01
This study describes and tests a model of scientific inquiry as an evolving, organizational phenomenon. Arguments are derived from organizational ecology and evolutionary theory. The empirical subject of study is an "epistemic community" of scientists publishing on a research topic in physics: the string theoretic concept of…
ERIC Educational Resources Information Center
Gorayska, Barbara
1978-01-01
Techniques of teaching the English finite verb to speakers of other languages must account for meaning that is signalled by the structure alone and meaning derived from the context. Accordingly, this study attempts to distinguish the semantic components of the finite verb structure. The structure is viewed as being always composed of the following…
Aspects of the Internal Structure of Nominalization: Roots, Morphology and Derivation
ERIC Educational Resources Information Center
Punske, Jeffrey
2012-01-01
This dissertation uses syntactic, semantic and morphological evidence from English nominalization to probe the interaction of event-structure and syntax, develop a typology of structural complexity within nominalization, and test hypotheses about the strict ordering of functional items. I focus on the widely assumed typology of nominalization…
Lesion-Site Affects Grammatical Gender Assignment in German: Perception and Production Data
ERIC Educational Resources Information Center
Hofmann, Juliane; Kotz, Sonja A.; Marschhauser, Anke; von Cramon, D. Yves; Friederici, Angela D.
2007-01-01
Two experiments investigated phonological, derivational-morphological and semantic aspects of grammatical gender assignment in a perception and a production task in German aphasic patients and age-matched controls. The agreement of a gender indicating adjective (feminine, masculine or neuter) and a noun was evaluated during perception in…
Embodied Simulations Are Modulated by Sentential Perspective
ERIC Educational Resources Information Center
van Dam, Wessel O.; Desai, Rutvik H.
2017-01-01
There is considerable evidence that language comprehenders derive lexical-semantic meaning by mentally simulating perceptual and motor attributes of described events. However, the nature of these simulations--including the level of detail that is incorporated and contexts under which simulations occur--is not well understood. Here, we examine the…
Chang, Guoping; Chang, Tingting; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2010-12-01
Respiratory motion artifacts and partial volume effects (PVEs) are two degrading factors that affect the accuracy of image quantification in PET/CT imaging. In this article, the authors propose a joint motion and PVE correction (JMPC) approach to improve PET quantification by simultaneously correcting for respiratory motion artifacts and PVE in patients with lung/thoracic cancer. The objective of this article is to describe this approach and evaluate its performance using phantom and patient studies. The proposed joint correction approach incorporates a model of motion blurring, PVE, and object size/shape. A motion blurring kernel (MBK) is then estimated by deconvolution of the joint model, while the activity concentration (AC) of the tumor is estimated from the normalization of the derived MBK. To evaluate the performance of this approach, two phantom studies and eight patient studies were performed. In the phantom studies, two motion waveforms, a linear sinusoidal motion and a circular motion, were used to control the motion of a sphere, while in the patient studies, all participants were instructed to breathe regularly. For the phantom studies, the resultant MBK was compared to the true MBK by measuring the correlation coefficient between the two kernels. The measured sphere AC derived from the proposed method was compared to the true AC as well as to the ACs in images exhibiting PVE only and in images exhibiting both PVE and motion blurring. For the patient studies, the resultant MBK was compared to the motion extent derived from a 4D-CT study, while the measured tumor AC was compared to the AC in images exhibiting both PVE and motion blurring. For the phantom studies, the estimated MBK approximated the true MBK with an average correlation coefficient of 0.91. The tumor ACs following the joint correction technique were similar to the true AC, with an average difference of 2%.
Furthermore, the tumor ACs on the PVE only images and images with both motion blur and PVE effects were, on average, 75% and 47.5% (10%) of the true AC, respectively, for the linear (circular) motion phantom study. For the patient studies, the maximum and mean AC/SUV on the PET images following the joint correction are, on average, increased by 125.9% and 371.6%, respectively, when compared to the PET images with both PVE and motion. The motion extents measured from the derived MBK and 4D-CT exhibited an average difference of 1.9 mm. The proposed joint correction approach can improve the accuracy of PET quantification by simultaneously compensating for the respiratory motion artifacts and PVE in lung/thoracic PET/CT imaging.
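The core of the approach above is that a measured image is the static object convolved with an unknown motion blurring kernel (MBK), so the MBK can be recovered by deconvolution and its normalization used to rescale the blurred activity. A minimal 1-D sketch of that idea (our illustration, not the authors' implementation) is:

```python
import numpy as np

def estimate_mbk_and_scale(measured, static_obj, eps=1e-6):
    """Model the measured profile as static_obj convolved with an unknown
    motion blurring kernel (MBK); recover the MBK by Fourier deconvolution
    and return its total weight, which should be 1 for a count-conserving
    blur (the normalization step used to recover the true AC)."""
    F_meas = np.fft.fft(measured)
    F_obj = np.fft.fft(static_obj)
    mbk = np.real(np.fft.ifft(F_meas / (F_obj + eps)))
    return mbk, mbk.sum()

# 64-sample "tumor" profile with true AC = 10, blurred over 3 motion positions
obj = np.zeros(64)
obj[28:36] = 10.0
kernel = np.zeros(64)
kernel[[0, 1, 2]] = 1.0 / 3.0  # uniform 3-sample motion blur
measured = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(kernel)))
mbk, scale = estimate_mbk_and_scale(measured, obj)
```

Because blurring conserves total counts, the recovered kernel weight is close to 1 even where the deconvolution itself is ill-conditioned.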
DOT National Transportation Integrated Search
2010-10-01
In this report, we study information propagation via inter-vehicle communication along two parallel roads. By identifying an inherent Bernoulli process, we are able to derive the mean and variance of propagation distance. A road separation distan...
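The Bernoulli-process view can be illustrated with a toy single-road simplification (our illustration; the report's model covers two parallel roads and a derived closed form): vehicles occupy unit-spaced cells independently with probability p, and a message propagates while the next vehicle is within radio range.

```python
import random

def propagation_distance(p, radio_range, n_cells=100_000, rng=random):
    """Distance a message travels when vehicle presence at unit-spaced
    cells is Bernoulli(p) and each hop can span at most radio_range cells."""
    front, cell = 0, 1
    while cell - front <= radio_range and cell < n_cells:
        if rng.random() < p:
            front = cell      # message hops forward to this vehicle
        cell += 1
    return front

# Monte-Carlo estimate of the mean and variance of propagation distance
random.seed(7)
samples = [propagation_distance(p=0.3, radio_range=3) for _ in range(2000)]
mean_d = sum(samples) / len(samples)
var_d = sum((s - mean_d) ** 2 for s in samples) / len(samples)
```

Propagation dies exactly when a run of radio_range consecutive empty cells occurs, which is what makes the Bernoulli structure analytically tractable.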
NASA Giovanni: A Tool for Visualizing, Analyzing, and Inter-Comparing Soil Moisture Data
NASA Technical Reports Server (NTRS)
Teng, William; Rui, Hualan; Vollmer, Bruce; deJeu, Richard; Fang, Fan; Lei, Guang-Dih
2012-01-01
There are many existing satellite soil moisture algorithms and their derived data products, but there is no simple way for a user to inter-compare the products or analyze them together with other related data (e.g., precipitation). An environment that facilitates such inter-comparison and analysis would be useful for validation of satellite soil moisture retrievals against in situ data and for determining the relationships between different soil moisture products. The latter relationships are particularly important for applications users, for whom the continuity of soil moisture data, from whatever source, is critical. A recent example was provided by the sudden demise of EOS Aqua AMSR-E and the end of its soil moisture data production, as well as the end of other soil moisture products that had used the AMSR-E brightness temperature data. The purpose of the current effort is to create an environment, as part of the NASA Giovanni family of portals, that facilitates inter-comparisons of soil moisture algorithms and their derived data products.
Dialog detection in narrative video by shot and face analysis
NASA Astrophysics Data System (ADS)
Kroon, B.; Nesvadba, J.; Hanjalic, A.
2007-01-01
The proliferation of captured personal and broadcast content in personal consumer archives necessitates comfortable access to stored audiovisual content. Intuitive retrieval and navigation solutions require, however, a semantic level that cannot be reached by generic multimedia content analysis alone. A fusion with film grammar rules can help to boost the reliability significantly. The current paper describes the fusion of low-level content analysis cues, including face parameters and inter-shot similarities, to segment commercial content into film-grammar-rule-based entities and subsequently classify those sequences into so-called shot reverse shots, i.e., dialog sequences. Moreover, shot-reverse-shot-specific mid-level cues are analyzed, augmenting the shot reverse shot information with dialog-specific descriptions.
First Steps to Automated Interior Reconstruction from Semantically Enriched Point Clouds and Imagery
NASA Astrophysics Data System (ADS)
Obrock, L. S.; Gülch, E.
2018-05-01
The automated generation of a BIM model from sensor data is a huge challenge for the modeling of existing buildings. Currently, the measurements and analyses are time-consuming, allow little automation, and require expensive equipment; an automated acquisition of semantic information about objects in a building is still lacking. We present first results of our approach, based on imagery and derived products, aiming at a more automated modeling of interiors for a BIM building model. We examine the building parts and objects visible in the collected images using deep learning methods based on convolutional neural networks. For localization and classification of building parts, we apply the FCN8s model for pixel-wise semantic segmentation. So far, we reach a pixel accuracy of 77.2 % and a mean intersection over union of 44.2 %. We then use the network for further reasoning on the images of the interior room. We combine the segmented images with the original images and use photogrammetric methods to produce a three-dimensional point cloud. We code the extracted object types as colours of the 3D points and are thus able to uniquely classify the points in three-dimensional space. As a preliminary step, we investigate a simple extraction method for the colour and material of building parts. It is shown that the combined images are very well suited to extracting further semantic information for the BIM model. With the presented methods we see a sound basis for further automation of the acquisition and modeling of semantic and geometric information of interior rooms for a BIM model.
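The two metrics reported above (pixel accuracy and mean intersection over union) have standard definitions for per-pixel class labels; the following sketch shows those definitions, not the authors' evaluation code.

```python
import numpy as np

def pixel_accuracy_and_miou(pred, gt, n_classes):
    """Pixel accuracy = fraction of correctly labeled pixels; mean IoU =
    intersection/union averaged over classes present in prediction or
    ground truth (classes absent from both are skipped)."""
    pred, gt = np.asarray(pred).ravel(), np.asarray(gt).ravel()
    pixel_acc = float(np.mean(pred == gt))
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious.append(inter / union)
    return pixel_acc, float(np.mean(ious))

# Tiny example: 6 pixels, 3 classes
gt   = [0, 0, 1, 1, 2, 2]
pred = [0, 0, 1, 2, 2, 2]
acc, miou = pixel_accuracy_and_miou(pred, gt, n_classes=3)
```

Here acc = 5/6, and the per-class IoUs are 1.0, 1/2, and 2/3, giving mean IoU ≈ 0.722.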
Modeling and formal representation of geospatial knowledge for the Geospatial Semantic Web
NASA Astrophysics Data System (ADS)
Huang, Hong; Gong, Jianya
2008-12-01
GML can achieve geospatial interoperation only at the syntactic level. In most cases, however, differences in spatial cognition must be resolved first, so ontologies were introduced to describe geospatial information and services. Still, it is difficult and often impractical to require users themselves to find, match, and compose services, especially when complicated business logic is involved. Currently, with the gradual introduction of Semantic Web technology (e.g., OWL, SWRL), the focus of the interoperation of geospatial information has shifted from the syntactic level to the semantic and even automatic, intelligent level. In this way, the Geospatial Semantic Web (GSM) can be put forward as an augmentation to the Semantic Web that additionally includes geospatial abstractions as well as related reasoning, representation, and query mechanisms. To advance the implementation of GSM, we first attempt to construct a mechanism for the modeling and formal representation of geospatial knowledge, which are also the two most foundational phases in knowledge engineering (KE). Our attitude in this paper is quite pragmatic: we argue that geospatial context is a formal model of the discriminating environmental characteristics of geospatial knowledge, and that the derivation, understanding, and use of geospatial knowledge are located in geospatial context. Therefore, first, we put forward a primitive hierarchy of geospatial knowledge referencing first-order logic, formal ontologies, rules, and GML. Second, a metamodel of geospatial context is proposed, and we use the modeling methods and representation languages of formal ontologies to process geospatial context. Third, we extend the Web Processing Service (WPS) to be compatible with local DLLs for geoprocessing and to possess inference capability based on OWL.
Welker, Kirk M; De Jesus, Reordan O; Watson, Robert E; Machulda, Mary M; Jack, Clifford R
2012-10-01
To test the hypothesis that leukoaraiosis alters functional activation during a semantic decision language task. With institutional review board approval and written informed consent, 18 right-handed, cognitively healthy elderly participants with an aggregate leukoaraiosis lesion volume of more than 25 cm(3) and 18 age-matched control participants with less than 5 cm(3) of leukoaraiosis underwent functional MR imaging to allow comparison of activation during semantic decisions with that during visual perceptual decisions. Brain statistical maps were derived from the general linear model. Spatially normalized group t maps were created from individual contrast images. A cluster extent threshold of 215 voxels was used to correct for multiple comparisons. Intergroup random effects analysis was performed. Language laterality indexes were calculated for each participant. In control participants, semantic decisions activated the bilateral visual cortex, left posteroinferior temporal lobe, left posterior cingulate gyrus, left frontal lobe expressive language regions, and left basal ganglia. Visual perceptual decisions activated the right parietal and posterior temporal lobes. Participants with leukoaraiosis showed reduced activation in all regions associated with semantic decisions; however, activation associated with visual perceptual decisions increased in extent. Intergroup analysis showed significant activation decreases in the left anterior occipital lobe (P=.016), right posterior temporal lobe (P=.048), and right basal ganglia (P=.009) in participants with leukoaraiosis. Individual participant laterality indexes showed a strong trend (P=.059) toward greater left lateralization in the leukoaraiosis group. Moderate leukoaraiosis is associated with atypical functional activation during semantic decision tasks. Consequently, leukoaraiosis is an important confounding variable in functional MR imaging studies of elderly individuals. © RSNA, 2012.
Xue, Jin; Liu, Tongtong; Marmolejo-Ramos, Fernando; Pei, Xuna
2017-01-01
The present study aimed at distinguishing the processing of early learned L2 words from late learned ones in Chinese natives who learn English as a foreign language. Specifically, we examined whether the age of acquisition (AoA) effect arises during the arbitrary mapping from conceptual knowledge onto linguistic units. Behavioral and ERP data were collected while 28 Chinese-English bilinguals performed semantic relatedness judgments on word pairs representing three stages of word learning (i.e., primary school, junior and senior high school). A 3 (AoA: early vs. intermediate vs. late) × 2 (regularity: regular vs. irregular) × 2 (semantic relatedness: related vs. unrelated) × 2 (hemisphere: left vs. right) × 3 (brain area: anterior vs. central vs. posterior) within-subjects design was adopted. Results from the analysis of N100 and N400 amplitudes showed that early learned words had an advantage in processing accuracy and speed; there was a tendency for the AoA effect to be more pronounced for irregular word pairs and in the semantically related condition. More importantly, ERP results showed that early acquired words induced larger N100 amplitudes in the parietal area and more negative-going N400 in the frontal and central regions than late acquired words. The results indicate that the locus of the AoA effect might derive from the arbitrary mapping between word forms and semantic concepts, and that early acquired words have more semantic interconnections than late acquired words. PMID:28572785
Intrinsic functional network architecture of human semantic processing: Modules and hubs.
Xu, Yangwen; Lin, Qixiang; Han, Zaizhu; He, Yong; Bi, Yanchao
2016-05-15
Semantic processing entails the activation of widely distributed brain areas across the temporal, parietal, and frontal lobes. To understand the functional structure of this semantic system, we examined its intrinsic functional connectivity pattern using a database of 146 participants. Focusing on areas consistently activated during semantic processing generated from a meta-analysis of 120 neuroimaging studies (Binder et al., 2009), we found that these regions were organized into three stable modules corresponding to the default mode network (Module DMN), the left perisylvian network (Module PSN), and the left frontoparietal network (Module FPN). These three dissociable modules were integrated by multiple connector hubs: the left angular gyrus (AG) and the left superior/middle frontal gyrus linking all three modules, the left anterior temporal lobe linking Modules DMN and PSN, the left posterior portion of dorsal intraparietal sulcus (IPS) linking Modules DMN and FPN, and the left posterior middle temporal gyrus (MTG) linking Modules PSN and FPN. Provincial hubs, which converge local information within each system, were also identified: the bilateral posterior cingulate cortices/precuneus and the bilateral border area of the posterior AG and the superior lateral occipital gyrus for Module DMN; the left supramarginal gyrus, the middle part of the left MTG, and the left orbital inferior frontal gyrus (IFG) for Module PSN; and the left triangular IFG and the left IPS for Module FPN. A neuro-functional model for semantic processing was derived based on these findings, incorporating the interactions of memory, language, and control. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Hsin-Chen; Lin, Chii-Jeng; Wu, Chia-Hsing; Wang, Chien-Kuo; Sun, Yung-Nien
2010-11-01
The Insall-Salvati ratio (ISR) is important for detecting two common clinical signs of knee disease: patella alta and patella baja. Furthermore, large inter-operator differences in ISR measurement make an objective measurement system necessary for better clinical evaluation. In this paper, we define three specific bony landmarks for determining the ISR and then propose an x-ray image analysis system to localize these landmarks and measure the ISR. Because inherent artifacts in x-ray images, such as unevenly distributed intensities, make landmark localization difficult, we propose a registration-assisted active-shape model (RAASM) to localize these landmarks. We first construct a statistical model from a set of training images based on x-ray image intensity and patella shape. Since a knee x-ray image contains specific anatomical structures, we then design an algorithm, based on edge tracing, for patella feature extraction in order to automatically align the model to the patella image. We can estimate the landmark locations as well as the ISR after registration-assisted model fitting. Our proposed method successfully overcomes drawbacks caused by x-ray image artifacts. Experimental results show great agreement between the ISRs measured by the proposed method and by orthopedic clinicians.
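Once the three bony landmarks are localized, the ISR itself is a simple ratio of two distances: patellar tendon length over patellar length. A sketch with illustrative (hypothetical) pixel coordinates:

```python
import math

def insall_salvati_ratio(sup_pole, inf_pole, tibial_tuberosity):
    """ISR = patellar tendon length / patellar length, computed from the
    three 2-D landmarks: superior patellar pole, inferior patellar pole,
    and the tibial tuberosity (tendon insertion)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    tendon = dist(inf_pole, tibial_tuberosity)   # inferior pole -> tuberosity
    patella = dist(sup_pole, inf_pole)           # pole-to-pole patellar length
    return tendon / patella

# Illustrative coordinates; an ISR near 1.0 is typically considered normal,
# with high values suggesting patella alta and low values patella baja.
isr = insall_salvati_ratio((100, 50), (110, 95), (130, 140))
```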
Nie, Jing; Mahato, Simpla; Zelhof, Andrew C
2015-02-03
Tissue fixation is crucial for preserving the morphology of biological structures and cytological details to prevent postmortem degradation and autolysis. Improper fixation conditions could lead to artifacts and thus incorrect conclusions in immunofluorescence or histology experiments. To resolve reported structural anomalies with respect to Drosophila photoreceptor cell organization we developed and utilized a combination of live imaging and fixed samples to investigate the exact biogenesis and to identify the underlying source for the reported discrepancies in structure. We found that piperazine-N,N'-bis(ethanesulfonic acid) (PIPES) and 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES), two zwitterionic buffers commonly used in tissue fixation, can cause severe lumen and cell morphological defects in Drosophila pupal and adult retina; the inter-rhabdomeral lumen becomes dilated and the photoreceptor cells are significantly reduced in size. Correspondingly, the localization pattern of Eyes shut (EYS), a luminal protein, is severely altered. In contrast, tissues fixed in phosphate-buffered saline (PBS) exhibit lumen and cell morphologies that are consistent with live imaging. We suggest that PIPES and HEPES buffers should be utilized with caution for fixation when examining the interplay between cells and their extracellular environment, especially in Drosophila pupal and adult retina research.
An Examination of the True Reliability of Lower Limb Stiffness Measures During Overground Hopping.
Diggin, David; Anderson, Ross; Harrison, Andrew J
2016-06-01
Evidence suggests reports describing the reliability of leg-spring (kleg) and joint stiffness (kjoint) measures are contaminated by artifacts originating from digital filtering procedures. In addition, the intraday reliability of kleg and kjoint requires investigation. This study examined the effects of experimental procedures on the inter- and intraday reliability of kleg and kjoint. Thirty-two participants completed 2 trials of single-legged hopping at 1.5, 2.2, and 3.0 Hz at the same time of day across 3 days. On the final test day a fourth experimental bout took place 6 hours before or after participants' typical testing time. Kinematic and kinetic data were collected throughout. Stiffness was calculated using models of kleg and kjoint. Classifications of measurement agreement were established using thresholds for absolute and relative reliability statistics. Results illustrated that kleg and kankle exhibited strong agreement. In contrast, kknee and khip demonstrated weak-to-moderate consistency. Results suggest limits in kjoint reliability persist despite employment of appropriate filtering procedures. Furthermore, diurnal fluctuations in lower-limb muscle-tendon stiffness exhibit little effect on intraday reliability. The present findings support the existence of kleg as an attractor state during hopping, achieved through fluctuations in kjoint variables. Limits to kjoint reliability appear to represent biological function rather than measurement artifact.
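Leg-spring stiffness (kleg) in studies like the one above is commonly estimated as peak vertical ground reaction force divided by peak centre-of-mass compression. A minimal sketch under a spring-mass assumption (our illustration; the paper's point is precisely that filtering choices, omitted here, affect such estimates):

```python
import numpy as np

def leg_stiffness(grf, mass, dt, v_td):
    """k_leg = peak GRF / peak CoM compression, with CoM displacement
    recovered by double integration of (GRF/m - g) from the touchdown
    velocity v_td. No filtering is applied in this sketch."""
    acc = np.asarray(grf) / mass - 9.81
    vel = v_td + np.cumsum(acc) * dt
    disp = np.cumsum(vel) * dt
    return np.max(grf) / abs(np.min(disp))

# Synthesize one contact phase from an ideal spring-mass hopper
# (k = 20 kN/m, m = 70 kg, touchdown velocity 1.2 m/s downward)
m, k, dt = 70.0, 20_000.0, 1e-4
y, v = 0.0, -1.2
grf = []
while True:
    f = max(k * -y, 0.0)        # spring force while leg is compressed
    grf.append(f)
    a = f / m - 9.81
    v += a * dt
    y += v * dt
    if y >= 0 and v > 0:        # take-off
        break
k_est = leg_stiffness(np.array(grf), m, dt, -1.2)
```

Because the estimator integrates the same dynamics that generated the force trace, it recovers the simulated stiffness closely; with real force-plate data, noise and filter artifacts intervene, which is the paper's subject.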
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinrichs, Jan B., E-mail: hinrichs.jan@mh-hannover.de; Marquardt, Steffen, E-mail: marquardt.steffen@mh-hannover.de; Falck, Christian von, E-mail: falck.christian.von@mh-hannover.de
Purpose: To assess the feasibility and diagnostic performance of contrast-enhanced, C-arm computed tomography (CACT) of the pulmonary arteries compared to digital subtraction angiography (DSA) in patients suffering from chronic thromboembolic pulmonary hypertension (CTEPH). Materials: Fifty-two patients with CTEPH underwent ECG-gated DSA and contrast-enhanced CACT. Two readers (R1, R2) independently evaluated pulmonary artery segments and their sub-segmental branching using DSA and CACT for optimal image quality. Afterwards, the diagnostic findings, i.e., intraluminal filling defects, stenosis, and occlusion, were compared. Inter-modality and inter-observer agreement was calculated, and subsequently consensus reading was done and correlated to a reference standard representing the overall consensus of both modalities. Fisher’s exact test and Cohen’s Kappa were applied. Results: A total of 1352 pulmonary segments were evaluated, of which 1255 (92.8 %) on DSA and 1256 (92.9 %) on CACT were rated to be fully diagnostic. The main causes of the non-diagnostic image quality were motion artifacts on CACT (R1:37, R2:78) and insufficient contrast enhancement on DSA (R1:59, R2:38). Inter-observer agreement was good for DSA (κ = 0.74) and CACT (κ = 0.75), while inter-modality agreement was moderate (R1: κ = 0.46, R2: κ = 0.47). Compared to the reference standard, the inter-modality agreement for CACT was excellent (κ = 0.96), whereas it was inferior for DSA (κ = 0.61) due to the higher number of abnormal consensus findings read as normal on DSA. Conclusion: CACT of the pulmonary arteries is feasible and provides additional information to DSA. CACT has the potential to improve the diagnostic work-up of patients with CTEPH and may be particularly useful prior to surgical or interventional treatment.
Facilitation and refractoriness of the electrically evoked compound action potential.
Hey, Matthias; Müller-Deile, Joachim; Hessel, Horst; Killian, Matthijs
2017-11-01
In this study we aim to resolve the contributions of facilitation and refractoriness at very short pulse intervals. Measurements of the refractory properties of the electrically evoked compound action potential (ECAP) of the auditory nerve in cochlear implant (CI) users at inter-pulse intervals below 300 μs are influenced by facilitation and recovery effects. ECAPs were recorded using masker pulses with a wide range of current levels relative to the probe pulse levels, for three suprathreshold probe levels and pulse intervals from 13 to 200 μs. Evoked potentials were measured for 21 CI patients by using the masked response extraction artifact cancellation procedure. During analysis of the measurements the stimulation current was not used as an absolute value, but in relation to the patient's individual ECAP threshold. This enabled a more general approach to describe facilitation as a probe-level-independent effect. Maximum facilitation was found for all tested inter-pulse intervals at masker levels near the patient's individual ECAP threshold, independent of probe level. For short inter-pulse intervals an increased N1P1 amplitude was measured for subthreshold masker levels down to 120 CL below the patient's individual ECAP threshold, in contrast to the recreated state. ECAPs recorded with inter-pulse intervals up to 200 μs are influenced by facilitation and recovery. Facilitation effects are most pronounced for masker levels at or below ECAP threshold, while recovery effects increase with higher masker levels above ECAP threshold. The local maximum of the ECAP amplitude for masker levels around ECAP threshold can be explained by the mutual influence of maximum facilitation and minimal refractoriness. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Inter- and intramolecular epitope spreading in equine recurrent uveitis.
Deeg, Cornelia A; Amann, Barbara; Raith, Albert J; Kaspers, Bernd
2006-02-01
To test the hypothesis that inter- and intramolecular spreading to S-antigen (S-Ag) and interphotoreceptor retinoid binding protein (IRBP)-derived epitopes occurs in a spontaneous model of recurrent uveitis in the horse. The immune response of eight horses with equine recurrent uveitis (ERU) was compared with that of five control horses with healthy eyes. Lymphocytes derived from peripheral blood (PBLs) were tested every 8 weeks for their reactivity against S-Ag and various S-Ag and IRBP-derived peptides for 12 to 39 months (median, 22 months). During uveitic episodes, additional blood samples were analyzed. Intermolecular epitope spreading was detectable in all ERU cases during the study. Intramolecular spreading occurred in seven (of eight) horses with ERU. Fourteen relapses were analyzed during the observation period. Ten uveitic episodes were accompanied by neoreactivity to S-Ag or IRBP-derived peptides during the relapse. Shifts in the immune response profile were also detectable without any clinical signs of inflammation. Eye-healthy control horses were negative at all time points in the in vitro proliferation assays. Inter- and intramolecular spreading was detectable in a spontaneous model of recurrent uveitis. The shifts in immunoreactivity could account for the remitting-relapsing character of the disease.
Montaux-Lambert, Antoine; Mercère, Pascal; Primot, Jérôme
2015-11-02
An interferogram conditioning procedure, for subsequent phase retrieval by Fourier demodulation, is presented here as a fast iterative approach aiming at fulfilling the classical boundary conditions imposed by Fourier transform techniques. Interference fringe patterns with typical edge discontinuities were simulated in order to reveal the edge artifacts that classically appear in traditional Fourier analysis, and were consecutively used to demonstrate the correction efficiency of the proposed conditioning technique. Optimization of the algorithm parameters is also presented and discussed. Finally, the procedure was applied to grating-based interferometric measurements performed in the hard X-ray regime. The proposed algorithm enables nearly edge-artifact-free retrieval of the phase derivatives. A similar enhancement of the retrieved absorption and fringe visibility images is also achieved.
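The Fourier demodulation step that the conditioning procedure above prepares for can be sketched in 1-D (classical Takeda-style fringe analysis; no conditioning is applied in this bare sketch, which is exactly why real fringe patterns with edge discontinuities show the artifacts the paper addresses):

```python
import numpy as np

def fourier_demodulate(intensity, f0):
    """Isolate the +carrier sideband around spatial frequency f0, inverse
    transform, and remove the carrier phase to recover the modulation
    phase. Edge discontinuities leak across the spectrum and corrupt the
    result near the borders; this sketch assumes a periodic signal."""
    n = len(intensity)
    F = np.fft.fft(intensity)
    mask = np.zeros(n)
    mask[f0 - f0 // 2 : f0 + f0 // 2 + 1] = 1.0   # +sideband window only
    c = np.fft.ifft(F * mask)                      # complex analytic fringe
    x = np.arange(n)
    return np.angle(c * np.exp(-2j * np.pi * f0 * x / n))

# Ideal periodic fringes with carrier f0 = 32 and constant phase 0.5 rad
n, f0, phi_true = 256, 32, 0.5
x = np.arange(n)
intensity = 2 + np.cos(2 * np.pi * f0 * x / n + phi_true)
phi = fourier_demodulate(intensity, f0)
```

For this periodic input the recovered phase is exact; truncated fringes violate the FFT's implicit periodicity, producing the edge artifacts the conditioning procedure suppresses.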
Influence of orthodontic appliance-derived artifacts on 3-T MRI movies.
Ozawa, Erika; Honda, Ei-Ichi; Parakonthun, Kulthida Nunthayanon; Ohmori, Hiroko; Shimazaki, Kazuo; Kurabayashi, Tohru; Ono, Takashi
2018-02-19
Magnetic resonance imaging (MRI) has been used to study configurations of speech organs in the resting state. However, MRI is sensitive to metals, and numerous types of metallic appliances, most of which have a large magnetic susceptibility, are used in orthodontic treatment and may cause severe artifacts on MRI. We have developed techniques for obtaining MRI movies of the oral region, to evaluate articulatory changes, especially movement of the tongue, palate, and teeth, pre- and post-orthodontic/orthognathic treatment. We evaluated the influence of artifacts caused by orthodontic appliances, including fixed retainers, metal brackets, and wires, on measurements in 3-T MRI movies. Sixteen healthy young adults (nine males, seven females; average age, 27 years) with normal occlusion were recruited. Four types of customized maxillary and mandibular plates were prepared by incorporating one of the following into the plate: (a) nothing, (b) a fixed canine-to-canine retainer, (c) metal brackets for the anterior and molar teeth, or (d) clear brackets for the anterior teeth and metal brackets for molars. A 3-T MRI movie, in segmented cine mode, was generated for each plate condition while participants pronounced a vowel-consonant-vowel syllable (/asa/). The size of the artifact due to the metallic brackets was measured. The face size and orthodontically important anatomical structures, such as the velum, the hard palate, and the laryngeal ventricle, were also measured. A large artifact was observed over the entire oral region around orthodontic appliances, altering regional visibility. The velopharyngeal height was measured as significantly longer in the presence of metal brackets. The maximum artifact size due to a metallic bracket was > 8 cm. 
Our results show that even if it is possible to obtain measurements of palate length, nasion to sella, and nasion to basion in individuals wearing metal brackets for molars, the measurements might be affected by the presence of artifacts. Metallic orthodontic appliances can render anatomical structures invisible or distorted in MRI movies and thus introduce significant measurement error in speech evaluation; caution should be exercised when measurements are taken from distorted images. Based on this study, we conclude that orthodontists need not remove all metallic appliances before MRI examination, because the influence varies among appliances, but they should be aware that image distortion caused by metallic artifacts may produce significant measurement error in speech evaluation using MRI movies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, David F., E-mail: rcfilmconsulting@gmail.com; Chan, Maria F.
Purpose: The new radiochromic film, GAFChromic EBT-XD, contains the same active material, lithium-10,12-pentacosadiynoate, as GAFChromic EBT3, but the crystalline form is different. This work investigates the effect of this change on the well-known lateral response artifact when EBT-XD film is digitized on a flatbed scanner. Methods: The dose response of a single production lot of EBT-XD was characterized by scanning an unexposed film plus a set of films exposed to doses between 2.5 and 50 Gy using 6 MV photons. To characterize the lateral response artifact, the authors used the unexposed film plus a subset of samples exposed to doses between 20 and 50 Gy. Digital images of these films were acquired at seven discrete lateral locations perpendicular to the scan direction on three Epson 10000XL scanners. Using measurements at the discrete lateral positions, the scanner responses were determined as a function of the lateral position of the film. From the data for each scanner, a set of coefficients were derived whereby measured response values could be corrected to remove the effects of the lateral response artifact. The EBT-XD data were analyzed as in their previous work and compared to results reported for EBT3 in that paper. Results: For films scanned in the same orientation and having equal responses, the authors found that the lateral response artifact for EBT-XD and EBT3 films was remarkably similar. For both films, the artifact increases with increased net response. However, as EBT-XD is less sensitive than EBT3, a greater exposure dose is required to reach the same net response. On this basis, the lower sensitivity of EBT-XD relative to EBT3 results in less net response change for equal exposure and a reduction in the impact of the lateral response artifact.
Conclusions: The shape of the crystalline active component in EBT-XD and EBT3 does not affect the fundamental existence of the lateral response artifact when the films are digitized on flatbed scanners. Owing to its lower sensitivity, EBT-XD film requires a higher dose to reach the same response as EBT3, resulting in a lesser impact of the lateral response artifact. For doses >10 Gy, the slopes of the EBT-XD red and green channel dose response curves are greater than the corresponding ones for EBT3. For these two reasons, the authors prefer EBT-XD for doses exceeding about 10 Gy.
Zhang, Lingli; Zeng, Li; Guo, Yumeng
2018-01-01
Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem; image quality then usually suffers from slope artifacts. The objective of this study is to first investigate the distorted domains of the reconstructed images which encounter the slope artifacts and then present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image, aiming to compensate the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method comprises the following four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified nonlocal means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of the wavelet coefficients of the transformed image by using iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior at suppressing the slope artifacts while preserving the edges of features, as compared to commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize the distorted edges in the reconstructed images. Quantitative assessments also showed that the new method obtained the highest image quality compared to the existing algorithms.
This study demonstrated that the presented l0W-PNLM yielded higher image quality due to a number of unique characteristics: (1) it utilizes the structural similarity between the reconstructed image and the prior image to correct the edges distorted by slope artifacts; (2) it adopts wavelet tight frames to obtain the first and higher derivatives in several directions and levels; and (3) it takes advantage of l0 regularization to promote the sparsity of wavelet coefficients, which is effective for the inhibition of the slope artifacts. Therefore, the new method can address the limited-angle CT reconstruction problem effectively and has practical significance.
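The l0 sparsity step (iterative hard thresholding of wavelet coefficients) can be illustrated with a one-level 1-D Haar decomposition; this is our simplification of the method's multi-level 2-D tight framelets, not the authors' code.

```python
import numpy as np

def haar_1d(signal):
    """One-level 1-D Haar analysis: orthonormal average and detail bands."""
    s = np.asarray(signal, dtype=float)
    avg = (s[0::2] + s[1::2]) / np.sqrt(2)
    dif = (s[0::2] - s[1::2]) / np.sqrt(2)
    return avg, dif

def hard_threshold(coeffs, lam):
    """l0 proximal operator: zero any coefficient with |c| <= sqrt(2*lam)."""
    c = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(c) > np.sqrt(2.0 * lam), c, 0.0)

# Piecewise-constant "edge" signal plus small artifact-like oscillation:
# the detail band is sparse, so hard thresholding removes the small
# coefficients while preserving the large one that encodes the edge.
sig = np.array([1., 1., 1., 5., 5., 5., 5., 5.]) \
      + 0.01 * np.array([1, -1, 1, -1, 1, -1, 1, -1.])
avg, dif = haar_1d(sig)
dif_clean = hard_threshold(dif, lam=0.1)
```

This keep-or-kill behavior is what distinguishes l0 (hard) thresholding from l1 (soft) thresholding, which would also shrink the edge coefficient.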
Applied photo interpretation for airbrush cartography
NASA Technical Reports Server (NTRS)
Inge, J. L.; Bridges, P. M.
1976-01-01
New techniques of cartographic portrayal have been developed for the compilation of maps of lunar and planetary surfaces. Conventional photo interpretation methods utilizing size, shape, shadow, tone, pattern, and texture are applied to computer-processed satellite television images. The variety of the image data allows the illustrator to interpret image details by inter-comparison and intra-comparison of photographs. Comparative judgements are affected by illumination, resolution, variations in surface coloration, and transmission or processing artifacts. The validity of the interpretation process is tested by making a representational drawing by an airbrush portrayal technique. Production controls ensure the consistency of a map series. Photo interpretive cartographic portrayal skills are used to prepare two kinds of map series and are adaptable to map products of different kinds and purposes.
Image-based red cell counting for wild animals blood.
Mauricio, Claudio R M; Schneider, Fabio K; Dos Santos, Leonilda Correia
2010-01-01
An image-based red blood cell (RBC) automatic counting system is presented for wild animal blood analysis. Images with 2048×1536-pixel resolution, acquired on an optical microscope using Neubauer chambers, are used to evaluate RBC counting for three animal species (Leopardus pardalis, Cebus apella and Nasua nasua); the error found using the proposed method is similar to that obtained with the inter-observer visual counting method, i.e., around 10%. Smaller errors (e.g., 3%) can be obtained in regions with fewer grid artifacts. These promising results allow the use of the proposed method either as a complete automatic counting tool in laboratories for wild animal blood analysis or as the first counting stage in a semi-automatic counting tool.
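A first-pass counting stage of this kind can be sketched as connected-component labeling of a thresholded image, with a size filter to drop grid debris; the following is a pure-Python/numpy illustration, not the authors' implementation:

```python
import numpy as np
from collections import deque

def count_blobs(mask, min_size=3):
    """Count connected foreground regions (4-connectivity) of at least
    min_size pixels; smaller regions are dropped as likely grid artifacts."""
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                # Breadth-first flood fill to measure this component
                size, q = 0, deque([(i, j)])
                visited[i, j] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            q.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count

# Toy binary mask: two 4-pixel cells and one 1-pixel speck (filtered out)
mask = np.zeros((8, 8), dtype=bool)
mask[1:3, 1:3] = True
mask[5:7, 4:6] = True
mask[0, 7] = True
print(count_blobs(mask))
```

A real pipeline would precede this with illumination correction and thresholding, and follow it with splitting of touching cells.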
The perturbed Sparre Andersen model with a threshold dividend strategy
NASA Astrophysics Data System (ADS)
Gao, Heli; Yin, Chuancun
2008-10-01
In this paper, we consider a Sparre Andersen model perturbed by diffusion with generalized Erlang(n)-distributed inter-claim times and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the mth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case where the inter-claim times are Erlang(2)-distributed and the claim size distribution is exponential is considered in some detail.
Tan, Zaiyou; Luo, Lin; Zhu, Erjia; Yan, Ruisi; Lin, Zhuohui
2010-01-01
The title compound, C18H23NO3, the adamantane derivative of norcantharidin, which is itself derived from cantharidin, crystallized with three independent molecules in the asymmetric unit. In the crystal, molecules are linked by intermolecular C—H⋯O interactions, leading to the formation of a supramolecular two-dimensional network. PMID:21579455
NASA Astrophysics Data System (ADS)
Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros
SPARQL is today the standard access language for Semantic Web data. In recent years, XML databases have also acquired industrial importance due to the widespread applicability of XML in the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries into semantically equivalent XQuery queries, which are used to access the XML databases. We present the algorithms and the implementation of the SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.
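As a toy illustration of the mapping idea (not the actual SPARQL2XQuery algorithm), a single SPARQL triple pattern can be rewritten into an XQuery FLWOR expression once a predicate-to-XPath mapping is given; the mapping table, document name, and element names below are hypothetical:

```python
def triple_to_xquery(subject_var, predicate, obj_var, mapping):
    """Translate one SPARQL triple pattern (?s predicate ?o) into an XQuery
    FLWOR fragment, given a hand-specified predicate-to-XPath mapping."""
    path = mapping[predicate]
    return (f"for ${subject_var} in doc('data.xml')//{path['class']}\n"
            f"let ${obj_var} := ${subject_var}/{path['property']}/text()\n"
            f"return <result>{{${obj_var}}}</result>")

# Hypothetical mapping: ontology property ex:title -> XML path book/title
mapping = {"ex:title": {"class": "book", "property": "title"}}
print(triple_to_xquery("s", "ex:title", "t", mapping))
```

The real framework handles full basic graph patterns, filters, and schema-level mappings; this fragment only shows the variable-binding correspondence.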
Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James
1997-01-01
Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338
Semantic False Memories in the Form of Derived Relational Intrusions Following Training
ERIC Educational Resources Information Center
Guinther, Paul M.; Dougher, Michael J.
2010-01-01
Contemporary behavior analytic research is making headway in characterizing memory phenomena that typically have been characterized by cognitive models, and the current study extends this development by producing "false memories" in the form of functional equivalence responding. A match-to-sample training procedure was administered in order to…
Keep Listening: Grammatical Context Reduces but Does Not Eliminate Activation of Unexpected Words
ERIC Educational Resources Information Center
Strand, Julia F.; Brown, Violet A.; Brown, Hunter E.; Berg, Jeffrey J.
2018-01-01
To understand spoken language, listeners combine acoustic-phonetic input with expectations derived from context (Dahan & Magnuson, 2006). Eye-tracking studies on semantic context have demonstrated that the activation levels of competing lexical candidates depend on the relative strengths of the bottom-up input and top-down expectations (cf.…
Inferring Metadata for a Semantic Web Peer-to-Peer Environment
ERIC Educational Resources Information Center
Brase, Jan; Painter, Mark
2004-01-01
Learning Objects Metadata (LOM) aims at describing educational resources in order to allow better reusability and retrieval. In this article we show how additional inference rules allow us to derive additional metadata from existing ones. Additionally, using these rules as integrity constraints helps us to define the constraints on LOM elements,…
ERIC Educational Resources Information Center
Amsel, Ben D.
2011-01-01
Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed…
Kumar, Anand; Ciccarese, Paolo; Quaglini, Silvana; Stefanelli, Mario; Caffi, Ezio; Boiocchi, Lorenzo
2003-01-01
Medical knowledge in clinical practice guideline (GL) texts is the source of task-based computer-interpretable clinical guideline models (CIGMs). We have used Unified Medical Language System (UMLS) semantic types (STs) to determine the percentage of GL text that belongs to a particular ST. We also use the UMLS semantic network, together with the CIGM-specific ontology, to derive the semantic meaning behind the GL text. To achieve this objective, we took nine GL texts from the National Guideline Clearinghouse (NGC) and marked up the text dealing with a particular ST. The STs considered were restricted according to the requirements of a task-based CIGM. We used DARPA Agent Markup Language and Ontology Inference Layer (DAML + OIL) to create the UMLS and CIGM-specific semantic network. For the latter, as a bench test, we used the 1999 WHO-International Society of Hypertension Guidelines for the Management of Hypertension. We took into consideration the UMLS STs closest to the clinical tasks. The percentages of GL text dealing with the ST "Health Care Activity" and its subtypes "Laboratory Procedure", "Diagnostic Procedure" and "Therapeutic or Preventive Procedure" were measured. The parts of the text belonging to other STs or comments were separated. Terms belonging to other STs were mapped to the STs under "HCA" for representation in DAML + OIL. As a result, we found that the three STs under "HCA" were the predominant STs present in the GL text. Where terms of related STs existed, they were mapped into one of the three STs. The DAML + OIL representation was able to describe the hierarchy in task-based CIGMs. To conclude, the three STs can be used to represent the semantic network of task-based CIGMs. We also identified some mapping operators that could be used for mapping other STs into these.
Tao, Cui; Jiang, Guoqian; Oniki, Thomas A; Freimuth, Robert R; Zhu, Qian; Sharma, Deepak; Pathak, Jyotishman; Huff, Stanley M; Chute, Christopher G
2013-05-01
The clinical element model (CEM) is an information model designed for representing clinical information in electronic health records (EHR) systems across organizations. The current representation of CEMs does not support formal semantic definitions and therefore it is not possible to perform reasoning and consistency checking on derived models. This paper introduces our efforts to represent the CEM specification using the Web Ontology Language (OWL). The CEM-OWL representation connects the CEM content with the Semantic Web environment, which provides authoring, reasoning, and querying tools. This work may also facilitate the harmonization of the CEMs with domain knowledge represented in terminology models as well as other clinical information models such as the openEHR archetype model. We have created the CEM-OWL meta ontology based on the CEM specification. A convertor has been implemented in Java to automatically translate detailed CEMs from XML to OWL. A panel evaluation has been conducted, and the results show that the OWL modeling can faithfully represent the CEM specification and represent patient data.
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.
2017-05-01
While virtual copies of the real world tend to be created faster than ever through point clouds and their derivatives, their effective use by professionals demands adapted tools that facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate to progressively aggregate expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible on any HTML5-compatible browser. It allows real-time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés, and emphasises ease of use as well as performance. Our reasoning engine is constructed over a semantically rich point cloud data structure, where metadata has been injected a priori. We developed a tool that directly allows semantic extraction and visualisation of pertinent information for the end users. It leads to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.
Functional dissociations in top-down control dependent neural repetition priming.
Klaver, Peter; Schnaidt, Malte; Fell, Jürgen; Ruhlmann, Jürgen; Elger, Christian E; Fernández, Guillén
2007-02-15
Little is known about the neural mechanisms underlying top-down control of repetition priming. Here, we use functional brain imaging to investigate these mechanisms. Study and repetition tasks used a natural/man-made forced-choice task. In the study phase, subjects were required to respond to either pictures or words that were presented superimposed on each other. In the repetition phase, only words were presented; these were new, previously attended or ignored, or picture names derived from previously attended or ignored pictures. Relative to new words, we found repetition priming for previously attended words. Previously ignored words showed a reduced priming effect, and there was no significant priming for pictures repeated as picture names. Brain imaging data showed that neural priming of words in the left prefrontal cortex (LIPFC) and left fusiform gyrus (LOTC) was affected by attention, semantic compatibility of superimposed stimuli during study, and cross-modal priming. Neural priming was reduced for words in the LIPFC, and for words and pictures in the LOTC, if stimuli were previously ignored. Previously ignored words that were semantically incompatible with a superimposed picture during study induced increased neural priming compared with semantically compatible ignored words (LIPFC) and decreased neural priming of previously attended pictures (LOTC). In summary, top-down control induces dissociable effects on neural priming through attention, cross-modal priming, and semantic compatibility in a way that was not evident from the behavioral results.
NASA Astrophysics Data System (ADS)
Sun, Yu; Hu, Sijung; Azorin-Peris, Vicente; Greenwald, Stephen; Chambers, Jonathon; Zhu, Yisheng
2011-07-01
With the advance of computer and photonics technology, imaging photoplethysmography [(PPG), iPPG] can provide comfortable and comprehensive assessment over a wide range of anatomical locations. However, motion artifact is a major drawback in current iPPG systems, particularly in the context of clinical assessment. To overcome this issue, a new artifact-reduction method consisting of planar motion compensation and blind source separation is introduced in this study. The performance of the iPPG system was evaluated through the measurement of cardiac pulse in the hand from 12 subjects before and after 5 min of cycling exercise. Also, a 12-min continuous recording protocol consisting of repeated exercises was taken from a single volunteer. The physiological parameters (i.e., heart rate, respiration rate), derived from the images captured by the iPPG system, exhibit functional characteristics comparable to conventional contact PPG sensors. Continuous recordings from the iPPG system reveal that heart and respiration rates can be successfully tracked with the artifact reduction method even in high-intensity physical exercise situations. The outcome from this study thereby leads to a new avenue for noncontact sensing of vital signs and remote physiological assessment, with clear applications in triage and sports training.
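Once motion artifacts are suppressed, heart rate is typically derived from the dominant spectral peak of the pulsatile signal; the following is a minimal numpy sketch under that assumption, not the authors' pipeline, using a synthetic waveform:

```python
import numpy as np

def heart_rate_bpm(signal, fs):
    """Estimate heart rate as the dominant spectral peak in the 0.7-3 Hz
    band (42-180 bpm) of a PPG-like signal sampled at fs Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    band = (freqs >= 0.7) & (freqs <= 3.0)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak

# Synthetic pulse waveform at 1.2 Hz (72 bpm) with a slow respiratory drift;
# the band-limited search ignores the 0.25 Hz respiratory component.
fs = 50.0
t = np.arange(0, 20, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.25 * t)
print(heart_rate_bpm(ppg, fs))  # 72.0
```

The respiration rate described in the study would be read off the same spectrum in the lower (≈0.1-0.5 Hz) band.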
Wan, Shibiao; Mak, Man-Wai; Kung, Sun-Yuan
2014-01-01
Protein subcellular localization prediction, an essential step in elucidating the in vivo functions of proteins and identifying drug targets, has been extensively studied in recent decades. Instead of only determining the subcellular localization of single-label proteins, recent studies have focused on predicting both single- and multi-location proteins. Computational methods based on Gene Ontology (GO) have been demonstrated to be superior to methods based on other features. However, existing GO-based methods focus on the occurrences of GO terms and disregard their relationships. This paper proposes a multi-label subcellular-localization predictor, namely HybridGO-Loc, that leverages not only the GO term occurrences but also the inter-term relationships. This is achieved by hybridizing the GO frequencies of occurrences and the semantic similarity between GO terms. Given a protein, a set of GO terms is retrieved by searching against the gene ontology database, using the accession numbers of homologous proteins obtained via BLAST search as the keys. The frequency of GO occurrences and the semantic similarity (SS) between GO terms are used to formulate frequency vectors and semantic similarity vectors, respectively, which are subsequently hybridized to construct fusion vectors. An adaptive-decision-based multi-label support vector machine (SVM) classifier is proposed to classify the fusion vectors. Experimental results based on recent benchmark datasets and a new dataset containing novel proteins show that the proposed hybrid-feature predictor significantly outperforms predictors based on individual GO features as well as other state-of-the-art predictors. For readers' convenience, the HybridGO-Loc server, which is for predicting virus or plant proteins, is available online at http://bioinfo.eie.polyu.edu.hk/HybridGoServer/.
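The hybridization step can be sketched as concatenating the two per-term feature vectors after normalization; the toy values below are illustrative only, not taken from the paper:

```python
import numpy as np

def fusion_vector(freq_vec, ss_vec):
    """Concatenate a GO term-frequency vector with a semantic-similarity
    vector into one fusion feature vector, L2-normalizing each part so
    neither feature type dominates the SVM kernel."""
    parts = []
    for v in (np.asarray(freq_vec, float), np.asarray(ss_vec, float)):
        n = np.linalg.norm(v)
        parts.append(v / n if n > 0 else v)
    return np.concatenate(parts)

# Toy example over 4 GO terms: occurrence counts for one protein, and a
# (hypothetical) mean semantic similarity of each term to the protein's GO set
freq = [2, 0, 1, 0]
ss = [0.9, 0.1, 0.7, 0.3]
fv = fusion_vector(freq, ss)
```

In the full predictor, vectors like `fv` would be fed to the adaptive-decision multi-label SVM.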
Modelling and approaching pragmatic interoperability of distributed geoscience data
NASA Astrophysics Data System (ADS)
Ma, Xiaogang
2010-05-01
Interoperability of geodata, which is essential for sharing information and discovering insights within a cyberinfrastructure, is receiving increasing attention. A key requirement of interoperability in the context of geodata sharing is that data provided by local sources can be accessed, decoded, understood and appropriately used by external users. Various researchers have discussed that there are four levels in data interoperability issues: system, syntax, schematics and semantics, which respectively relate to the platform, encoding, structure and meaning of geodata. Ontology-driven approaches have been significantly studied addressing schematic and semantic interoperability issues of geodata in the last decade. There are different types, e.g. top-level ontologies, domain ontologies and application ontologies and display forms, e.g. glossaries, thesauri, conceptual schemas and logical theories. Many geodata providers are maintaining their identified local application ontologies in order to drive standardization in local databases. However, semantic heterogeneities often exist between these local ontologies, even though they are derived from equivalent disciplines. In contrast, common ontologies are being studied in different geoscience disciplines (e.g., NAMD, SWEET, etc.) as a standardization procedure to coordinate diverse local ontologies. Semantic mediation, e.g. mapping between local ontologies, or mapping local ontologies to common ontologies, has been studied as an effective way of achieving semantic interoperability between local ontologies thus reconciling semantic heterogeneities in multi-source geodata. Nevertheless, confusion still exists in the research field of semantic interoperability. One problem is caused by eliminating elements of local pragmatic contexts in semantic mediation. 
Compared with the context-independent nature of a common domain ontology, local application ontologies are closely related to elements (e.g., people, time, location, intention, procedure, consequence, etc.) of local pragmatic contexts and are thus context-dependent. Eliminating these elements will inevitably lead to information loss in semantic mediation between local ontologies. Correspondingly, the understanding and effect of exchanged data in a new context may differ from that in its original context. Another problem is the dilemma of how to find a balance between flexibility and standardization of local ontologies, because ontologies are not fixed but continuously evolving. It is commonly realized that we cannot use a unified ontology to replace all local ontologies, because they are context-dependent and need flexibility. However, without the coordination of standards, freely developed local ontologies and databases will require enormous mediation work between them. Finding a balance between standardization and flexibility for evolving ontologies, in a practical sense, requires negotiations (i.e. conversations, agreements and collaborations) between different local pragmatic contexts. The purpose of this work is to set up a computer-friendly model representing local pragmatic contexts (i.e. geodata sources), and to propose a practical semantic negotiation procedure for approaching pragmatic interoperability between local pragmatic contexts. Information agents, objective facts and subjective dimensions are reviewed as elements of a conceptual model for representing pragmatic contexts. The author uses them to outline a practical semantic negotiation procedure for approaching pragmatic interoperability of distributed geodata.
The proposed conceptual model and semantic negotiation procedure were encoded with Description Logic, and then applied to analyze and manipulate semantic negotiations between different local ontologies within the National Mineral Resources Assessment (NMRA) project of China, which involves multi-source and multi-subject geodata sharing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Meng, E-mail: mengwu@stanford.edu; Fahrig, Rebecca
2014-11-01
Purpose: The scanning beam digital x-ray system (SBDX) is an inverse-geometry fluoroscopic system with high dose efficiency and the ability to perform continuous real-time tomosynthesis in multiple planes. This system could be used for image guidance during lung nodule biopsy. However, the reconstructed images suffer from strong out-of-plane artifacts due to the small tomographic angle of the system. Methods: The authors propose an out-of-plane artifact subtraction tomosynthesis (OPAST) algorithm that utilizes a prior CT volume to augment the run-time image processing. A blur-and-add (BAA) analytical model, derived from the project-to-backproject physical model, permits the generation of tomosynthesis images that are a good approximation to the shift-and-add (SAA) reconstructed image. A computationally practical algorithm is proposed to simulate images and out-of-plane artifacts from patient-specific prior CT volumes using the BAA model. A 3D image registration algorithm to align the simulated and reconstructed images is described. The accuracy of the BAA analytical model and the OPAST algorithm was evaluated using three lung cancer patients' CT data. The OPAST and image registration algorithms were also tested with added nonrigid respiratory motions. Results: Image similarity measurements, including the correlation coefficient, mean squared error, and structural similarity index, indicated that the BAA model is very accurate in simulating the SAA images from the prior CT for the SBDX system. The shift-variant effect of the BAA model can be ignored when the shifts between SBDX images and CT volumes are within ±10 mm in the x and y directions. The nodule visibility and depth resolution are improved by subtracting simulated artifacts from the reconstructions. The image registration and OPAST are robust in the presence of added respiratory motions.
The dominant artifacts in the subtraction images are caused by mismatches between the real object and the prior CT volume. Conclusions: The proposed prior CT-augmented OPAST reconstruction algorithm improves lung nodule visibility and depth resolution for the SBDX system.
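A minimal numpy sketch of plain shift-and-add tomosynthesis illustrates the out-of-plane blur that the BAA model approximates and OPAST subtracts; the 1D geometry and shift values below are illustrative, not the SBDX geometry:

```python
import numpy as np

def shift_and_add(projections, shifts_per_view, plane):
    """Shift-and-add (SAA) tomosynthesis for a chosen depth plane: shift each
    projection by its view-dependent offset for that depth and average.
    Structures away from the plane land at inconsistent positions and smear
    into out-of-plane blur instead of reinforcing."""
    acc = np.zeros_like(projections[0], dtype=float)
    for proj, shift in zip(projections, shifts_per_view):
        acc += np.roll(proj, int(round(shift * plane)))
    return acc / len(projections)

# A point object at depth 2 projects to view-dependent lateral positions
N, depth_true = 16, 2
views = [-2, -1, 0, 1, 2]          # lateral shift per unit depth, per view
projections = []
for v in views:
    p = np.zeros(N)
    p[8 + v * depth_true] = 1.0
    projections.append(p)

in_plane = shift_and_add(projections, [-v for v in views], depth_true)
off_plane = shift_and_add(projections, [-v for v in views], 1)
```

Reconstructing at the true depth focuses the point (`in_plane`), while reconstructing one plane away spreads the same energy across several pixels (`off_plane`), which is the artifact the OPAST subtraction targets.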
Liebl, Hans; Heilmeier, Ursula; Lee, Sonia; Nardo, Lorenzo; Patsch, Janina; Schuppert, Christopher; Han, Misung; Rondak, Ina-Christine; Banerjee, Suchandrima; Koch, Kevin; Link, Thomas M.; Krug, Roland
2014-01-01
PURPOSE To assess lesion detection and artifact size reduction of a MAVRIC-SEMAC hybrid sequence (MAVRIC-SL) compared to standard sequences at 1.5T and 3T in porcine knee specimens with metal hardware. METHODS Artificial cartilage and bone lesions of defined size were created in the proximity of titanium and steel screws with 2.5 mm diameter in 12 porcine knee specimens and were imaged at 1.5T and 3T MRI with MAVRIC-SL PD and STIR, standard FSE T2 PD and STIR and fat-saturated T2 FSE sequences. Three radiologists blinded to the lesion locations assessed lesion detection rates on randomized images for each sequence using ROC. Artifact length and width were measured. RESULTS Metal artifact sizes were largest in the presence of steel screws at 3T (FSE T2 FS: 28.7cm2) and 1.5T (16.03cm2). MAVRIC-SL PD and STIR reduced artifact sizes at both 3T (1.43cm2; 2.46cm2) and 1.5T (1.16cm2; 1.59cm2) compared to FS T2 FSE sequences (27.57cm2; 13.20cm2). At 3T, ROC-derived AUC values using MAVRIC-SL sequences were significantly higher compared to standard sequences (MAVRIC-PD: 0.87, versus FSE-T2-FS: 0.73 (p=0.025); MAVRIC-STIR: 0.9 versus T2-STIR: 0.78 (p=0.001) and versus FSE-T2-FS: 0.73 (p=0.026)). Similar values were observed at 1.5T. Comparison of 3T and 1.5T showed no significant differences (MAVRIC-SL PD: p=0.382; MAVRIC-SL STIR: p=0.071). CONCLUSION MAVRIC-SL sequences provided superior lesion detection and reduced metal artifact size at both 1.5T and 3T compared to conventionally used FSE sequences. No significant disadvantage was found comparing MAVRIC-SL at 3T and 1.5T, though metal artifacts at 3T were larger. PMID:24912802
k-t SENSE-accelerated Myocardial Perfusion MR Imaging at 3.0 Tesla - comparison with 1.5 Tesla
Plein, Sven; Schwitter, Juerg; Suerder, Daniel; Greenwood, John P.; Boesiger, Peter; Kozerke, Sebastian
2008-01-01
Purpose To determine the feasibility and diagnostic accuracy of high spatial resolution myocardial perfusion MR at 3.0 Tesla using k-space and time domain undersampling with sensitivity encoding (k-t SENSE). Materials and Methods The study was reviewed and approved by the local ethics review board. k-t SENSE perfusion MR was performed at 1.5 Tesla and 3.0 Tesla (saturation recovery gradient echo pulse sequence, repetition time/echo time 3.0ms/1.0ms, flip angle 15°, 5x k-t SENSE acceleration, spatial resolution 1.3×1.3×10mm3). Fourteen volunteers were studied at rest and 37 patients during adenosine stress. In volunteers, comparison was also made with standard-resolution (2.5×2.5×10mm3) 2x SENSE perfusion MR at 3.0 Tesla. Image quality, artifact scores, signal-to-noise ratios (SNR) and contrast-enhancement ratios (CER) were derived. In patients, diagnostic accuracy of visual analysis to detect >50% diameter stenosis on quantitative coronary angiography was determined by receiver-operating-characteristic (ROC) analysis. Results In volunteers, image quality and artifact scores were similar for 3.0 Tesla and 1.5 Tesla, while SNR was higher (11.6 vs. 5.6) and CER lower (1.1 vs. 1.5, p=0.012) at 3.0 Tesla. Compared with standard-resolution perfusion MR, image quality was higher for k-t SENSE (3.6 vs. 3.1, p=0.04), endocardial dark rim artifacts were reduced (artifact thickness 1.6mm vs. 2.4mm, p<0.001) and CER similar. In patients, area under the ROC curve for detection of coronary stenosis was 0.89 and 0.80, p=0.21 for 3.0 Tesla and 1.5 Tesla, respectively. Conclusions k-t SENSE accelerated high-resolution perfusion MR at 3.0 Tesla is feasible with similar artifacts and diagnostic accuracy as at 1.5 Tesla. Compared with standard-resolution perfusion MR, image quality is improved and artifacts are reduced. PMID:18936311
Shao, Jiaxin; Rapacchi, Stanislas; Nguyen, Kim-Lien; Hu, Peng
2016-02-01
To develop an accurate and precise myocardial T1 mapping technique using an inversion recovery spoiled gradient echo readout at 3.0 Tesla (T). The modified Look-Locker inversion-recovery (MOLLI) sequence was modified to use fast low angle shot (FLASH) readout, incorporating a BLESSPC (Bloch Equation Simulation with Slice Profile Correction) T1 estimation algorithm, for accurate myocardial T1 mapping. The FLASH-MOLLI with BLESSPC fitting was compared with different approaches and sequences with regards to T1 estimation accuracy, precision and image artifact based on simulation, phantom studies, and in vivo studies of 10 healthy volunteers and three patients at 3.0 Tesla. The FLASH-MOLLI with BLESSPC fitting yields accurate T1 estimation (average error = -5.4 ± 15.1 ms, percentage error = -0.5% ± 1.2%) for T1 from 236-1852 ms and heart rate from 40-100 bpm in phantom studies. The FLASH-MOLLI sequence prevented off-resonance artifacts in all 10 healthy volunteers at 3.0T. In vivo, there was no significant difference between FLASH-MOLLI-derived myocardial T1 values and "ShMOLLI+IE" derived values (1458.9 ± 20.9 ms versus 1464.1 ± 6.8 ms, P = 0.50); However, the average precision by FLASH-MOLLI was significantly better than that generated by "ShMOLLI+IE" (1.84 ± 0.36% variance versus 3.57 ± 0.94%, P < 0.001). The FLASH-MOLLI with BLESSPC fitting yields accurate and precise T1 estimation, and eliminates banding artifacts associated with bSSFP at 3.0T. © 2015 Wiley Periodicals, Inc.
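The Look-Locker principle behind MOLLI-type fitting can be sketched as a three-parameter fit S(TI) = A - B·exp(-TI/T1*) followed by the standard correction T1 = T1*(B/A - 1); the grid-search fit below is a simplified illustration on synthetic data, not the BLESSPC algorithm (which additionally simulates the Bloch equations with slice-profile correction):

```python
import numpy as np

def fit_t1(ti, s, t1s_grid=np.arange(100.0, 3001.0, 5.0)):
    """Fit S(TI) = A - B*exp(-TI/T1*) by grid search over the apparent T1*,
    solving A and B by linear least squares at each candidate, then apply
    the Look-Locker correction T1 = T1*(B/A - 1)."""
    best_sse, best_params = np.inf, None
    for t1s in t1s_grid:
        e = np.exp(-ti / t1s)
        X = np.column_stack([np.ones_like(ti), -e])   # S = A*1 + B*(-e)
        coef, *_ = np.linalg.lstsq(X, s, rcond=None)
        r = s - X @ coef
        sse = float(r @ r)
        if sse < best_sse:
            best_sse, best_params = sse, (coef[0], coef[1], t1s)
    A, B, t1s = best_params
    return t1s * (B / A - 1.0)

# Synthetic signed-magnitude inversion-recovery samples (times in ms),
# with true T1* = 800 ms, A = 1, B = 1.9, so corrected T1 = 720 ms
ti = np.array([100.0, 180.0, 260.0, 1100.0, 1180.0, 2100.0, 2180.0, 3000.0])
s = 1.0 - 1.9 * np.exp(-ti / 800.0)
t1 = fit_t1(ti, s)
```

Real myocardial data would also require phase-sensitive or polarity-restored magnitudes before such a fit.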
A Short Note on the Relationship between Pass Rate and Multiple Attempts
ERIC Educational Resources Information Center
Cheng, Ying; Liu, Cheng
2016-01-01
For a certification, licensure, or placement exam, allowing examinees to take multiple attempts at the test could effectively change the pass rate. Change in the pass rate can occur without any change in the underlying latent trait, and can be an artifact of multiple attempts and imperfect reliability of the test. By deriving formulae to compute…
Zhang, Shu-Bo; Lai, Jian-Huang
2016-07-15
Measuring the similarity between pairs of biological entities is important in molecular biology. The introduction of Gene Ontology (GO) provides a promising approach to quantifying the semantic similarity between two genes or gene products. This kind of similarity measure is closely associated with the GO terms annotated to the biological entities under consideration and with the structure of the GO graph. However, previous work in this field has mainly focused on the upper part of the graph and seldom considered the lower part. In this study, we aim to exploit information from the lower part of the GO graph for better semantic similarity. We propose a framework to quantify the similarity measure beneath a term pair, which takes into account both the information two ancestral terms share and the probability that they co-occur with their common descendants. The effectiveness of our approach was evaluated against seven typical measures on the public platform CESSM, protein-protein interaction and gene expression datasets. Experimental results consistently show that the similarity derived from the lower part contributes to a better semantic similarity measure. The promising features of our approach are the following: (1) it provides a mirror model to characterize the information two ancestral terms share with respect to their common descendant; (2) it quantifies the probability that two terms co-occur with their common descendant in an efficient way; and (3) our framework can effectively capture the similarity beneath two terms, which can serve as an add-on to improve traditional semantic similarity measures between GO terms. The algorithm was implemented in Matlab and is freely available from http://ejl.org.cn/bio/GOBeneath/. Copyright © 2016 Elsevier B.V. All rights reserved.
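For context, the classic upper-graph approach that the paper contrasts with can be sketched as Resnik-style similarity: the information content of the most informative common ancestor. The DAG and annotation probabilities below are toy values, and this is deliberately not the authors' lower-graph measure:

```python
import math

# Toy GO-like DAG, child -> parents, with annotation probabilities p(t)
# (fraction of gene products annotated at or below each term)
parents = {"root": [], "A": ["root"], "B": ["root"], "C": ["A", "B"], "D": ["A"]}
p = {"root": 1.0, "A": 0.6, "B": 0.5, "C": 0.2, "D": 0.3}

def ancestors(term):
    """All ancestors of a term in the DAG, including the term itself."""
    out, stack = {term}, [term]
    while stack:
        for par in parents[stack.pop()]:
            if par not in out:
                out.add(par)
                stack.append(par)
    return out

def resnik_sim(t1, t2):
    """Upper-graph measure: information content IC(t) = -log2 p(t) of the
    most informative common ancestor of the two terms."""
    common = ancestors(t1) & ancestors(t2)
    return max(-math.log2(p[t]) for t in common)

sim = resnik_sim("C", "D")  # MICA of C and D is A, so sim = -log2(0.6)
```

The paper's contribution is the mirror image of this: quantifying what two ancestral terms share with respect to their common *descendants*, and combining it with upper-graph measures like the one above.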
Sheldon, Signy; Moscovitch, Morris
2012-06-01
Recent investigations have shown that the medial temporal lobe (MTL), a region thought to be exclusive to episodic memory, can also influence performance on tests of semantic memory. The present study examined further the nature of MTL contributions to semantic memory tasks by tracking MTL activation as participants performed category fluency, a traditional test of semantic retrieval. For categories that were inherently autobiographical (e.g. names of friends), the MTLs were activated throughout the time period in which items were generated, consistent with the MTL's role in retrieving autobiographical memories. For categories that could not benefit from autobiographical or spatial/context information (e.g. governmental offices), the MTL was not implicated at any time point. For categories for which both prototypical and episodically-related information exists (e.g. kitchen utensils), there was more robust MTL activity for the open-ended, late generation periods compared with the more well-defined, early item generation time periods. We interpret these results as suggesting that early in the generation phase, responses are based on well-rehearsed prototypical knowledge, whereas later performance relies more on open-ended strategies, such as deriving exemplars from personally relevant contextual information (e.g. imagining one's own kitchen). These findings and this interpretation were consistent with the results of an initial, separate behavioral study (Expt 1), which used the distinctiveness of responses as a measure of open-endedness across the generation phase: response distinctiveness corresponded to the predicted open-endedness of the various tasks at early and late phases. Overall, this is consistent with the view that as generation of semantic information becomes open-ended, it recruits processes from other domains, such as episodic memory, to support performance. Copyright © 2011 Wiley Periodicals, Inc.
Plancher, Gaën; Guyard, Anne; Nicolas, Serge; Piolino, Pascale
2009-10-01
It is well known that the occurrence of false memories increases with aging, but the results remain inconsistent concerning Alzheimer's disease (AD). Moreover, the mechanisms underlying the production of false memories are still unclear. Using an experimental episodic memory test with material based on the names of famous people in a procedure derived from the DRM paradigm [Roediger, H. L., III, & McDermott, K. B. (1995). Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory & Cognition, 21, 803-814], we examined correct and false recall and recognition in 30 young adults, 40 healthy older adults, and 30 patients with AD. Moreover, we evaluated the relationships between false memory performance, correct episodic memory performance, and a set of neuropsychological assessments evaluating semantic memory and executive functions. The results clearly indicated that correct recall and recognition performance decreased with the subjects' age, but it decreased even more with AD. In addition, semantically related false recalls and false recognitions increased with age but not with dementia. By contrast, non-semantically related false recalls and false recognitions increased with AD. Finally, the regression analyses showed that executive functions mediated related false memories and episodic memory mediated related and unrelated false memories in aging. Moreover, executive functions predicted related and unrelated false memories in AD, and episodic and semantic memory predicted semantically related and unrelated false memories in AD. In conclusion, the results obtained are consistent with the current constructive models of memory, suggesting that false memory creation depends on different cognitive functions and, consequently, that the impairments of these functions influence the production of false memories.
Discovering discovery patterns with Predication-based Semantic Indexing.
Cohen, Trevor; Widdows, Dominic; Schvaneveldt, Roger W; Davies, Peter; Rindflesch, Thomas C
2012-12-01
In this paper we use methods of hyperdimensional computing to mediate the identification of therapeutically useful connections for the purpose of literature-based discovery. Our approach, named Predication-based Semantic Indexing (PSI), is used to empirically identify sequences of relationships known as "discovery patterns", such as "drug x INHIBITS substance y, substance y CAUSES disease z", that link pharmaceutical substances to diseases they are known to treat. These sequences are derived from semantic predications extracted from the biomedical literature by the SemRep system, and subsequently used to direct the search for known treatments for a held-out set of diseases. Rapid and efficient inference is accomplished through the application of geometric operators in PSI space, allowing for both the derivation of discovery patterns from a large set of known TREATS relationships, and the application of these discovered patterns to constrain the search for therapeutic relationships at scale. Our results include the rediscovery of discovery patterns that were constructed manually by other authors in previous research, as well as the discovery of a set of previously unrecognized patterns. The application of these patterns to direct search through PSI space results in better recovery of therapeutic relationships than is accomplished with models based on distributional statistics alone. These results demonstrate the utility of efficient approximate inference in geometric space as a means to identify therapeutic relationships, suggesting a role for these methods in drug repurposing efforts. In addition, the results provide strong support for the utility of the discovery pattern approach pioneered by Hristovski and his colleagues. Copyright © 2012 Elsevier Inc. All rights reserved.
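The geometric inference underlying this family of vector-symbolic methods can be illustrated with a simplified sketch: concepts and relations are high-dimensional random bipolar vectors, binding is elementwise multiplication (which is its own inverse), and unbinding a relation from a semantic vector recovers its argument. The drug and target names are hypothetical, and real PSI implementations use different vector types and operators.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # high dimension keeps random vectors near-orthogonal

def rand_vec():
    return rng.choice([-1.0, 1.0], size=D)

# Hypothetical concept and relation vectors
vocab = {name: rand_vec() for name in
         ["aspirin", "cox2", "inflammation", "INHIBITS", "CAUSES"]}

# Semantic vector for "aspirin": binds the relation to its object
# (elementwise multiplication of bipolar vectors is self-inverse)
s_aspirin = vocab["INHIBITS"] * vocab["cox2"]

# Inference: unbind INHIBITS to recover what aspirin inhibits,
# then find the nearest concept by normalized dot product
probe = s_aspirin * vocab["INHIBITS"]
sims = {n: float(probe @ v) / D for n, v in vocab.items()
        if n not in ("INHIBITS", "CAUSES")}
best = max(sims, key=sims.get)
print(best)  # cox2
```

Because binding is exactly self-inverse here, the unbound probe equals the bound concept vector; with superposed predications the match becomes approximate but remains recoverable in high dimensions.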
NASA Astrophysics Data System (ADS)
Wu, J.; Yao, W.; Zhang, J.; Li, Y.
2018-04-01
Labeling 3D point cloud data with traditional supervised learning methods requires considerable labelled samples, the collection of which is costly and time-consuming. This work adopts the concept of domain adaptation to transfer existing trained random forest classifiers (based on a source domain) to new data scenes (target domain), aiming to reduce the dependence of accurate 3D semantic labeling of point clouds on training samples from the new data scene. Firstly, two random forest classifiers were trained with existing samples previously collected for other data. They differed in the decision tree construction algorithm used: C4.5 with the information gain ratio and CART with the Gini index. Secondly, four random forest classifiers adapted to the target domain were derived by transferring each tree in the source random forest models with two types of operations: structure expansion and reduction (SER) and structure transfer (STRUT). Finally, points in the target domain were labelled by fusing the four newly derived random forest classifiers using a weights-of-evidence-based fusion model. To validate our method, experimental analysis was conducted using three datasets: one used as the source domain data (Vaihingen data for 3D Semantic Labelling) and two used as the target domain data, from two cities in China (Jinmen city and Dunhuang city). Overall accuracies of 85.5 % and 83.3 % for 3D labelling were achieved for the Jinmen city and Dunhuang city data respectively, with only 1/3 of the newly labelled samples needed compared to the case without domain adaptation.
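The train-two-forests-and-fuse idea can be sketched with scikit-learn on synthetic data. This is a loose sketch only: the gain ratio is approximated by the entropy criterion, the SER/STRUT tree-transfer step is omitted, and a simple accuracy-weighted average of class probabilities stands in for the weights-of-evidence fusion model; all features and labels are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical source-domain point features (e.g., height, intensity, planarity)
X_src = rng.normal(size=(200, 3))
y_src = (X_src[:, 0] + 0.5 * X_src[:, 1] > 0).astype(int)

# Two source forests differing in split criterion, as in the paper:
# information gain ratio (approximated here by entropy) vs. Gini index
rf_c45 = RandomForestClassifier(criterion="entropy", random_state=0).fit(X_src, y_src)
rf_cart = RandomForestClassifier(criterion="gini", random_state=0).fit(X_src, y_src)

# A few labelled target-domain samples weight each classifier
X_tgt = rng.normal(size=(40, 3))
y_tgt = (X_tgt[:, 0] + 0.5 * X_tgt[:, 1] > 0).astype(int)
w = np.array([rf.score(X_tgt, y_tgt) for rf in (rf_c45, rf_cart)])
w /= w.sum()

# Fused label = argmax of the weighted average of class-probability outputs
proba = w[0] * rf_c45.predict_proba(X_tgt) + w[1] * rf_cart.predict_proba(X_tgt)
labels = proba.argmax(axis=1)
print(f"fused accuracy: {(labels == y_tgt).mean():.2f}")
```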
Why Are Experts Correlated? Decomposing Correlations between Judges
ERIC Educational Resources Information Center
Broomell, Stephen B.; Budescu, David V.
2009-01-01
We derive an analytic model of the inter-judge correlation as a function of five underlying parameters. Inter-cue correlation and the number of cues capture our assumptions about the environment, while differentiations between cues, the weights attached to the cues, and (un)reliability describe assumptions about the judges. We study the relative…
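A model of this kind can be checked by simulation: two judges form judgments as weighted sums of correlated cues plus independent error, and the simulated inter-judge correlation should match a closed-form expression. The sketch below uses a standard lens-model-style formula, r = w1ᵀC w2 / √((w1ᵀC w1 + e1²)(w2ᵀC w2 + e2²)), with invented parameter values; it is not necessarily the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, rho = 200000, 3, 0.4  # trials, cues, inter-cue correlation (assumed)

# Cues with a common pairwise correlation rho
cov = np.full((k, k), rho) + (1 - rho) * np.eye(k)
cues = rng.multivariate_normal(np.zeros(k), cov, size=n)

# Two judges: different cue weights plus (un)reliability as error SDs
w1, w2 = np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.4, 0.4])
e1, e2 = 0.5, 0.5
j1 = cues @ w1 + rng.normal(0, e1, n)
j2 = cues @ w2 + rng.normal(0, e2, n)

sim_r = np.corrcoef(j1, j2)[0, 1]

# Analytic counterpart for comparison
analytic_r = (w1 @ cov @ w2) / np.sqrt(
    (w1 @ cov @ w1 + e1**2) * (w2 @ cov @ w2 + e2**2))
print(round(sim_r, 2), round(analytic_r, 2))
```

With these values both quantities come out near 0.63, illustrating how substantial expert agreement can arise purely from shared, correlated cues rather than shared judgment policies.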
Quantitative approaches to information recovery from black holes
NASA Astrophysics Data System (ADS)
Balasubramanian, Vijay; Czech, Bartłomiej
2011-08-01
The evaporation of black holes into apparently thermal radiation poses a serious conundrum for theoretical physics: at face value, it appears that in the presence of a black hole, quantum evolution is non-unitary and destroys information. This information loss paradox has its seed in the presence of a horizon causally separating the interior and asymptotic regions in a black hole spacetime. A quantitative resolution of the paradox could take several forms: (a) a precise argument that the underlying quantum theory is unitary, and that information loss must be an artifact of approximations in the derivation of black hole evaporation, (b) an explicit construction showing how information can be recovered by the asymptotic observer, (c) a demonstration that the causal disconnection of the black hole interior from infinity is an artifact of the semiclassical approximation. This review summarizes progress on all these fronts.
NASA Astrophysics Data System (ADS)
Robinson, Wayne D.; Patt, Frederick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.
2009-08-01
One of the roles of the VIIRS Ocean Science Team (VOST) is to assess the performance of the instrument and scientific processing software that generates ocean color parameters such as normalized water-leaving radiances and chlorophyll. A VIIRS data simulator is being developed to aid in this work. The simulator will create a sufficient set of simulated Sensor Data Records (SDR) so that the ocean component of the VIIRS processing system can be tested. It will also have the ability to study the impact of instrument artifacts on derived parameter quality. The simulator will use existing resources available to generate the geolocation information and to transform calibrated radiances to geophysical parameters and vice versa. In addition, the simulator will be able to introduce land features, cloud fields, and expected VIIRS instrument artifacts. The design of the simulator and its progress will be presented.
Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.
Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin
2015-01-01
Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without the artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in classification accuracy in both experiments, namely motor imagery and emotion recognition.
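The identify-by-template step of such a pipeline can be sketched as follows. This is a simplified sketch with synthetic two-channel data: the wavelet stage is omitted, scikit-learn's FastICA stands in for the ICA used, and the "a priori artifact information" is reduced to a previously recorded artifact template matched by correlation.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 2, 1000)
neural = np.sin(2 * np.pi * 10 * t)                   # stand-in neural rhythm
artifact = (np.sin(2 * np.pi * t) > 0.95) * 5.0       # stand-in blink-like spikes
X = np.c_[neural + 0.5 * artifact, neural - 0.3 * artifact]  # two "channels"

# Separate components (the paper applies ICA to wavelet-transformed data;
# that stage is omitted in this sketch)
ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)

# A priori artifact information: correlate components with a known template
template = artifact
corrs = [abs(np.corrcoef(S[:, i], template)[0, 1]) for i in range(2)]
bad = int(np.argmax(corrs))

# Zero the artifact component and reconstruct artifact-free channels
S_clean = S.copy()
S_clean[:, bad] = 0
X_clean = ica.inverse_transform(S_clean)
print("removed component", bad)
```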
Smart Data Infrastructure: The Sixth Generation of Mediation for Data Science
NASA Astrophysics Data System (ADS)
Fox, P. A.
2014-12-01
In the emergent "fourth paradigm" (data-driven) science, the scientific method is enhanced by the integration of significant data sources into the practice of scientific research. To address Big Science, there are challenges in understanding the role of data in enabling researchers to attack not just disciplinary issues, but also the system-level, large-scale, and transdisciplinary global scientific challenges facing society. Recognizing that the volume of data is only one of many dimensions to be considered, there is a clear need for improved data infrastructures to mediate data and information exchange, which we contend will need to be powered by semantic technologies. One clear need is to provide computational approaches for researchers to discover appropriate data resources, rapidly integrate data collections from heterogeneous resources or multiple data sets, and inter-compare results to allow generation and validation of hypotheses. Another trend is toward automated tools that allow researchers to better find and reuse data that they currently don't know they need, let alone know how to find. Again, semantic technologies will be required. Finally, to turn data analytics from "art to science", technical solutions are needed for cross-dataset validation, reproducibility studies on data-driven results, and the concomitant citation of data products allowing recognition for those who curate and share important data resources.
Gold, Carl A.; Marchant, Natalie L.; Koutstaal, Wilma; Schacter, Daniel L.; Budson, Andrew E.
2012-01-01
The presence or absence of conceptual information in pictorial stimuli may explain the mixed findings of previous studies of false recognition in patients with mild Alzheimer’s disease (AD). To test this hypothesis, 48 patients with AD were compared to 48 healthy older adults on a recognition task first described by Koutstaal et al. (2003). Participants studied and were tested on their memory for categorized ambiguous pictures of common objects. The presence of conceptual information at study and/or test was manipulated by providing or withholding disambiguating semantic labels. Analyses focused on testing two competing theories. The semantic encoding hypothesis, which posits that inter-item perceptual details are not encoded by AD patients when conceptual information is present in the stimuli, was not supported by the findings. In contrast, the conceptual fluency hypothesis was supported. Enhanced conceptual fluency at test dramatically shifted AD patients to a more liberal response bias, raising their false recognition. These results suggest that patients with AD rely on the fluency of test items in making recognition memory decisions. We speculate that AD patients’ over-reliance upon fluency may be attributable to (1) dysfunction of the hippocampus, disrupting recollection, and/or (2) dysfunction of prefrontal cortex, disrupting post-retrieval processes. PMID:17573074
Linear multivariate evaluation models for spatial perception of soundscape.
Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu
2015-11-01
Soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understandings. The case of spatial perception is significant to soundscape. However, previous studies on the auditory spatial perception of the soundscape environment have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments for subjective spatial perception (SSP), an analysis relating the semantic parameters, the inter-aural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (Leq), dynamics (D), and SSP is introduced to verify the independent effect of each parameter and to re-determine some of their possible relationships. The results show that the more noisiness the listeners perceived, the worse their spatial awareness, while the closer and more directional the sound-source image variations, dynamics, and numbers of sound sources in the soundscape are, the better the spatial awareness. Thus, the sensations of roughness, sound intensity, and transient dynamics, and the values of Leq and IACC, have a suitable range for better spatial perception. Better spatial awareness seems to promote the listeners' preference slightly. Finally, setting SSPs as functions of the semantic parameters and Leq-D-IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.
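A linear multivariate model of this form can be fit by ordinary least squares. The sketch below regresses SSP on Leq, D, and IACC; all sample values are invented for illustration and are not the paper's 21 binaural recordings or its fitted coefficients.

```python
import numpy as np

# Hypothetical soundscape samples: Leq (dBA), dynamics D (dB), IACC, and
# mean subjective spatial perception (SSP) scores -- illustrative values only
Leq  = np.array([55, 60, 65, 70, 75, 62, 68, 58], dtype=float)
D    = np.array([8, 10, 12, 9, 7, 11, 10, 9], dtype=float)
IACC = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.45, 0.55, 0.35])
SSP  = np.array([3.8, 3.6, 3.2, 2.9, 2.4, 3.4, 3.0, 3.7])

# Linear multivariate model: SSP = b0 + b1*Leq + b2*D + b3*IACC
A = np.c_[np.ones_like(Leq), Leq, D, IACC]
coef, *_ = np.linalg.lstsq(A, SSP, rcond=None)
pred = A @ coef
r2 = 1 - ((SSP - pred) ** 2).sum() / ((SSP - SSP.mean()) ** 2).sum()
print("coefficients:", np.round(coef, 3), "R^2:", round(r2, 3))
```

Note that with correlated predictors (Leq and IACC tend to covary), individual coefficients can be unstable even when the fit is good, which is one reason the paper verifies each parameter's independent effect.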
Makeyev, Oleksandr; Besio, Walter G.
2016-01-01
Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933
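The truncation-error cancellation behind the (4n + 1)-point method can be illustrated numerically for the constant inter-ring distances tripolar (n = 2) case, where the classical estimate is Δv ≈ [16(A(r) − v0) − (A(2r) − v0)]/(3r²), with A(r) the potential averaged over a ring of radius r. The test potential below is illustrative, and the paper's modified method for variable inter-ring distances is not reproduced.

```python
import numpy as np

def ring_avg(v, x0, y0, r, m=4096):
    """Average of v on a ring of radius r (models a concentric ring electrode)."""
    th = np.linspace(0, 2 * np.pi, m, endpoint=False)
    return v(x0 + r * np.cos(th), y0 + r * np.sin(th)).mean()

v = lambda x, y: x**4 + y**4                 # smooth test potential
lap = lambda x, y: 12 * x**2 + 12 * y**2     # its exact Laplacian
x0, y0, r = 1.0, 1.0, 0.05
v0 = v(x0, y0)

# Bipolar (disc + one ring): Delta v ~ 4(A(r) - v0)/r^2, O(r^2) truncation error
bipolar = 4 * (ring_avg(v, x0, y0, r) - v0) / r**2

# Tripolar (disc + two rings): the next Taylor term cancels, O(r^4) error
tripolar = (16 * (ring_avg(v, x0, y0, r) - v0)
            - (ring_avg(v, x0, y0, 2 * r) - v0)) / (3 * r**2)

exact = lap(x0, y0)
print(abs(bipolar - exact), abs(tripolar - exact))
```

For this quartic potential the tripolar combination is exact up to numerical error, while the bipolar estimate carries an O(r²) bias, mirroring the accuracy gains reported for adding rings.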
A Community-Driven Workflow Recommendations and Reuse Infrastructure
NASA Astrophysics Data System (ADS)
Zhang, J.; Votava, P.; Lee, T. J.; Lee, C.; Xiao, S.; Nemani, R. R.; Foster, I.
2013-12-01
Aiming to connect the Earth science community and accelerate the rate of discovery, NASA Earth Exchange (NEX) has established an online repository and platform so that researchers can publish and share their tools and models with colleagues. In recent years, workflow has become a popular technique at NEX for Earth scientists to define executable multi-step procedures for data processing and analysis. The ability to discover and reuse knowledge (such as sharable workflows) is critical to the future advancement of science. However, as reported in our earlier study, the reusability of scientific artifacts is currently very low. Scientists often do not feel confident in using other researchers' tools and utilities. One major reason is that researchers are often unaware of the existence of others' data preprocessing processes. Meanwhile, researchers often do not have time to fully document the processes and expose them to others in a standard way. These issues cannot be overcome by the existing workflow search technologies used in NEX and other data projects. Therefore, this project aims to develop a proactive recommendation technology based on collective NEX user behaviors. In this way, we aim to promote and encourage process and workflow reuse within NEX. In particular, we focus on leveraging peer scientists' best practices to support the recommendation of artifacts developed by others. Our underlying theoretical foundation is rooted in social cognitive theory, which holds that people learn by watching what others do. Our fundamental hypothesis is that sharable artifacts have network properties, much like humans in social networks. More generally, reusable artifacts form various types of social relationships (ties), and may be viewed as forming what organizational sociologists who use network analysis to study human interactions call a 'knowledge network.'
In particular, we will tackle two research questions: R1: What hidden knowledge may be extracted from usage history to help Earth scientists better understand existing artifacts and how to use them in a proper manner? R2: Informed by insights derived from their computing contexts, how could such hidden knowledge be used to facilitate artifact reuse by Earth scientists? Our study of the two research questions will provide answers to three technical questions aiming to assist NEX users during workflow development: 1) How to determine what topics interest the researcher? 2) How to find appropriate artifacts? and 3) How to advise the researcher in artifact reuse? In this paper, we report our ongoing efforts to leverage social networking theory and analysis techniques to provide dynamic advice on artifact reuse to NEX users based on their surrounding contexts. As a proof of concept, we have designed and developed a plug-in to the VisTrails workflow design tool. When users develop workflows using VisTrails, our plug-in proactively recommends the most relevant sub-workflows to them.
On the Importance of Small Ice Crystals in Tropical Anvil Cirrus
NASA Technical Reports Server (NTRS)
Jensen, E. J.; Lawson, P.; Baker, B.; Pilson, B.; Mo, Q.; Heymsfield, A. J.; Bansemer, A.; Bui, T. P.; McGill, M.; Hlavka, D.;
2009-01-01
In situ measurements of ice crystal concentrations and sizes made with aircraft instrumentation over the past two decades have often indicated the presence of numerous relatively small (< 50 μm diameter) crystals in cirrus clouds. Further, these measurements frequently indicate that small crystals account for a large fraction of the extinction in cirrus clouds. The fact that the instruments used to make these measurements, such as the Forward Scattering Spectrometer Probe (FSSP) and the Cloud Aerosol Spectrometer (CAS), ingest ice crystals into the sample volume through inlets has led to suspicion that the indications of numerous small crystals could be artifacts of large-crystal shattering on the instrument inlets. We present new aircraft measurements in anvil cirrus sampled during the Tropical Composition, Cloud, and Climate Coupling (TC4) campaign with the 2-Dimensional Stereo (2D-S) probe, which detects particles as small as 10 μm. The 2D-S has detector "arms" instead of an inlet tube. Since the 2D-S probe surfaces are much further from the sample volume than is the case for the instruments with inlets, it is expected that 2D-S will be less susceptible to shattering artifacts. In addition, particle inter-arrival times are used to identify and remove shattering artifacts that occur even with the 2D-S probe. The number of shattering artifacts identified by the 2D-S inter-arrival time analysis ranges from a negligible contribution to an order of magnitude or more enhancement in apparent ice concentration over the natural ice concentration, depending on the abundance of large crystals and the natural small-crystal concentration. The 2D-S measurements in tropical anvil cirrus suggest that natural small-crystal concentrations are typically one to two orders of magnitude lower than those inferred from CAS. 
The strong correlation between the CAS/2D-S ratio of small-crystal concentrations and large-crystal concentration suggests that the discrepancy is likely caused by shattering of large crystals on the CAS inlet. We argue that past measurements with CAS in cirrus with large crystals present may contain errors due to crystal shattering, and past conclusions derived from these measurements may need to be revisited. Further, we present correlations between CAS spurious concentration and 2D-S large-crystal mass from spatially uniform anvil cirrus sampling periods as an approximate guide for estimating the quantitative impact of large-crystal shattering on CAS concentrations in previous datasets. We use radiative transfer calculations to demonstrate that in the maritime anvil cirrus sampled during TC4, small crystals indicated by 2D-S contribute relatively little cloud extinction, radiative forcing, or radiative heating in the anvils, regardless of anvil age or vertical location in the clouds. While 2D-S ice concentrations in fresh anvil cirrus may often exceed 1 cm⁻³, and are observed to exceed 10 cm⁻³ in turrets, they are typically 0.1 cm⁻³ and rarely exceed 1 cm⁻³ (<1.4% of the time) in aged anvil cirrus. We hypothesize that isolated occurrences of higher ice concentrations in aged anvil cirrus may be caused by ice nucleation driven by either small-scale convection or gravity waves. It appears that the numerous small crystals detrained from convective updrafts do not persist in the anvil cirrus sampled during TC4.
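The inter-arrival-time screening can be sketched on synthetic data: natural crystals arrive roughly as a Poisson process, while shattering produces tight bursts of fragments, so particles separated from a neighbor by less than a threshold are rejected. The arrival rates, burst sizes, and the 10⁻⁴ s threshold below are invented for illustration, not the TC4 values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical particle arrival times: natural crystals as a Poisson process,
# plus tight bursts of shattered fragments behind some of them
natural = np.cumsum(rng.exponential(1e-2, size=300))        # ~100 s^-1 rate
bursts = np.concatenate([t + np.sort(rng.uniform(0, 1e-5, 5))
                         for t in natural[::30]])           # fragment clusters
times = np.sort(np.concatenate([natural, bursts]))

# Shattering filter: reject particles separated from their predecessor or
# successor by less than a threshold inter-arrival time
dt = np.diff(times)
short_prev = np.r_[False, dt < 1e-4]   # too close to previous particle
short_next = np.r_[dt < 1e-4, False]   # too close to next particle
keep = ~(short_prev | short_next)

print(len(times), "particles;", int(keep.sum()), "kept after screening")
```

Note that the filter also discards the occasional genuine pair of closely spaced natural crystals, which is why such screening trades a small loss of real counts for the removal of large artifact enhancements.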