NASA Astrophysics Data System (ADS)
Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William
2009-02-01
Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently, visual retrieval alone does not achieve the performance necessary for real-world clinical applications. Most of the common visual retrieval techniques have a MAP (Mean Average Precision) of around 2-3%, which is much lower than that achieved using textual retrieval (MAP=29%). Advanced machine learning techniques, together with good training data, have been shown to improve the performance of visual retrieval systems in the past. Multimodal retrieval (basing retrieval on both visual and textual information) can achieve better results than purely visual, but only when carefully applied. In many cases, multimodal retrieval systems performed even worse than purely textual retrieval systems. On the other hand, some multimodal retrieval systems demonstrated significantly increased early precision, which has been shown to be a desirable behavior in real-world systems.
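The MAP figures quoted above can be made concrete with a small sketch. The following Python snippet is illustrative only (it is not the ImageCLEF evaluation code, and the function names are assumptions): it computes average precision per query and averages it over queries.

```python
# Illustrative sketch of Mean Average Precision (MAP), the metric quoted above.

def average_precision(ranked_ids, relevant_ids):
    """Average precision for one query: mean of precision@k at each relevant hit."""
    relevant_ids = set(relevant_ids)
    hits, precisions = 0, []
    for k, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(runs):
    """MAP over all queries; `runs` is a list of (ranked_ids, relevant_ids) pairs."""
    return sum(average_precision(ranked, rel) for ranked, rel in runs) / len(runs)

# Example: a visual-only run with MAP around 0.02-0.03 versus a textual run
# around 0.29 would reflect the gap reported in the abstract above.
```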
Data Visualization in Information Retrieval and Data Mining (SIG VIS).
ERIC Educational Resources Information Center
Efthimiadis, Efthimis
2000-01-01
Presents abstracts that discuss using data visualization for information retrieval and data mining, including immersive information space and spatial metaphors; spatial data using multi-dimensional matrices with maps; TREC (Text Retrieval Conference) experiments; users' information needs in cartographic information retrieval; and users' relevance…
Diversification of visual media retrieval results using saliency detection
NASA Astrophysics Data System (ADS)
Muratov, Oleg; Boato, Giulia; De Natale, Francesco G. B.
2013-03-01
Diversification of retrieval results allows for better and faster search. Recently, different methods have been proposed for diversifying image retrieval results, mainly utilizing text information and techniques imported from the natural language processing domain. However, images contain visual information that cannot be fully described in text, so the use of visual features is inevitable. Visual saliency conveys information about the main object of an image, implicitly encoded by humans when creating visual content. For this reason, it is natural to exploit this information for the task of diversifying the content. In this work, we study whether visual saliency can be used for diversification and propose a method for re-ranking image retrieval results using saliency. The evaluation has shown that the use of saliency information results in higher diversity of retrieval results.
Mobile medical visual information retrieval.
Depeursinge, Adrien; Duc, Samuel; Eggel, Ivan; Müller, Henning
2012-01-01
In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space were developed to query, via web services, a medical information retrieval engine that optimizes the amount of data transferred over wireless connections. Visual and textual retrieval engines with state-of-the-art performance were integrated. The results obtained show good usability of the software. Future use in clinical environments has the potential to increase the quality of patient care through bedside access to the medical literature in context.
Information Visualization and Proposing New Interface for Movie Retrieval System (IMDB)
ERIC Educational Resources Information Center
Etemadpour, Ronak; Masood, Mona; Belaton, Bahari
2010-01-01
This research studies the development of a new prototype of visualization in support of movie retrieval. The goal of information visualization is unveiling of large amounts of data or abstract data set using visual presentation. With this knowledge the main goal is to develop a 2D presentation of information on movies from the IMDB (Internet Movie…
Visual working memory buffers information retrieved from visual long-term memory.
Fukuda, Keisuke; Woodman, Geoffrey F
2017-05-16
Human memory is thought to consist of long-term storage and short-term storage mechanisms, the latter known as working memory. Although it has long been assumed that information retrieved from long-term memory is represented in working memory, we lack neural evidence for this and need neural measures that allow us to watch this retrieval into working memory unfold with high temporal resolution. Here, we show that human electrophysiology can be used to track information as it is brought back into working memory during retrieval from long-term memory. Specifically, we found that the retrieval of information from long-term memory was limited to just a few simple objects' worth of information at once, and elicited a pattern of neurophysiological activity similar to that observed when people encode new information into working memory. Our findings suggest that working memory is where information is buffered when being retrieved from long-term memory and reconcile current theories of memory retrieval with classic notions about the memory mechanisms involved.
Visual working memory buffers information retrieved from visual long-term memory
Fukuda, Keisuke; Woodman, Geoffrey F.
2017-01-01
Human memory is thought to consist of long-term storage and short-term storage mechanisms, the latter known as working memory. Although it has long been assumed that information retrieved from long-term memory is represented in working memory, we lack neural evidence for this and need neural measures that allow us to watch this retrieval into working memory unfold with high temporal resolution. Here, we show that human electrophysiology can be used to track information as it is brought back into working memory during retrieval from long-term memory. Specifically, we found that the retrieval of information from long-term memory was limited to just a few simple objects’ worth of information at once, and elicited a pattern of neurophysiological activity similar to that observed when people encode new information into working memory. Our findings suggest that working memory is where information is buffered when being retrieved from long-term memory and reconcile current theories of memory retrieval with classic notions about the memory mechanisms involved. PMID:28461479
Data Discretization for Novel Relationship Discovery in Information Retrieval.
ERIC Educational Resources Information Center
Benoit, G.
2002-01-01
Describes an information retrieval, visualization, and manipulation model which offers the user multiple ways to exploit the retrieval set, based on weighted query terms, via an interactive interface. Outlines the mathematical model and describes an information retrieval application built on the model to search structured and full-text files.…
Combining textual and visual information for image retrieval in the medical domain.
Gkoufas, Yiannis; Morou, Anna; Kalamboukis, Theodore
2011-01-01
In this article we have assembled the experience obtained from our participation in the ImageCLEF evaluation task over the past two years. We have explored the use of linear combinations for image retrieval by combining visual and textual sources of information about images. From our experiments we conclude that a mixed retrieval technique, which applies textual and visual retrieval in an interchangeably repeated manner, improves performance while overcoming the scalability limitations of visual retrieval. In particular, the mean average precision (MAP) increased from 0.01 to 0.15 and 0.087 for the 2009 and 2010 data, respectively, when content-based image retrieval (CBIR) is performed on the top 1000 results from textual retrieval based on natural language processing (NLP).
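A minimal sketch of the mixed strategy described above, assuming placeholder functions `text_search` and `visual_similarity` and a precomputed `image_features` map (none of these come from the authors' system): textual retrieval runs first, and CBIR is applied only to its top 1000 hits.

```python
# Hedged sketch: rerank the top-k textual hits by visual (CBIR) similarity.

def rerank_with_cbir(query_text, query_image_features, text_search,
                     visual_similarity, image_features, k=1000):
    """Return the top-k textual hits re-ordered by visual similarity."""
    top_k = text_search(query_text)[:k]                 # textual retrieval first
    scored = [(doc_id, visual_similarity(query_image_features,
                                         image_features[doc_id]))
              for doc_id in top_k]                       # CBIR restricted to top-k
    return [doc_id for doc_id, _ in
            sorted(scored, key=lambda pair: pair[1], reverse=True)]
```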
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1985-01-01
A collection of presentation visuals associated with the companion report entitled KARL: A Knowledge-Assisted Retrieval Language is presented. Information is given on data retrieval, natural language database front ends, generic design objectives, processing capabilities, and the query processing cycle.
36 CFR 1194.31 - Functional performance criteria.
Code of Federal Regulations, 2011 CFR
2011-07-01
... information retrieval that does not require user vision shall be provided, or support for assistive technology... and information retrieval that does not require visual acuity greater than 20/70 shall be provided in... information retrieval that does not require user hearing shall be provided, or support for assistive...
36 CFR 1194.31 - Functional performance criteria.
Code of Federal Regulations, 2014 CFR
2014-07-01
... information retrieval that does not require user vision shall be provided, or support for assistive technology... and information retrieval that does not require visual acuity greater than 20/70 shall be provided in... information retrieval that does not require user hearing shall be provided, or support for assistive...
36 CFR 1194.31 - Functional performance criteria.
Code of Federal Regulations, 2012 CFR
2012-07-01
... information retrieval that does not require user vision shall be provided, or support for assistive technology... and information retrieval that does not require visual acuity greater than 20/70 shall be provided in... information retrieval that does not require user hearing shall be provided, or support for assistive...
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Gallagher, Mary C.
1985-01-01
This Working Paper Series entry represents a collection of presentation visuals associated with the companion report entitled An Innovative, Multidisciplinary Educational Program in Interactive Information Storage and Retrieval, USL/DBMS NASA/RECON Working Paper Series report number DBMS.NASA/RECON-12. The project objectives are to develop a set of transportable, hands-on, data base management courses for science and engineering students to facilitate their utilization of information storage and retrieval programs.
Karlsson, Kristina; Sikström, Sverker; Willander, Johan
2013-01-01
The semantic content, or the meaning, is the essence of autobiographical memories. In comparison to previous research, which has mainly focused on the phenomenological experience and the age distribution of retrieved events, the present study provides a novel view on the retrieval of event information by quantifying the information as semantic representations. We investigated the semantic representation of sensory cued autobiographical events and studied the modality hierarchy within the multimodal retrieval cues. The experiment comprised a cued recall task, where the participants were presented with visual, auditory, olfactory or multimodal retrieval cues and asked to recall autobiographical events. The results indicated that the three different unimodal retrieval cues generate significantly different semantic representations. Further, the auditory and the visual modalities contributed the most to the semantic representation of the multimodally retrieved events. Finally, the semantic representation of the multimodal condition could be described as a combination of the three unimodal conditions. In conclusion, these results suggest that the meaning of the retrieved event information depends on the modality of the retrieval cues.
Karlsson, Kristina; Sikström, Sverker; Willander, Johan
2013-01-01
The semantic content, or the meaning, is the essence of autobiographical memories. In comparison to previous research, which has mainly focused on the phenomenological experience and the age distribution of retrieved events, the present study provides a novel view on the retrieval of event information by quantifying the information as semantic representations. We investigated the semantic representation of sensory cued autobiographical events and studied the modality hierarchy within the multimodal retrieval cues. The experiment comprised a cued recall task, where the participants were presented with visual, auditory, olfactory or multimodal retrieval cues and asked to recall autobiographical events. The results indicated that the three different unimodal retrieval cues generate significantly different semantic representations. Further, the auditory and the visual modalities contributed the most to the semantic representation of the multimodally retrieved events. Finally, the semantic representation of the multimodal condition could be described as a combination of the three unimodal conditions. In conclusion, these results suggest that the meaning of the retrieved event information depends on the modality of the retrieval cues. PMID:24204561
Visual imagery in autobiographical memory: The role of repeated retrieval in shifting perspective
Butler, Andrew C.; Rice, Heather J.; Wooldridge, Cynthia L.; Rubin, David C.
2016-01-01
Recent memories are generally recalled from a first-person perspective whereas older memories are often recalled from a third-person perspective. We investigated how repeated retrieval affects the availability of visual information, and whether it could explain the observed shift in perspective with time. In Experiment 1, participants performed mini-events and nominated memories of recent autobiographical events in response to cue words. Next, they described their memory for each event and rated its phenomenological characteristics. Over the following three weeks, they repeatedly retrieved half of the mini-event and cue-word memories. No instructions were given about how to retrieve the memories. In Experiment 2, participants were asked to adopt either a first- or third-person perspective during retrieval. One month later, participants retrieved all of the memories and again provided phenomenology ratings. When first-person visual details from the event were repeatedly retrieved, this information was retained better and the shift in perspective was slowed. PMID:27064539
Willander, Johan; Sikström, Sverker; Karlsson, Kristina
2015-01-01
Previous studies on autobiographical memory have focused on unimodal retrieval cues (i.e., cues pertaining to one modality). However, from an ecological perspective multimodal cues (i.e., cues pertaining to several modalities) are highly important to investigate. In the present study we investigated age distributions and experiential ratings of autobiographical memories retrieved with unimodal and multimodal cues. Sixty-two participants were randomized to one of four cue-conditions: visual, olfactory, auditory, or multimodal. The results showed that the peak of the distributions depends on the modality of the retrieval cue. The results indicated that multimodal retrieval seemed to be driven by visual and auditory information to a larger extent and to a lesser extent by olfactory information. Finally, no differences were observed in the number of retrieved memories or experiential ratings across the four cue-conditions.
36 CFR § 1194.31 - Functional performance criteria.
Code of Federal Regulations, 2013 CFR
2013-07-01
... information retrieval that does not require user vision shall be provided, or support for assistive technology... and information retrieval that does not require visual acuity greater than 20/70 shall be provided in... information retrieval that does not require user hearing shall be provided, or support for assistive...
NASA Astrophysics Data System (ADS)
Rahman, Md M.; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.
2015-03-01
This paper presents a novel approach to biomedical image retrieval by mapping image regions to local concepts and representing images in a weighted entropy-based concept feature space. The term concept refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a Region-Of-Interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a post-processing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles from four different collections.
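A minimal sketch of the entropy-based "visualness" measure mentioned above, assuming 8-bit grayscale patches; the names are illustrative and this is not the authors' implementation.

```python
# Sketch: Shannon entropy of pixel values in an image patch as a visualness weight.
import numpy as np

def patch_entropy(patch, bins=256):
    """Shannon entropy (in bits) of the pixel-value distribution of an 8-bit patch."""
    hist, _ = np.histogram(patch.ravel(), bins=bins, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# A concept histogram could then be reweighted by the mean entropy of the
# patches assigned to each concept before computing image similarity.
```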
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Liu, I-Hsiung
1985-01-01
This Working Paper Series entry represents a collection of presentation visuals associated with the companion report entitled Natural Language Query System Design for Interactive Information Storage and Retrieval Systems, USL/DBMS NASA/RECON Working Paper Series report number DBMS.NASA/RECON-17.
Episodic Memory Retrieval Functionally Relies on Very Rapid Reactivation of Sensory Information.
Waldhauser, Gerd T; Braun, Verena; Hanslmayr, Simon
2016-01-06
Episodic memory retrieval is assumed to rely on the rapid reactivation of sensory information that was present during encoding, a process termed "ecphory." We investigated the functional relevance of this scarcely understood process in two experiments in human participants. We presented stimuli to the left or right of fixation at encoding, followed by an episodic memory test with centrally presented retrieval cues. This allowed us to track the reactivation of lateralized sensory memory traces during retrieval. Successful episodic retrieval led to a very early (∼100-200 ms) reactivation of lateralized alpha/beta (10-25 Hz) electroencephalographic (EEG) power decreases in the visual cortex contralateral to the visual field at encoding. Applying rhythmic transcranial magnetic stimulation to interfere with early retrieval processing in the visual cortex led to decreased episodic memory performance specifically for items encoded in the visual field contralateral to the site of stimulation. These results demonstrate, for the first time, that episodic memory functionally relies on very rapid reactivation of sensory information. Remembering personal experiences requires a "mental time travel" to revisit sensory information perceived in the past. This process is typically described as a controlled, relatively slow process. However, by using electroencephalography to measure neural activity with a high time resolution, we show that such episodic retrieval entails a very rapid reactivation of sensory brain areas. Using transcranial magnetic stimulation to alter brain function during retrieval revealed that this early sensory reactivation is causally relevant for conscious remembering. These results give first neural evidence for a functional, preconscious component of episodic remembering. This provides new insight into the nature of human memory and may help in the understanding of psychiatric conditions that involve the automatic intrusion of unwanted memories. Copyright © 2016 the authors 0270-6474/16/360251-10$15.00/0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choo, Jaegul; Kim, Hannah; Clarkson, Edward
In this paper, we present an interactive visual information retrieval and recommendation system, called VisIRR, for large-scale document discovery. VisIRR effectively combines the paradigms of (1) a passive pull through query processes for retrieval and (2) an active push that recommends items of potential interest to users based on their preferences. Equipped with an efficient dynamic query interface against a large-scale corpus, VisIRR organizes the retrieved documents into high-level topics and visualizes them in a 2D space, representing the relationships among the topics along with their keyword summary. In addition, based on interactive personalized preference feedback with regard to documents, VisIRR provides document recommendations from the entire corpus, which are beyond the retrieved sets. Such recommended documents are visualized in the same space as the retrieved documents, so that users can seamlessly analyze both existing and newly recommended ones. This article presents novel computational methods, which make these integrated representations and fast interactions possible for a large-scale document corpus. We illustrate how the system works by providing detailed usage scenarios. Finally, we present preliminary user study results for evaluating the effectiveness of the system.
Choo, Jaegul; Kim, Hannah; Clarkson, Edward; ...
2018-01-31
In this paper, we present an interactive visual information retrieval and recommendation system, called VisIRR, for large-scale document discovery. VisIRR effectively combines the paradigms of (1) a passive pull through query processes for retrieval and (2) an active push that recommends items of potential interest to users based on their preferences. Equipped with an efficient dynamic query interface against a large-scale corpus, VisIRR organizes the retrieved documents into high-level topics and visualizes them in a 2D space, representing the relationships among the topics along with their keyword summary. In addition, based on interactive personalized preference feedback with regard to documents, VisIRR provides document recommendations from the entire corpus, which are beyond the retrieved sets. Such recommended documents are visualized in the same space as the retrieved documents, so that users can seamlessly analyze both existing and newly recommended ones. This article presents novel computational methods, which make these integrated representations and fast interactions possible for a large-scale document corpus. We illustrate how the system works by providing detailed usage scenarios. Finally, we present preliminary user study results for evaluating the effectiveness of the system.
Content-Based Medical Image Retrieval
NASA Astrophysics Data System (ADS)
Müller, Henning; Deserno, Thomas M.
This chapter details the necessity for alternative access concepts to the currently mainly text-based methods in medical information retrieval. This need is partly due to the large amount of visual data produced, the increasing variety of medical imaging data, and changing user patterns. The stored visual data contain large amounts of unused information that, if well exploited, can help diagnosis, teaching and research. The chapter briefly reviews the history of image retrieval and its general methods before focusing on technologies that have been developed in the medical domain. We also discuss the evaluation of medical content-based image retrieval (CBIR) systems and conclude by pointing out their strengths, gaps, and further developments. As examples, the MedGIFT project and the Image Retrieval in Medical Applications (IRMA) framework are presented.
Rahman, Md Mahmudur; Antani, Sameer K; Demner-Fushman, Dina; Thoma, George R
2015-10-01
This article presents an approach to biomedical image retrieval by mapping image regions to local concepts where images are represented in a weighted entropy-based concept feature space. The term "concept" refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature.
Rahman, Md. Mahmudur; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.
2015-01-01
This article presents an approach to biomedical image retrieval by mapping image regions to local concepts where images are represented in a weighted entropy-based concept feature space. The term “concept” refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a region-of-interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a postprocessing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on two different data sets, which are collected from open access biomedical literature. PMID:26730398
van Weelden, Lisanne; Schilperoord, Joost; Swerts, Marc; Pecher, Diane
2015-01-01
Visual information contributes fundamentally to the process of object categorization. The present study investigated whether the degree of activation of visual information in this process is dependent on the contextual relevance of this information. We used the Proactive Interference (PI-release) paradigm. In four experiments, we manipulated the information by which objects could be categorized and subsequently be retrieved from memory. The pattern of PI-release showed that if objects could be stored and retrieved both by (non-perceptual) semantic and (perceptual) shape information, then shape information was overruled by semantic information. If, however, semantic information could not be (satisfactorily) used to store and retrieve objects, then objects were stored in memory in terms of their shape. The latter effect was found to be strongest for objects from identical semantic categories.
The Hype over Hyperbolic Browsers.
ERIC Educational Resources Information Center
Allen, Maryellen Mott
2002-01-01
Considers complaints about the usability in the human-computer interaction aspect of information retrieval and discusses information visualization, the Online Library of Information Visualization Environments, hyperbolic information structure, subject searching, real-world applications, relational databases and hyperbolic trees, and the future of…
Modeling the Time Course of Feature Perception and Feature Information Retrieval
ERIC Educational Resources Information Center
Kent, Christopher; Lamberts, Koen
2006-01-01
Three experiments investigated whether retrieval of information about different dimensions of a visual object varies as a function of the perceptual properties of those dimensions. The experiments involved two perception-based matching tasks and two retrieval-based matching tasks. A signal-to-respond methodology was used in all tasks. A stochastic…
Can Visualizing Document Space Improve Users' Information Foraging?
ERIC Educational Resources Information Center
Song, Min
1998-01-01
This study shows how users access relevant information in a visualized document space and determine whether BiblioMapper, a visualization tool, strengthens an information retrieval (IR) system and makes it more usable. BiblioMapper, developed for a CISI collection, was evaluated by accuracy, time, and user satisfaction. Users' navigation…
Encoding Modality Can Affect Memory Accuracy via Retrieval Orientation
ERIC Educational Resources Information Center
Pierce, Benton H.; Gallo, David A.
2011-01-01
Research indicates that false memory is lower following visual than auditory study, potentially because visual information is more distinctive. In the present study we tested the extent to which retrieval orientation can cause a modality effect on memory accuracy. Participants studied unrelated words in different modalities, followed by criterial…
The Ecological Approach to Text Visualization.
ERIC Educational Resources Information Center
Wise, James A.
1999-01-01
Presents both theoretical and technical bases on which to build a "science of text visualization." The Spatial Paradigm for Information Retrieval and Exploration (SPIRE) text-visualization system, which images information from free-text documents as natural terrains, serves as an example of the "ecological approach" in its visual metaphor, its…
Knowledge Retrieval Solutions.
ERIC Educational Resources Information Center
Khan, Kamran
1998-01-01
Excalibur RetrievalWare offers true knowledge retrieval solutions. Its fundamental technologies, Adaptive Pattern Recognition Processing and Semantic Networks, have capabilities for knowledge discovery and knowledge management of full-text, structured and visual information. The software delivers a combination of accuracy, extensibility,…
The effect of mood-context on visual recognition and recall memory.
Robinson, Sarita J; Rollings, Lucy J L
2011-01-01
Although it is widely known that memory is enhanced when encoding and retrieval occur in the same state, the impact of elevated stress/arousal is less understood. This study explores the effects of mood-dependent memory on visual recognition and recall of material memorized either in a neutral mood or under higher stress/arousal levels. Participants' (N = 60) recognition and recall were assessed while they experienced either the same or a mismatched mood at retrieval. The results suggested that both visual recognition and recall memory were higher when participants experienced the same mood at encoding and retrieval compared with those who experienced a mismatch in mood context between encoding and retrieval. These findings offer support for a mood dependency effect on both the recognition and recall of visual information.
Webb, Christina E.; Turney, Indira C.; Dennis, Nancy A.
2017-01-01
The current study used a novel scene paradigm to investigate the role of encoding schemas on memory. Specifically, the study examined the influence of a strong encoding schema on retrieval of both schematic and non-schematic information, as well as false memories for information associated with the schema. Additionally, the separate roles of recollection and familiarity in both veridical and false memory retrieval were examined. The study identified several novel results. First, while many common neural regions mediated both schematic and non-schematic retrieval success, schematic recollection exhibited greater activation in visual cortex and hippocampus, regions commonly shown to mediate detailed retrieval. More effortful cognitive control regions in the prefrontal and parietal cortices, on the other hand, supported non-schematic recollection, while lateral temporal cortices supported familiarity-based retrieval of non-schematic items. Second, both true and false recollection, as well as familiarity, were mediated by activity in left middle temporal gyrus, a region associated with semantic processing and retrieval of schematic gist. Moreover, activity in this region was greater for both false recollection and false familiarity, suggesting a greater reliance on lateral temporal cortices for retrieval of illusory memories, irrespective of memory strength. Consistent with previous false memory studies, visual cortex showed increased activity for true compared to false recollection, suggesting that visual cortices are critical for distinguishing between previously viewed targets and related lures at retrieval. Additionally, the absence of common visual activity between true and false retrieval suggests that, unlike previous studies utilizing visual stimuli, when false memories are predicated on schematic gist and not perceptual overlap, there is little reliance on visual processes during false memory retrieval. Finally, the medial temporal lobe exhibited an interesting dissociation, showing greater activity for true compared to false recollection, as well as for false compared to true familiarity. These results provided an indication as to how different types of items are retrieved when studied within a highly schematic context. Results both replicate and extend previous true and false memory findings, supporting the Fuzzy Trace Theory. PMID:27697593
Webb, Christina E; Turney, Indira C; Dennis, Nancy A
2016-12-01
The current study used a novel scene paradigm to investigate the role of encoding schemas on memory. Specifically, the study examined the influence of a strong encoding schema on retrieval of both schematic and non-schematic information, as well as false memories for information associated with the schema. Additionally, the separate roles of recollection and familiarity in both veridical and false memory retrieval were examined. The study identified several novel results. First, while many common neural regions mediated both schematic and non-schematic retrieval success, schematic recollection exhibited greater activation in visual cortex and hippocampus, regions commonly shown to mediate detailed retrieval. More effortful cognitive control regions in the prefrontal and parietal cortices, on the other hand, supported non-schematic recollection, while lateral temporal cortices supported familiarity-based retrieval of non-schematic items. Second, both true and false recollection, as well as familiarity, were mediated by activity in left middle temporal gyrus, a region associated with semantic processing and retrieval of schematic gist. Moreover, activity in this region was greater for both false recollection and false familiarity, suggesting a greater reliance on lateral temporal cortices for retrieval of illusory memories, irrespective of memory strength. Consistent with previous false memory studies, visual cortex showed increased activity for true compared to false recollection, suggesting that visual cortices are critical for distinguishing between previously viewed targets and related lures at retrieval. Additionally, the absence of common visual activity between true and false retrieval suggests that, unlike previous studies utilizing visual stimuli, when false memories are predicated on schematic gist and not perceptual overlap, there is little reliance on visual processes during false memory retrieval. Finally, the medial temporal lobe exhibited an interesting dissociation, showing greater activity for true compared to false recollection, as well as for false compared to true familiarity. These results provided an indication as to how different types of items are retrieved when studied within a highly schematic context. Results both replicate and extend previous true and false memory findings, supporting the Fuzzy Trace Theory. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Location Aware Middleware Framework for Collaborative Visual Information Discovery and Retrieval
2017-09-14
Compton, Andrew J. M. (thesis; full text available via AFIT Scholar: https://scholar.afit.edu/etd)
A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF
Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan
2016-01-01
With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local feature representations are selected for image retrieval because SIFT is more robust to changes in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on the Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba, and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
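A rough sketch, under stated assumptions, of how SIFT and SURF bag-of-visual-words histograms could be built and concatenated per image: it uses OpenCV and scikit-learn (SURF lives in the non-free opencv-contrib module), and it is not the authors' pipeline.

```python
# Sketch: separate SIFT and SURF codebooks, concatenated bag-of-visual-words signatures.
import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()
surf = cv2.xfeatures2d.SURF_create()   # non-free; availability depends on the OpenCV build

def descriptors(images, detector):
    """Collect local descriptors of all training images for one detector."""
    descs = [detector.detectAndCompute(img, None)[1] for img in images]
    return [d for d in descs if d is not None]

def build_codebook(all_descs, k=200):
    """Cluster descriptors into k visual words."""
    return KMeans(n_clusters=k, n_init=10).fit(np.vstack(all_descs))

def bovw_histogram(desc, codebook, k):
    """Normalized visual-word histogram of one image's descriptors."""
    words = codebook.predict(desc)
    hist, _ = np.histogram(words, bins=np.arange(k + 1))
    return hist / max(hist.sum(), 1)

def integrated_signature(img, sift_cb, surf_cb, k=200):
    """Concatenate the SIFT and SURF bag-of-visual-words histograms."""
    _, d_sift = sift.detectAndCompute(img, None)
    _, d_surf = surf.detectAndCompute(img, None)
    return np.concatenate([bovw_histogram(d_sift, sift_cb, k),
                           bovw_histogram(d_surf, surf_cb, k)])

# Usage sketch: sift_cb = build_codebook(descriptors(train_imgs, sift)); likewise for SURF.
```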
An Empirical Comparison of Visualization Tools To Assist Information Retrieval on the Web.
ERIC Educational Resources Information Center
Heo, Misook; Hirtle, Stephen C.
2001-01-01
Discusses problems with navigation in hypertext systems, including cognitive overload, and describes a study that tested information visualization techniques to see which best represented the underlying structure of Web space. Considers the effects of visualization techniques on user performance on information searching tasks and the effects of…
Image Location Estimation by Salient Region Matching.
Qian, Xueming; Zhao, Yisi; Han, Junwei
2015-11-01
Nowadays, the locations of images are widely used in many application scenarios involving large geo-tagged image corpora. For images that are not geographically tagged, we estimate their locations with the help of a large geo-tagged image set via content-based image retrieval. In this paper, we exploit the spatial information of useful visual words to improve image location estimation (i.e., content-based image retrieval performance). We propose to generate visual word groups by mean-shift clustering. To improve retrieval performance, a spatial constraint is utilized to encode the relative positions of visual words. We propose to generate a position descriptor for each visual word and to build a fast indexing structure for visual word groups. Experiments show the effectiveness of our proposed approach.
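A small sketch of the mean-shift grouping step mentioned above: clustering the keypoint coordinates of matched visual words into spatial groups with scikit-learn's MeanShift. The bandwidth value and function name are assumptions, not taken from the paper.

```python
# Sketch: group visual words by their 2-D keypoint positions using mean-shift.
import numpy as np
from sklearn.cluster import MeanShift

def visual_word_groups(keypoint_xy, bandwidth=30.0):
    """Cluster (x, y) keypoint positions of visual words into spatial groups.

    keypoint_xy: (N, 2) array of keypoint coordinates.
    Returns one group label per keypoint; each group can later carry a
    relative-position descriptor.
    """
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    return ms.fit_predict(np.asarray(keypoint_xy, dtype=float))
```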
ERIC Educational Resources Information Center
Cole, Charles; Mandelblatt, Bertie; Stevenson, John
2002-01-01
Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…
A Tool for the Analysis of Motion Picture Film or Video Tape.
ERIC Educational Resources Information Center
Ekman, Paul; Friesen, Wallace V.
1969-01-01
A visual information display and retrieval system (VID-R) is described for application to visual records. VID-R searches and retrieves events by time address (location) or by previously stored observations or measurements. Fields are labeled by writing discriminable binary addresses on the horizontal lines outside the normal viewing area. The…
Mobile Visual Search Based on Histogram Matching and Zone Weight Learning
NASA Astrophysics Data System (ADS)
Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong
2018-01-01
In this paper, we propose a novel image retrieval algorithm for mobile visual search. First, a short visual codebook is generated based on the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging the tf-idf weighted histogram matching and the weighting strategy in compact descriptors for visual search (CDVS). Finally, both the global descriptor matching score and the local descriptor similarity score are summed up to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.
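A hedged sketch of tf-idf weighted histogram matching between visual-word histograms, in the spirit of the local similarity score described above; it does not reproduce the actual CDVS weighting details, and all names are illustrative.

```python
# Sketch: tf-idf weighted histogram intersection between visual-word histograms.
import numpy as np

def tfidf_weights(word_hists):
    """Smoothed inverse document frequency per visual word over a set of BoVW histograms."""
    word_hists = np.asarray(word_hists, dtype=float)
    n_images = word_hists.shape[0]
    df = (word_hists > 0).sum(axis=0)             # images containing each word
    return np.log((n_images + 1.0) / (df + 1.0))  # smoothed idf

def histogram_match_score(query_hist, db_hist, idf):
    """tf-idf weighted histogram intersection between query and database histograms."""
    q = np.asarray(query_hist, dtype=float) * idf
    d = np.asarray(db_hist, dtype=float) * idf
    return float(np.minimum(q, d).sum())
```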
Listen up, eye movements play a role in verbal memory retrieval.
Scholz, Agnes; Mehlhorn, Katja; Krems, Josef F
2016-01-01
People fixate on blank spaces if visual stimuli previously occupied these regions of space. This so-called "looking at nothing" (LAN) phenomenon is said to be a part of information retrieval from internal memory representations, but the exact nature of the relationship between LAN and memory retrieval is unclear. While evidence exists for an influence of LAN on memory retrieval for visuospatial stimuli, evidence for verbal information is mixed. Here, we tested the relationship between LAN behavior and memory retrieval in an episodic retrieval task where verbal information was presented auditorily during encoding. When participants were allowed to gaze freely during subsequent memory retrieval, LAN occurred, and it was stronger for correct than for incorrect responses. When eye movements were manipulated during memory retrieval, retrieval performance was higher when participants fixated on the area associated with to-be-retrieved information than when fixating on another area. Our results provide evidence for a functional relationship between LAN and memory retrieval that extends to verbal information.
RAVEL: retrieval and visualization in ELectronic health records.
Thiessard, Frantz; Mougin, Fleur; Diallo, Gayo; Jouhet, Vianney; Cossin, Sébastien; Garcelon, Nicolas; Campillo, Boris; Jouini, Wassim; Grosjean, Julien; Massari, Philippe; Griffon, Nicolas; Dupuch, Marie; Tayalati, Fayssal; Dugas, Edwige; Balvet, Antonio; Grabar, Natalia; Pereira, Suzanne; Frandji, Bruno; Darmoni, Stefan; Cuggia, Marc
2012-01-01
Because of the ever-increasing amount of information in patients' EHRs, healthcare professionals may face difficulties in making diagnoses and/or therapeutic decisions. Moreover, patients may misunderstand their health status. These medical practitioners need effective tools to locate, in real time, relevant elements within the patient's EHR and to visualize them according to synthetic and intuitive presentation models. The RAVEL project aims at achieving this goal by performing a high-profile industrial research and development program on the EHR covering the following areas: (i) semantic indexing, (ii) information retrieval, and (iii) data visualization. The RAVEL project is expected to implement a generic prototype, loosely coupled to data sources, so that it can be transposed into different university hospital information systems.
Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus
2016-06-01
Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) Focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is in part because of protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the processes of testing memory because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: Focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree
ERIC Educational Resources Information Center
Chen, Wei-Bang
2012-01-01
The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…
Beyond Information Retrieval: Ways To Provide Content in Context.
ERIC Educational Resources Information Center
Wiley, Deborah Lynne
1998-01-01
Provides an overview of information retrieval from mainframe systems to Web search engines; discusses collaborative filtering, data extraction, data visualization, agent technology, pattern recognition, classification and clustering, and virtual communities. Argues that rather than huge data-storage centers and proprietary software, we need…
Multimedia Information Retrieval Literature Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.; Bohn, Shawn J.; Payne, Deborah A.
This survey paper highlights some of the recent, influential work in multimedia information retrieval (MIR). MIR is a branch area of multimedia (MM). The young and fast-growing area has received strong industrial and academic support in the United States and around the world (see Section 7 for a list of major conferences and journals of the community). The term "information retrieval" may be misleading to those with different computer science or information technology backgrounds. As shown in our discussion later, it indeed includes topics from user interaction, data analytics, machine learning, feature extraction, information visualization, and more.
Reduced effects of pictorial distinctiveness on false memory following dynamic visual noise.
Parker, Andrew; Kember, Timothy; Dagnall, Neil
2017-07-01
High levels of false recognition for non-presented items typically occur following exposure to lists of associated words. These false recognition effects can be reduced by making the studied items more distinctive through the presentation of pictures during encoding. One explanation of this is that during recognition, participants expect or attempt to retrieve distinctive pictorial information in order to evaluate the study status of the test item. If this involves the retrieval and use of visual imagery, then interfering with imagery processing should reduce the effectiveness of pictorial information in false memory reduction. In the current experiment, visual-imagery processing was disrupted at retrieval by the use of dynamic visual noise (DVN). It was found that the effects of DVN dissociated true from false memory. Memory for studied words was not influenced by the presence of an interfering noise field. However, false memory was increased and the effects of picture-induced distinctiveness were eliminated. DVN also increased false recollection and remember responses to unstudied items.
Learning semantic and visual similarity for endomicroscopy video retrieval.
Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2012-06-01
Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method statistically improves the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of visual signatures and significantly better than those of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with the low-level visual signatures and much shorter than them. In our resulting retrieval system, we decide to use visual signatures for perceived similarity learning and retrieval, and semantic signatures for the output of additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.
Fortier-Gauthier, Ulysse; Moffat, Nicolas; Dell'Acqua, Roberto; McDonald, John J; Jolicœur, Pierre
2012-07-01
We studied brain activity during retention and retrieval phases of two visual short-term memory (VSTM) experiments. Experiment 1 used a balanced memory array, with one color stimulus in each hemifield, followed by a retention interval and a central probe at the fixation point that designated the target stimulus in memory about which an orientation judgment was to be made. Retrieval of information from VSTM was associated with an event-related lateralization (ERL) with a contralateral negativity relative to the visual field from which the probed stimulus was originally encoded, suggesting a lateralized organization of VSTM. The scalp distribution of the retrieval ERL was more anterior than what is usually associated with simple maintenance activity, which is consistent with the involvement of different brain structures for these distinct visual memory mechanisms. Experiment 2 was like Experiment 1, but used an unbalanced memory array consisting of one lateral color stimulus in a hemifield and one color stimulus on the vertical mid-line. This design enabled us to separate lateralized activity related to target retrieval from distractor processing. Target retrieval was found to generate a negative-going ERL at the electrode sites found in Experiment 1, suggesting that representations were retrieved from anterior cortical structures. Distractor processing elicited a positive-going ERL at posterior electrode sites, which could be indicative of a return to baseline of retention activity for the discarded memory of the now-irrelevant stimulus, or an active inhibition mechanism mediating distractor suppression. Copyright © 2012 Elsevier Ltd. All rights reserved.
Does scene context always facilitate retrieval of visual object representations?
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2011-04-01
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether the scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many simultaneous objects appear, scene context facilitation of the retrieval of object representations-henceforth termed object-to-scene binding-occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).
Deployment of spatial attention towards locations in memory representations. An EEG study.
Leszczyński, Marcin; Wykowska, Agnieszka; Perez-Osorio, Jairo; Müller, Hermann J
2013-01-01
Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially-organized memory representation. However, an open question concerns whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal which was spatially uninformative and featurally unrelated to the search target and informed participants only about a response key that they had to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation, which occurred despite the fact that the present task required no spatial (or featural) information from the search to be encoded, maintained, and retrieved to produce the correct response and that the go-signal did not itself specify any information relating to the location and defining feature of the target.
Reppa, I; Williams, K E; Worth, E R; Greville, W J; Saunders, J
2017-11-01
Retrieval of target information can cause forgetting for related, but non-retrieved, information - retrieval-induced forgetting (RIF). The aim of the current studies was to examine a key prediction of the inhibitory account of RIF - interference dependence - whereby 'strong' non-retrieved items are more likely to interfere during retrieval and are therefore more susceptible to RIF. Using visual objects allowed us to examine and contrast one index of item strength - object typicality, that is, how typical of its category an object is. Experiment 1 provided proof of concept for our variant of the recognition practice paradigm. Experiment 2 tested the prediction of the inhibitory account that the magnitude of RIF for natural visual objects would be dependent on item strength. Non-typical objects were more memorable overall than typical objects. We found that object memorability (as determined by typicality) influenced RIF, with significant forgetting occurring for the memorable (non-typical), but not the non-memorable (typical), objects. The current findings strongly support an inhibitory account of retrieval-induced forgetting. Copyright © 2017 Elsevier B.V. All rights reserved.
Chen, Yang; Ren, Xiaofeng; Zhang, Guo-Qiang; Xu, Rong
2013-01-01
Visual information is a crucial aspect of medical knowledge. Building a comprehensive medical image base, in the spirit of the Unified Medical Language System (UMLS), would greatly benefit patient education and self-care. However, collection and annotation of such a large-scale image base is challenging. Our objective was to combine visual object detection techniques with a medical ontology to automatically mine web photos and retrieve a large number of disease manifestation images with minimal manual labeling effort. As a proof of concept, we first learnt five organ detectors on three detection scales for eyes, ears, lips, hands, and feet. Given a disease, we used information from the UMLS to select affected body parts, ran the pretrained organ detectors on web images, and combined the detection outputs to retrieve disease images. Compared with a supervised image retrieval approach that requires training images for every disease, our ontology-guided approach exploits shared visual information of body parts across diseases. In retrieving 2220 web images of 32 diseases, we reduced manual labeling effort to 15.6% while improving the average precision by 3.9%, from 77.7% to 81.6%. For 40.6% of the diseases, we improved the precision by 10%. The results confirm the concept that the web is a feasible source for automatic disease image retrieval for health image database construction. Our approach requires a small amount of manual effort to collect complex disease images and to annotate them with standard medical ontology terms.
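The ontology-guided combination step described above lends itself to a very small sketch. The disease-to-body-part mapping and the detector scores below are invented placeholders (the real system derives the mapping from the UMLS and runs five trained organ detectors); the point is only to show how ontology-selected body parts gate which detector outputs are used to rank candidate web images.

```python
# Hedged sketch of ontology-guided disease image retrieval. The mapping and
# scores are illustrative assumptions, not data or code from the study above.
DISEASE_TO_PARTS = {
    "conjunctivitis": ["eye"],
    "hand_foot_mouth_disease": ["hand", "foot", "lip"],
}  # stand-in for body parts selected from the UMLS for each disease

def rank_web_images(disease, detector_scores_by_image):
    """Rank candidate images by the best detector score for any affected part.

    `detector_scores_by_image` maps an image id to precomputed confidences of
    pretrained organ detectors, e.g. {"eye": 0.91, "hand": 0.05}.
    """
    parts = DISEASE_TO_PARTS.get(disease, [])
    return sorted(
        ((max((scores.get(p, 0.0) for p in parts), default=0.0), img)
         for img, scores in detector_scores_by_image.items()),
        reverse=True)

if __name__ == "__main__":
    candidates = {
        "img_001": {"eye": 0.91, "hand": 0.05},
        "img_002": {"eye": 0.12, "lip": 0.40},
    }
    print(rank_web_images("conjunctivitis", candidates))
```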
The Effects of Emotional Visual Context on the Encoding and Retrieval of Body Odor Information.
Parma, Valentina; Macedo, Stephanie; Rocha, Marta; Alho, Laura; Ferreira, Jacqueline; Soares, Sandra C
2018-04-01
Conditions during information encoding and retrieval are known to influence the sensory material stored and its recapitulation. However, little is known about such processes in olfaction. Here, we capitalized on the uniqueness of body odors (BOs) which, similar to fingerprints, allow for the identification of a specific person, by associating their presentation with a negative or a neutral emotional context. One hundred twenty-five receivers (68 F) were exposed to a male BO while watching either criminal or neutral videos (encoding phase) and were subsequently asked to recognize the target BO within either a congruent or an incongruent visual context (retrieval phase). The results showed that criminal videos were rated as more vivid, unpleasant, and arousing than neutral videos both at encoding and retrieval. Moreover, in terms of BO ratings, we found that odor intensity and arousal make it possible to distinguish the target from the foils when congruent criminal information is presented at encoding and retrieval. Finally, the accuracy performance was not significantly different from chance level for either condition. These findings provide insights into how olfactory memories are processed in emotional situations.
An fMRI Study of Episodic Memory: Retrieval of Object, Spatial, and Temporal Information
Hayes, Scott M.; Ryan, Lee; Schnyer, David M.; Nadel, Lynn
2011-01-01
Sixteen participants viewed a videotaped tour of 4 houses, highlighting a series of objects and their spatial locations. Participants were tested for memory of object, spatial, and temporal order information while undergoing functional Magnetic Resonance Imaging. Preferential activation was observed in right parahippocampal gyrus during the retrieval of spatial location information. Retrieval of contextual information (spatial location and temporal order) was associated with activation in right dorsolateral prefrontal cortex. In bilateral posterior parietal regions, greater activation was associated with processing of visual scenes, regardless of the memory judgment. These findings support current theories positing roles for frontal and medial temporal regions during episodic retrieval and suggest a specific role for the hippocampal complex in the retrieval of spatial location information. PMID:15506871
Millman, Zachary B; Goss, James; Schiffman, Jason; Mejias, Johana; Gupta, Tina; Mittal, Vijay A
2014-09-01
Gesture is integrally linked with language and cognitive systems, and recent years have seen growing attention to these movements in patients with schizophrenia. To date, however, there have been no investigations of gesture in youth at ultra high risk (UHR) for psychosis. Examining gesture in UHR individuals may help to elucidate other widely recognized communicative and cognitive deficits in this population and yield new clues for treatment development. In this study, mismatch (indicating semantic incongruency between the content of speech and a given gesture) and retrieval (used during pauses in speech while a person appears to be searching for a word or idea) gestures were evaluated in 42 UHR individuals and 36 matched healthy controls. Cognitive functions relevant to gesture production (i.e., speed of visual information processing and verbal production) as well as positive and negative symptomatologies were assessed. Although the overall frequency of cases exhibiting these behaviors was low, UHR individuals produced substantially more mismatch and retrieval gestures than controls. The UHR group also exhibited significantly poorer verbal production performance when compared with controls. In the patient group, mismatch gestures were associated with poorer visual processing speed and elevated negative symptoms, while retrieval gestures were associated with higher speed of visual information processing and verbal production, but not symptoms. Taken together, these findings indicate that gesture abnormalities are present in individuals at high risk for psychosis. While mismatch gestures may be closely related to disease processes, retrieval gestures may be employed as a compensatory mechanism. Copyright © 2014 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Fortier-Gauthier, Ulysse; Moffat, Nicolas; Dell'Acqua, Robert; McDonald, John J.; Jolicoeur, Pierre
2012-01-01
We studied brain activity during retention and retrieval phases of two visual short-term memory (VSTM) experiments. Experiment 1 used a balanced memory array, with one color stimulus in each hemifield, followed by a retention interval and a central probe, at the fixation point that designated the target stimulus in memory about which to make a…
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Hall, Philip P.
1985-01-01
This Working Paper Series entry represents a collection of presentation visuals associated with the companion report entitled, The Design of PC/MISI, a PC-Based Common User Interface to Remote Information Storage and Retrieval Systems, USL/DBMS NASA/RECON Working Paper Series report number DBMS.NASA/RECON-15. The paper discusses the following: problem definition; the PC solution; the goals of system design; the design description; future considerations; the research environment; and conclusions.
Azulay, Haim; Striem, Ella; Amedi, Amir
2009-05-01
People tend to close their eyes when trying to retrieve an event or a visual image from memory. However, the brain mechanisms behind this phenomenon remain poorly understood. Recently, we showed that during visual mental imagery, auditory areas show a much more robust deactivation than during visual perception. Here we ask whether this is a special case of a more general phenomenon involving retrieval of intrinsic, internally stored information, which would result in crossmodal deactivations in other sensory cortices that are irrelevant to the task at hand. To test this hypothesis, a group of 9 sighted individuals was scanned while performing a memory retrieval task for highly abstract words (i.e., with low imaginability scores). We also scanned a group of 10 congenitally blind individuals, who by definition do not have any visual imagery per se. In sighted subjects, both auditory and visual areas were robustly deactivated during memory retrieval, whereas in the blind the auditory cortex was deactivated while visual areas, shown previously to be relevant for this task, presented a positive BOLD signal. These results suggest that deactivation may be most prominent in task-irrelevant sensory cortices whenever there is a need for retrieval or manipulation of internally stored representations. Thus, there is a task-dependent balance of activation and deactivation that might allow maximization of resources and filtering out of non-relevant information to enable allocation of attention to the required task. Furthermore, these results suggest that the balance between positive and negative BOLD might be crucial to our understanding of a large variety of intrinsic and extrinsic tasks, including high-level cognitive functions, sensory processing and multisensory integration.
Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A
2012-09-01
Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.
Xu, Yingying; Lin, Lanfen; Hu, Hongjie; Wang, Dan; Zhu, Wenchao; Wang, Jian; Han, Xian-Hua; Chen, Yen-Wei
2018-01-01
The bag of visual words (BoVW) model is a powerful tool for feature representation that can integrate various handcrafted features like intensity, texture, and spatial information. In this paper, we propose a novel BoVW-based method that incorporates texture and spatial information for the content-based image retrieval to assist radiologists in clinical diagnosis. This paper presents a texture-specific BoVW method to represent focal liver lesions (FLLs). Pixels in the region of interest (ROI) are classified into nine texture categories using the rotation-invariant uniform local binary pattern method. The BoVW-based features are calculated for each texture category. In addition, a spatial cone matching (SCM)-based representation strategy is proposed to describe the spatial information of the visual words in the ROI. In a pilot study, eight radiologists with different clinical experience performed diagnoses for 20 cases with and without the top six retrieved results. A total of 132 multiphase computed tomography volumes including five pathological types were collected. The texture-specific BoVW was compared to other BoVW-based methods using the constructed dataset of FLLs. The results show that our proposed model outperforms the other three BoVW methods in discriminating different lesions. The SCM method, which adds spatial information to the orderless BoVW model, impacted the retrieval performance. In the pilot trial, the average diagnosis accuracy of the radiologists was improved from 66 to 80% using the retrieval system. The preliminary results indicate that the texture-specific features and the SCM-based BoVW features can effectively characterize various liver lesions. The retrieval system has the potential to improve the diagnostic accuracy and the confidence of the radiologists.
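As a rough, hedged sketch of the texture-specific bag-of-visual-words idea summarised above (not the authors' implementation: patch size, codebook size, and the use of raw gray-level patches as local features are assumptions), each ROI pixel is assigned a rotation-invariant uniform LBP texture category, and a separate visual-word histogram is accumulated per category before concatenation.

```python
# Hedged sketch of a texture-specific bag-of-visual-words representation.
# Patch size, codebook size, and the choice of raw patches as local features
# are illustrative assumptions; the original method's details may differ.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans

def texture_categories(roi, P=8, R=1):
    """Label each pixel with a rotation-invariant uniform LBP code."""
    return local_binary_pattern(roi, P, R, method="uniform").astype(int)

def texture_specific_bovw(roi, codebooks, patch=5):
    """Build one visual-word histogram per texture category and concatenate.

    `codebooks` maps a texture category to a fitted KMeans codebook over
    local patch descriptors (assumed to have been trained beforehand).
    """
    labels = texture_categories(roi)
    half = patch // 2
    hists = {c: np.zeros(km.n_clusters) for c, km in codebooks.items()}
    for y in range(half, roi.shape[0] - half):
        for x in range(half, roi.shape[1] - half):
            c = labels[y, x]
            if c not in codebooks:
                continue
            desc = roi[y - half:y + half + 1, x - half:x + half + 1].ravel()
            word = codebooks[c].predict(desc[None, :])[0]
            hists[c][word] += 1
    return np.concatenate([hists[c] for c in sorted(hists)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    roi = rng.integers(0, 256, (32, 32), dtype=np.uint8)
    # Train tiny per-category codebooks on random patches (illustration only).
    patches = np.array([roi[y:y + 5, x:x + 5].ravel()
                        for y in range(0, 27) for x in range(0, 27)])
    codebooks = {c: KMeans(n_clusters=4, n_init=10).fit(patches) for c in range(10)}
    print(texture_specific_bovw(roi, codebooks).shape)
```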
Semantic-based surveillance video retrieval.
Hu, Weiming; Xie, Dan; Fu, Zhouyu; Zeng, Wenrong; Maybank, Steve
2007-04-01
Visual surveillance produces large amounts of video data. Effective indexing and retrieval from surveillance video databases are very important. Although there are many ways to represent the content of video clips in current video retrieval algorithms, there still exists a semantic gap between users and retrieval systems. Visual surveillance systems supply a platform for investigating semantic-based video retrieval. In this paper, a semantic-based video retrieval framework for visual surveillance is proposed. A cluster-based tracking algorithm is developed to acquire motion trajectories. The trajectories are then clustered hierarchically using the spatial and temporal information, to learn activity models. A hierarchical structure of semantic indexing and retrieval of object activities, where each individual activity automatically inherits all the semantic descriptions of the activity model to which it belongs, is proposed for accessing video clips and individual objects at the semantic level. The proposed retrieval framework supports various queries including queries by keywords, multiple object queries, and queries by sketch. For multiple object queries, succession and simultaneity restrictions, together with depth and breadth first orders, are considered. For sketch-based queries, a method for matching trajectories drawn by users to spatial trajectories is proposed. The effectiveness and efficiency of our framework are tested in a crowded traffic scene.
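The trajectory-clustering stage can be illustrated with a small, hedged sketch (the resampling step, the mean point-wise Euclidean distance, and the clustering cut-off are assumptions for illustration rather than the paper's exact spatial and temporal measure): trajectories are resampled to a common length, pairwise distances are computed, and hierarchical clustering groups them into candidate activity models.

```python
# Hedged sketch of grouping motion trajectories by hierarchical clustering.
# The distance measure and cut-off are illustrative assumptions only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def resample(traj, n=20):
    """Resample a trajectory (list of (x, y)) to n evenly spaced points."""
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0, 1, len(traj))
    t_new = np.linspace(0, 1, n)
    return np.column_stack([np.interp(t_new, t_old, traj[:, d]) for d in range(2)])

def trajectory_distance(a, b):
    """Mean point-wise Euclidean distance after resampling."""
    return np.linalg.norm(resample(a) - resample(b), axis=1).mean()

def cluster_trajectories(trajs, threshold=2.0):
    """Return a cluster label per trajectory (candidate activity models)."""
    n = len(trajs)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = trajectory_distance(trajs[i], trajs[j])
    return fcluster(linkage(squareform(d), method="average"),
                    t=threshold, criterion="distance")

if __name__ == "__main__":
    straight = [[(x, 0) for x in range(10)], [(x, 1) for x in range(10)]]
    turning = [[(x, x) for x in range(10)]]
    print(cluster_trajectories(straight + turning))
```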
Visual Complexity and Pictorial Memory: A Fifteen Year Research Perspective.
ERIC Educational Resources Information Center
Berry, Louis H.
For 15 years an ongoing research project at the University of Pittsburgh has focused on the effects of variations in visual complexity and color on the storage and retrieval of visual information by learners. Research has shown that visual materials facilitate instruction, but has not fully delineated the interactions of visual complexity and…
Fougnie, Daryl; Marois, René
2009-01-01
The concurrent maintenance of two visual working memory (VWM) arrays can lead to profound interference. It is unclear, however, whether these costs arise from limitations in VWM storage capacity (Fougnie & Marois, 2006), or from interference between the storage of one visual array and encoding or retrieval of another visual array (Cowan & Morey, 2007). Here, we show that encoding a VWM array does not interfere with maintenance of another VWM array unless the two displays exceed maintenance capacity (Experiments 1 and 2). Moreover, manipulating the extent to which encoding and maintenance can interfere with one another had no discernable effect on dual-task performance (Experiment 2). Finally, maintenance of a VWM array was not affected by retrieval of information from another VWM array (Experiment 3). Taken together, these findings demonstrate that dual-task interference between two concurrent VWM tasks is due to a capacity-limited store that is independent from encoding and retrieval processes. PMID:19933566
Does constraining memory maintenance reduce visual search efficiency?
Buttaccio, Daniel R; Lange, Nicholas D; Thomas, Rick P; Dougherty, Michael R
2018-03-01
We examine whether constraining memory retrieval processes affects performance in a cued recall visual search task. In the visual search task, participants are first presented with a memory prompt followed by a search array. The memory prompt provides diagnostic information regarding a critical aspect of the target (its colour). We assume that upon the presentation of the memory prompt, participants retrieve and maintain hypotheses (i.e., potential target characteristics) in working memory in order to improve their search efficiency. By constraining retrieval through the manipulation of time pressure (Experiments 1A and 1B) or a concurrent working memory task (Experiments 2A, 2B, and 2C), we directly test the involvement of working memory in visual search. We find some evidence that visual search is less efficient under conditions in which participants were likely to be maintaining fewer hypotheses in working memory (Experiments 1A, 2A, and 2C), suggesting that the retrieval of representations from long-term memory into working memory can improve visual search. However, these results should be interpreted with caution, as the data from two experiments (Experiments 1B and 2B) did not lend support for this conclusion.
Coordinating Council. Tenth Meeting: Information retrieval: The role of controlled vocabularies
NASA Technical Reports Server (NTRS)
1993-01-01
The theme of this NASA Scientific and Technical Information Program Coordinating Council meeting was the role of controlled vocabularies (thesauri) in information retrieval. Included are summaries of the presentations and the accompanying visuals. Dr. Raya Fidel addressed 'Retrieval: Free Text, Full Text, and Controlled Vocabularies.' Dr. Bella Hass Weinberg spoke on 'Controlled Vocabularies and Thesaurus Standards.' The presentations were followed by a panel discussion with participation from NASA, the National Library of Medicine, the Defense Technical Information Center, and the Department of Energy; this discussion, however, is not summarized in any detail in this document.
Occam's razor: supporting visual query expression for content-based image queries
NASA Astrophysics Data System (ADS)
Venters, Colin C.; Hartley, Richard J.; Hewitt, William T.
2005-01-01
This paper reports the results of a usability experiment that investigated visual query formulation on three dimensions: effectiveness, efficiency, and user satisfaction. Twenty-eight evaluation sessions were conducted in order to assess the extent to which query by visual example supports visual query formulation in a content-based image retrieval environment. In order to provide a context and focus for the investigation, the study was segmented by image type, user group, and use function. The image type consisted of a set of abstract geometric device marks supplied by the UK Trademark Registry. Users were selected from the 14 UK Patent Information Network offices. The use function was limited to the retrieval of images by shape similarity. Two client interfaces were developed for comparison purposes: Trademark Image Browser Engine (TRIBE) and Shape Query Image Retrieval Systems Engine (SQUIRE).
Occam"s razor: supporting visual query expression for content-based image queries
NASA Astrophysics Data System (ADS)
Venters, Colin C.; Hartley, Richard J.; Hewitt, William T.
2004-12-01
This paper reports the results of a usability experiment that investigated visual query formulation on three dimensions: effectiveness, efficiency, and user satisfaction. Twenty-eight evaluation sessions were conducted in order to assess the extent to which query by visual example supports visual query formulation in a content-based image retrieval environment. In order to provide a context and focus for the investigation, the study was segmented by image type, user group, and use function. The image type consisted of a set of abstract geometric device marks supplied by the UK Trademark Registry. Users were selected from the 14 UK Patent Information Network offices. The use function was limited to the retrieval of images by shape similarity. Two client interfaces were developed for comparison purposes: Trademark Image Browser Engine (TRIBE) and Shape Query Image Retrieval Systems Engine (SQUIRE).
Creating a classification of image types in the medical literature for visual categorization
NASA Astrophysics Data System (ADS)
Müller, Henning; Kalpathy-Cramer, Jayashree; Demner-Fushman, Dina; Antani, Sameer
2012-02-01
Content-based image retrieval (CBIR) from specialized collections has often been proposed for use in such areas as diagnostic aid, clinical decision support, and teaching. The visual retrieval from broad image collections such as teaching files, the medical literature or web images, by contrast, has not yet reached a high maturity level compared to textual information retrieval. Visual image classification into a relatively small number of classes (20-100), on the other hand, has been shown to deliver good results in several benchmarks. It is, however, currently underused as a basic technology for retrieval tasks, for example, to limit the search space. Most classification schemes for medical images are focused on specific areas and consider mainly the medical image types (modalities), imaged anatomy, and view, and merge them into a single descriptor or classification hierarchy. Furthermore, they often ignore other important image types such as biological images, statistical figures, flowcharts, and diagrams that frequently occur in the biomedical literature. Most of the current classifications have also been created for radiology images, which are not the only types to be taken into account. With Open Access becoming increasingly widespread particularly in medicine, images from the biomedical literature are more easily available for use. Visual information from these images and knowledge that an image is of a specific type or medical modality could enrich retrieval. This enrichment is hampered by the lack of a commonly agreed image classification scheme. This paper presents a hierarchy for classification of biomedical illustrations with the goal of using it for visual classification and thus as a basis for retrieval. The proposed hierarchy is based on relevant parts of existing terminologies, such as the IRMA code (Image Retrieval in Medical Applications), ad hoc classifications and hierarchies used in ImageCLEF (the image retrieval task at the Cross-Language Evaluation Forum) and NLM's (National Library of Medicine) OpenI. In addition, mappings to NLM's MeSH (Medical Subject Headings), RSNA's RadLex (Radiological Society of North America, Radiology Lexicon), and the IRMA code are also attempted for relevant image types. Advantages derived from such hierarchical classification for medical image retrieval are being evaluated through benchmarks such as ImageCLEF, and R&D systems such as NLM's OpenI. The goal is to extend this hierarchy progressively (through adding image types occurring in the biomedical literature) in order to have a terminology for visual image classification based on image types distinguishable by visual means and occurring in the medical open access literature.
Multimodal Feature Integration in the Angular Gyrus during Episodic and Semantic Retrieval
Bonnici, Heidi M.; Richter, Franziska R.; Yazar, Yasemin
2016-01-01
Much evidence from distinct lines of investigation indicates the involvement of angular gyrus (AnG) in the retrieval of both episodic and semantic information, but the region's precise function and whether that function differs across episodic and semantic retrieval have yet to be determined. We used univariate and multivariate fMRI analysis methods to examine the role of AnG in multimodal feature integration during episodic and semantic retrieval. Human participants completed episodic and semantic memory tasks involving unimodal (auditory or visual) and multimodal (audio-visual) stimuli. Univariate analyses revealed the recruitment of functionally distinct AnG subregions during the retrieval of episodic and semantic information. Consistent with a role in multimodal feature integration during episodic retrieval, significantly greater AnG activity was observed during retrieval of integrated multimodal episodic memories compared with unimodal episodic memories. Multivariate classification analyses revealed that individual multimodal episodic memories could be differentiated in AnG, with classification accuracy tracking the vividness of participants' reported recollections, whereas distinct unimodal memories were represented in sensory association areas only. In contrast to episodic retrieval, AnG was engaged to a statistically equivalent degree during retrieval of unimodal and multimodal semantic memories, suggesting a distinct role for AnG during semantic retrieval. Modality-specific sensory association areas exhibited corresponding activity during both episodic and semantic retrieval, which mirrored the functional specialization of these regions during perception. The results offer new insights into the integrative processes subserved by AnG and its contribution to our subjective experience of remembering. SIGNIFICANCE STATEMENT Using univariate and multivariate fMRI analyses, we provide evidence that functionally distinct subregions of angular gyrus (AnG) contribute to the retrieval of episodic and semantic memories. Our multivariate pattern classifier could distinguish episodic memory representations in AnG according to whether they were multimodal (audio-visual) or unimodal (auditory or visual) in nature, whereas statistically equivalent AnG activity was observed during retrieval of unimodal and multimodal semantic memories. Classification accuracy during episodic retrieval scaled with the trial-by-trial vividness with which participants experienced their recollections. Therefore, the findings offer new insights into the integrative processes subserved by AnG and how its function may contribute to our subjective experience of remembering. PMID:27194327
Multimodal Feature Integration in the Angular Gyrus during Episodic and Semantic Retrieval.
Bonnici, Heidi M; Richter, Franziska R; Yazar, Yasemin; Simons, Jon S
2016-05-18
Much evidence from distinct lines of investigation indicates the involvement of angular gyrus (AnG) in the retrieval of both episodic and semantic information, but the region's precise function and whether that function differs across episodic and semantic retrieval have yet to be determined. We used univariate and multivariate fMRI analysis methods to examine the role of AnG in multimodal feature integration during episodic and semantic retrieval. Human participants completed episodic and semantic memory tasks involving unimodal (auditory or visual) and multimodal (audio-visual) stimuli. Univariate analyses revealed the recruitment of functionally distinct AnG subregions during the retrieval of episodic and semantic information. Consistent with a role in multimodal feature integration during episodic retrieval, significantly greater AnG activity was observed during retrieval of integrated multimodal episodic memories compared with unimodal episodic memories. Multivariate classification analyses revealed that individual multimodal episodic memories could be differentiated in AnG, with classification accuracy tracking the vividness of participants' reported recollections, whereas distinct unimodal memories were represented in sensory association areas only. In contrast to episodic retrieval, AnG was engaged to a statistically equivalent degree during retrieval of unimodal and multimodal semantic memories, suggesting a distinct role for AnG during semantic retrieval. Modality-specific sensory association areas exhibited corresponding activity during both episodic and semantic retrieval, which mirrored the functional specialization of these regions during perception. The results offer new insights into the integrative processes subserved by AnG and its contribution to our subjective experience of remembering. Using univariate and multivariate fMRI analyses, we provide evidence that functionally distinct subregions of angular gyrus (AnG) contribute to the retrieval of episodic and semantic memories. Our multivariate pattern classifier could distinguish episodic memory representations in AnG according to whether they were multimodal (audio-visual) or unimodal (auditory or visual) in nature, whereas statistically equivalent AnG activity was observed during retrieval of unimodal and multimodal semantic memories. Classification accuracy during episodic retrieval scaled with the trial-by-trial vividness with which participants experienced their recollections. Therefore, the findings offer new insights into the integrative processes subserved by AnG and how its function may contribute to our subjective experience of remembering. Copyright © 2016 Bonnici, Richter, et al.
Data discretization for novel resource discovery in large medical data sets.
Benoît, G.; Andrews, J. E.
2000-01-01
This paper is motivated by the problems of dealing with large data sets in information retrieval. The authors suggest an information retrieval framework based on mathematical principles to organize and permit end-user manipulation of a retrieval set. By adjusting through the interface the weights and types of relationships between query and set members, it is possible to expose unanticipated, novel relationships between the query/document pair. The retrieval set as a whole is parsed into discrete concept-oriented subsets (based on within-set similarity measures) and displayed on screen as interactive "graphic nodes" in an information space, distributed at first based on the vector model (similarity measure of set to query). The result is a visualized map wherein it is possible to identify main concept regions and multiple sub-regions as dimensions of the same data. Users may examine the membership within sub-regions. Based on this framework, a data visualization user interface was designed to encourage users to work with the data on multiple levels to find novel relationships between the query and retrieval set members. Space constraints prohibit addressing all aspects of this project. PMID:11079845
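A minimal, hedged sketch of the general pipeline described above (the use of TF-IDF vectors, k-means for the concept-oriented subsets, and PCA for screen coordinates are assumptions, not the authors' framework): set members are scored against the query with a vector-model similarity, grouped into within-set subsets, and given 2D positions for display as graphic nodes.

```python
# Hedged sketch of clustering a retrieval set into concept-oriented subsets
# and projecting it for display. TF-IDF, k-means, and PCA are assumptions
# for illustration; the original framework may use other measures.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

docs = [
    "retrieval of medical images by texture features",
    "visual search and working memory",
    "texture descriptors for liver lesion retrieval",
    "spatial attention during memory retrieval",
]
query = ["image retrieval with texture features"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs + query)
doc_vecs, query_vec = X[:-1], X[-1]

# Vector-model similarity of each retrieved document to the query.
sims = cosine_similarity(doc_vecs, query_vec).ravel()

# Concept-oriented subsets from within-set similarity (here: k-means).
subsets = KMeans(n_clusters=2, n_init=10).fit_predict(doc_vecs)

# 2D coordinates for an interactive "graphic node" display.
coords = PCA(n_components=2).fit_transform(doc_vecs.toarray())

for d, s, c, xy in zip(docs, sims, subsets, coords):
    print(f"{s:.2f}  subset={c}  pos=({xy[0]:+.2f}, {xy[1]:+.2f})  {d}")
```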
Cortical reinstatement and the confidence and accuracy of source memory.
Thakral, Preston P; Wang, Tracy H; Rugg, Michael D
2015-04-01
Cortical reinstatement refers to the overlap between neural activity elicited during the encoding and the subsequent retrieval of an episode, and is held to reflect retrieved mnemonic content. Previous findings have demonstrated that reinstatement effects reflect the quality of retrieved episodic information as this is operationalized by the accuracy of source memory judgments. The present functional magnetic resonance imaging (fMRI) study investigated whether reinstatement-related activity also co-varies with the confidence of accurate source judgments. Participants studied pictures of objects along with their visual or spoken names. At test, they first discriminated between studied and unstudied pictures and then, for each picture judged as studied, they also judged whether it had been paired with a visual or auditory name, using a three-point confidence scale. Accuracy of source memory judgments, and hence the quality of the source-specifying information, was greater for high than for low confidence judgments. Modality-selective retrieval-related activity (reinstatement effects) also co-varied with the confidence of the corresponding source memory judgment. The findings indicate that the quality of the information supporting accurate judgments of source memory is indexed by the relative magnitude of content-selective, retrieval-related neural activity. Copyright © 2015 Elsevier Inc. All rights reserved.
Early Visual Word Processing Is Flexible: Evidence from Spatiotemporal Brain Dynamics.
Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf
2015-09-01
Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions, in the vicinity of the putative visual word form area, around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.
[Clinical Neuropsychology of Dementia with Lewy Bodies].
Nagahama, Yasuhiro
2016-02-01
Dementia with Lewy bodies (DLB) shows lesser memory impairment and more severe visuospatial disability than Alzheimer disease (AD). Although deficits in both consolidation and retrieval underlie the memory impairment, the retrieval deficit is predominant in DLB. Visuospatial dysfunctions in DLB are related to impairments in both the ventral and dorsal streams of higher visual information processing, and lower visual processing in V1/V2 may also be impaired. Attention and executive functions are more widely disturbed in DLB than in AD. Imitation of finger gestures is impaired more frequently in DLB than in other mild dementias, and provides additional information for the diagnosis of mild dementia, especially DLB. Pareidolia, which lies between hallucination and visual misperception, is found frequently in DLB, but its mechanism is still under investigation.
ERIC Educational Resources Information Center
Song, Yaxiao
2010-01-01
Video surrogates can help people quickly make sense of the content of a video before downloading or seeking more detailed information. Visual and audio features of a video are primary information carriers and might become important components of video retrieval and video sense-making. In the past decades, most research and development efforts on…
Transformed Neural Pattern Reinstatement during Episodic Memory Retrieval.
Xiao, Xiaoqian; Dong, Qi; Gao, Jiahong; Men, Weiwei; Poldrack, Russell A; Xue, Gui
2017-03-15
Contemporary models of episodic memory posit that remembering involves the reenactment of encoding processes. Although encoding-retrieval similarity has been consistently reported and linked to memory success, the nature of neural pattern reinstatement is poorly understood. Using high-resolution fMRI on human subjects, we obtained clear evidence for item-specific pattern reinstatement in the frontoparietal cortex, even when the encoding-retrieval pairs shared no perceptual similarity. No item-specific pattern reinstatement was found in the ventral visual cortex. Importantly, the brain regions and voxels carrying item-specific representation differed significantly between encoding and retrieval, and the item specificity for encoding-retrieval similarity was smaller than that for encoding or retrieval, suggesting that the nature of the representations differs between encoding and retrieval. Moreover, cross-region representational similarity analysis suggests that the encoded representation in the ventral visual cortex was reinstated in the frontoparietal cortex during retrieval. Together, these results suggest that, in addition to reinstatement of the originally encoded pattern in the brain regions that perform encoding processes, retrieval may also involve the reinstatement of a transformed representation of the encoded information. These results emphasize the constructive nature of memory retrieval, which helps to serve important adaptive functions. SIGNIFICANCE STATEMENT Episodic memory enables humans to vividly reexperience past events, yet how this is achieved at the neural level is barely understood. A long-standing hypothesis posits that memory retrieval involves the faithful reinstatement of encoding-related activity. We tested this hypothesis by comparing the neural representations during encoding and retrieval. We found strong pattern reinstatement in the frontoparietal cortex, but not in the ventral visual cortex, which represents visual details. Critically, even within the same brain regions, the nature of representation during retrieval was qualitatively different from that during encoding. These results suggest that memory retrieval is not a faithful replay of past events but rather involves additional constructive processes to serve adaptive functions. Copyright © 2017 the authors 0270-6474/17/372986-13$15.00/0.
Hayes, Scott M; Nadel, Lynn; Ryan, Lee
2007-01-01
Previous research has investigated intentional retrieval of contextual information and contextual influences on object identification and word recognition, yet few studies have investigated context effects in episodic memory for objects. To address this issue, unique objects embedded in a visually rich scene or on a white background were presented to participants. At test, objects were presented either in the original scene or on a white background. A series of behavioral studies with young adults demonstrated a context shift decrement (CSD): decreased recognition performance when context is changed between encoding and retrieval. The CSD was not attenuated by encoding or retrieval manipulations, suggesting that binding of object and context may be automatic. A final experiment explored the neural correlates of the CSD, using functional Magnetic Resonance Imaging. Parahippocampal cortex (PHC) activation (right greater than left) during incidental encoding was associated with subsequent memory of objects in the context shift condition. Greater activity in right PHC was also observed during successful recognition of objects previously presented in a scene. Finally, a subset of regions activated during scene encoding, such as bilateral PHC, was reactivated when the object was presented on a white background at retrieval. Although participants were not required to intentionally retrieve contextual information, the results suggest that PHC may reinstate visual context to mediate successful episodic memory retrieval. The CSD is attributed to automatic and obligatory binding of object and context. The results suggest that PHC is important not only for processing of scene information, but also plays a role in successful episodic memory encoding and retrieval. These findings are consistent with the view that spatial information is stored in the hippocampal complex, one of the central tenets of Multiple Trace Theory. (c) 2007 Wiley-Liss, Inc.
Colouring the Gaps in Learning Design: Aesthetics and the Visual in Learning
ERIC Educational Resources Information Center
Carroll, Fiona; Kop, Rita
2016-01-01
The visual is a dominant mode of information retrieval and understanding; however, the focus on the visual dimension of Technology Enhanced Learning (TEL) is still quite weak relative to its predominant focus on usability. To accommodate the future needs of the visual learner, designers of e-learning environments should advance the current…
ERIC Educational Resources Information Center
Empfield, Chick O.; Moser, Gene W.
One of a series of investigations on the Project on an Information Memory Model, the purpose of this study was to determine the amount and kind of visual information processed and stored in the memory of children using different modalities of observation. Children, aged 5, 9 and 13 years, were randomly assigned to one of three treatment groups.…
Ajay, Dara; Gangwal, Rahul P; Sangamwar, Abhay T
2015-01-01
Intelligent Patent Analysis Tool (IPAT) is an online data retrieval tool based on a text-mining algorithm that extracts specific patent information in a predetermined pattern into an Excel sheet. The software is designed and developed to retrieve and analyze technology information from multiple patent documents and generate various patent landscape graphs and charts. The software is coded in C# in Visual Studio 2010; it extracts publicly available patent information from web pages such as Google Patents and simultaneously studies technology trends based on user-defined parameters. In other words, IPAT combined with manual categorization will act as an excellent technology assessment tool in competitive intelligence and due diligence for forecasting future R&D.
Speed of feedforward and recurrent processing in multilayer networks of integrate-and-fire neurons.
Panzeri, S; Rolls, E T; Battaglia, F; Lavis, R
2001-11-01
The speed of processing in the visual cortical areas can be fast, with, for example, the latency of neuronal responses increasing by only approximately 10 ms per area in the ventral visual system sequence V1 to V2 to V4 to inferior temporal visual cortex. This has led to the suggestion that rapid visual processing can only be based on the feedforward connections between cortical areas. To test this idea, we investigated the dynamics of information retrieval in multiple layer networks using a four-stage feedforward network modelled with continuous dynamics with integrate-and-fire neurons, and associative synaptic connections between stages with a synaptic time constant of 10 ms. Through the implementation of continuous dynamics, we found latency differences in information retrieval of only 5 ms per layer when local excitation was absent and processing was purely feedforward. However, information latency differences increased significantly when non-associative local excitation was included. We also found that local recurrent excitation through associatively modified synapses can contribute significantly to processing in as little as 15 ms per layer, including the feedforward and local feedback processing. Moreover, and in contrast to purely feedforward processing, the contribution of local recurrent feedback was useful and approximately this rapid even when retrieval was made difficult by noise. These findings suggest that cortical information processing can benefit from recurrent circuits when the allowed processing time per cortical area is at least 15 ms long.
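To make the timing argument concrete, the toy simulation below runs a single feedforward stage of leaky integrate-and-fire dynamics with a 10 ms synaptic time constant and reports how soon after input onset the downstream neuron first spikes. All parameter values (membrane time constant, weight, threshold, input rate) are illustrative assumptions, not those of the published four-stage model.

```python
# Toy leaky integrate-and-fire simulation of a single feedforward stage.
# All parameters are illustrative assumptions, not the published values.
import numpy as np

dt = 0.1          # time step (ms)
T = 60.0          # simulated time (ms)
tau_syn = 10.0    # synaptic time constant (ms)
tau_m = 20.0      # membrane time constant (ms)
v_thresh = 1.0    # spike threshold (arbitrary units)
w = 0.12          # synaptic weight per input spike

rng = np.random.default_rng(1)
steps = int(T / dt)
# Presynaptic layer starts firing at t = 10 ms at roughly 300 Hz per neuron,
# summed over 100 input neurons.
input_rate = np.where(np.arange(steps) * dt >= 10.0, 300.0 * 100, 0.0)
input_spikes = rng.poisson(input_rate * dt / 1000.0)

i_syn, v = 0.0, 0.0
first_spike = None
for t in range(steps):
    i_syn += -i_syn * dt / tau_syn + w * input_spikes[t]   # synaptic current
    v += (-v + i_syn) * dt / tau_m                          # membrane potential
    if v >= v_thresh:
        first_spike = t * dt
        break

print("input onset: 10.0 ms, first downstream spike:", first_spike, "ms")
```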
Mobile medical image retrieval
NASA Astrophysics Data System (ADS)
Duc, Samuel; Depeursinge, Adrien; Eggel, Ivan; Müller, Henning
2011-03-01
Images are an integral part of medical practice for diagnosis, treatment planning and teaching. Image retrieval has gained in importance mainly as a research domain over the past 20 years. Both textual and visual retrieval of images are essential. As mobile devices have become reliable and their functionality now equals that of former desktop clients, mobile computing has gained ground and many applications have been explored. This creates a new field of mobile information search & access, and in this context images can play an important role as they often allow complex scenarios to be understood much more quickly and easily than free text. Mobile information retrieval in general has skyrocketed over the past year, with many new applications and tools being developed and all sorts of interfaces being adapted to mobile clients. This article describes the constraints of an information retrieval system including visual and textual information retrieval from the medical literature of BioMedCentral and of the RSNA journals Radiology and Radiographics. Solutions for mobile data access, with an example on an iPhone in a web-based environment, are presented, as iPhones are frequently used and the operating system is bound to become the most frequent smartphone operating system in 2011. A web-based scenario was chosen to allow for use by other smartphone platforms such as Android as well. Constraints of small screens and navigation with touch screens are taken into account in the development of the application. A hybrid approach had to be taken to allow pictures to be taken with the phone camera and uploaded for visual similarity search, as most producers of smartphones block this functionality for web applications. Mobile information access, and in particular access to images, can be surprisingly efficient and effective on smaller screens. Images can be read on screen much faster, and the relevance of documents can be identified quickly through the use of images contained in the text. Problems with the many, often incompatible mobile platforms were discovered and are listed in the text. Mobile information access is a quickly growing domain, and the constraints of mobile access also need to be taken into account for image retrieval. The demonstrated access to the medical literature is most relevant, as the medical literature and its images are clearly the largest knowledge source in the medical field.
ERIC Educational Resources Information Center
Kensinger, Elizabeth A.; Schacter, Daniel L.
2007-01-01
Memories can be retrieved with varied amounts of visual detail, and the emotional content of information can influence the likelihood that visual detail is remembered. In the present fMRI experiment (conducted with 19 adults scanned using a 3T magnet), we examined the neural processes that correspond with recognition of the visual details of…
ERIC Educational Resources Information Center
Dwyer, Francis M.
1985-01-01
This study investigated effects of rehearsal strategies and immediate test formats on delayed retention and effectiveness of visualization on material acquisition and retrieval. Findings indicate different rehearsal methods have different effects in facilitating delayed retention. Information acquisition is facilitated by visualization, although…
Spatial Paradigm for Information Retrieval and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
The SPIRE system consists of software for visual analysis of primarily text based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or Themescape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text based information spaces.
SPIRE1.03. Spatial Paradigm for Information Retrieval and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, K.J.; Bohn, S.; Crow, V.
The SPIRE system consists of software for visual analysis of primarily text based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or Themescape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text based information spaces.
Parietal Activation During Retrieval of Abstract and Concrete Auditory Information
Klostermann, Ellen C.; Kane, Ari J.M.; Shimamura, Arthur P.
2008-01-01
Successful memory retrieval has been associated with a neural circuit that involves prefrontal, precuneus, and posterior parietal regions. Specifically, these regions are active during recognition memory tests when items correctly identified as “old” are compared with items correctly identified as “new.” Yet, as nearly all previous fMRI studies have used visual stimuli, it is unclear whether activations in posterior regions are specifically associated with memory retrieval or if they reflect visuospatial processing. We focus on the status of parietal activations during recognition performance by testing memory for abstract and concrete nouns presented in the auditory modality with eyes closed. Successful retrieval of both concrete and abstract words was associated with increased activation in left inferior parietal regions (BA 40), similar to those observed with visual stimuli. These results demonstrate that activations in the posterior parietal cortex during retrieval cannot be attributed to bottom-up visuospatial processes but instead have a more direct relationship to memory retrieval processes. PMID:18243736
NASA Astrophysics Data System (ADS)
Ehmann, Andreas F.; Downie, J. Stephen
2005-09-01
The objective of the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL) project is the creation of a large, secure corpus of audio and symbolic music data accessible to the music information retrieval (MIR) community for the testing and evaluation of various MIR techniques. As part of the IMIRSEL project, a cross-platform JAVA-based visual programming environment called Music to Knowledge (M2K) is being developed for a variety of music information retrieval related tasks. The primary objective of M2K is to supply the MIR community with a toolset that provides the ability to rapidly prototype algorithms, as well as foster the sharing of techniques within the MIR community through the use of a standardized set of tools. Due to the relatively large size of audio data and the computational costs associated with some digital signal processing and machine learning techniques, M2K is also designed to support distributed computing across computing clusters. In addition, facilities to allow the integration of non-JAVA-based (e.g., C/C++, MATLAB, etc.) algorithms and programs are provided within M2K. [Work supported by the Andrew W. Mellon Foundation and NSF Grants No. IIS-0340597 and No. IIS-0327371.]
NASA Astrophysics Data System (ADS)
Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang
2017-12-01
In recent years, the Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperform the state-of-the-art methods by about 13, 15, and 15%, respectively.
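The robust Hausdorff idea can be sketched briefly; since the abstract does not spell out the exact robust formulation, the quantile-based (partial) variant below is an assumption used only to illustrate how such a metric compares two sets of local descriptors while tolerating outlying background words.

```python
# Hedged sketch of a robust, quantile-based (partial) Hausdorff distance
# between two sets of local feature descriptors. The specific robust variant
# used in the paper is not reproduced here; this is an illustrative assumption.
import numpy as np

def directed_partial_hausdorff(A, B, q=0.9):
    """q-quantile of nearest-neighbour distances from rows of A to rows of B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    nearest = d.min(axis=1)
    return np.quantile(nearest, q)

def robust_hausdorff(A, B, q=0.9):
    """Symmetric robust Hausdorff distance between descriptor sets A and B."""
    return max(directed_partial_hausdorff(A, B, q),
               directed_partial_hausdorff(B, A, q))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 8))
    B = A + rng.normal(scale=0.05, size=A.shape)   # near-duplicate image
    C = rng.normal(loc=3.0, size=(50, 8))          # unrelated image
    print(robust_hausdorff(A, B), robust_hausdorff(A, C))
```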
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Chum, Frank Y.; Gallagher, Suzy; Granier, Martin; Hall, Philip P.; Moreau, Dennis R.; Triantafyllopoulos, Spiros
1985-01-01
This Working Paper Series entry represents the abstracts and visuals associated with presentations delivered by six USL NASA/RECON research team members at the above named conference. The presentations highlight various aspects of NASA contract activities pursued by the participants as they relate to individual research projects. The titles of the six presentations are as follows: (1) The Specification and Design of a Distributed Workstation; (2) An Innovative, Multidisciplinary Educational Program in Interactive Information Storage and Retrieval; (3) Critical Comparative Analysis of the Major Commercial IS and R Systems; (4) Design Criteria for a PC-Based Common User Interface to Remote Information Systems; (5) The Design of an Object-Oriented Graphics Interface; and (6) Knowledge-Based Information Retrieval: Techniques and Applications.
Content-based retrieval using MPEG-7 visual descriptor and hippocampal neural network
NASA Astrophysics Data System (ADS)
Kim, Young Ho; Joung, Lyang-Jae; Kang, Dae-Seong
2005-12-01
With the development of digital technology, many kinds of multimedia data are in wide use, and the requirements for their effective use are increasing. In order to deliver quickly and precisely the information that users want, an effective retrieval method is required. For existing multimedia data it is not possible to apply the MPEG-1, MPEG-2 and MPEG-4 technologies, which are aimed at compression, storage and transmission, so MPEG-7 was introduced as a new technology for effective management and retrieval of multimedia data. In this paper, we extract content-based features using a color descriptor from among the MPEG-7 standard visual descriptors, and reduce the feature data by applying PCA (Principal Components Analysis). We model the cerebral cortex and hippocampal neural networks on the organization of the human brain: the features of the input image data are labeled as reaction patterns in the dentate gyrus region, following the structure of the hippocampal neurons, and noise is removed through an auto-associative memory step in the CA3 region. The CA1 region, receiving the information from CA3, forms long-term or short-term memories learned by its neurons. The hippocampal neural network dynamically separates and combines neurons, expands neurons by attaching additional information through synapses, and adds new features to suit the situation on the user's demand. When the user issues a query, feature values stored in long-term memory are compared first; the network learns feature vectors quickly and constructs optimized features, so indexing and retrieval are fast. Also, because MPEG-7 standard visual descriptors are used as the content-based feature values, retrieval efficiency is improved.
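The PCA reduction step mentioned above is straightforward to illustrate; the descriptor length (a 256-bin color feature per image) and the number of retained components are assumptions, not values from the paper.

```python
# Hedged sketch of reducing MPEG-7-style color descriptors with PCA before
# indexing. Descriptor length and retained components are illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
color_descriptors = rng.random((500, 256))   # one 256-d color feature per image

pca = PCA(n_components=32)
reduced = pca.fit_transform(color_descriptors)

print(reduced.shape)                           # (500, 32)
print(pca.explained_variance_ratio_.sum())     # fraction of variance retained
```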
Intuitive color-based visualization of multimedia content as large graphs
NASA Astrophysics Data System (ADS)
Delest, Maylis; Don, Anthony; Benois-Pineau, Jenny
2004-06-01
Data visualization techniques are penetrating various technological areas. In the field of multimedia, such as information search and retrieval in multimedia archives or digital media production and post-production, data visualization methodologies based on large graphs offer an exciting alternative to conventional storyboard visualization. In this paper we develop a new approach to visualization of multimedia (video) documents based both on large graph clustering and on preliminary video segmentation and indexing.
Photogrammetry for Archaeology: Collecting Pieces Together
NASA Astrophysics Data System (ADS)
Chibunichev, A. G.; Knyaz, V. A.; Zhuravlev, D. V.; Kurkov, V. M.
2018-05-01
The complexity of retrieving and understanding archaeological data requires applying different techniques, tools and sensors for information gathering, processing and documentation. Archaeological research now has an interdisciplinary nature, involving technologies based on different physical principles for retrieving information about archaeological findings. An important part of archaeological data is visual and spatial information, which allows reconstructing the appearance of the findings and the relations between them. Photogrammetry has great potential for accurate acquisition of spatial and visual data of different scale and resolution, allowing archaeological documents of a new type and quality to be created. The aim of the presented study is to develop an approach for creating new forms of archaeological documents, along with a pipeline for producing them and collecting them in one holistic model describing an archaeological site. A set of techniques is developed for acquiring and integrating spatial and visual data at different levels of detail. The application of the developed techniques is demonstrated for documenting the Bosporus archaeological expedition of the Russian State Historical Museum.
NASA Astrophysics Data System (ADS)
Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo
2018-06-01
Open data initiatives promote the online publication and sharing of large amounts of geologic data. How to retrieve information and discover knowledge from such big data is an ongoing challenge. In this paper, we developed an ontology-driven data integration and visualization pilot system for exploring information on regional geologic time, paleontology, and fundamental geology. The pilot system (http://www2.cs.uidaho.edu/%7Emax/gts/)
Adaptive Visualization for Focused Personalized Information Retrieval
ERIC Educational Resources Information Center
Ahn, Jae-wook
2010-01-01
New trends on the Web have fundamentally changed today's information access environment. The traditional information overload problem has evolved beyond quantitative growth to a qualitative level. The way information is produced and consumed is changing, and we need a new paradigm for accessing information. Personalized search is one of…
Elman, Jeremy A; Cohn-Sheehy, Brendan I; Shimamura, Arthur P
2013-03-01
In fMRI analyses, the posterior parietal cortex (PPC) is particularly active during the successful retrieval of episodic memory. To delineate the neural correlates of episodic retrieval more succinctly, we compared retrieval of recently learned spatial locations (photographs of buildings) with retrieval of previously familiar locations (photographs of familiar campus buildings). Episodic retrieval of recently learned locations activated a circumscribed region within the ventral PPC (anterior angular gyrus and adjacent regions in the supramarginal gyrus) as well as medial PPC regions (posterior cingulate gyrus and posterior precuneus). Retrieval of familiar locations activated more posterior regions in the ventral PPC (posterior angular gyrus, LOC) and more anterior regions in the medial PPC (anterior precuneus and retrosplenial cortex). These dissociable effects define more precisely PPC regions involved in the retrieval of recent, contextually bound information as opposed to regions involved in other processes, such as visual imagery, scene reconstruction, and self-referential processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
A New System To Support Knowledge Discovery: Telemakus.
ERIC Educational Resources Information Center
Revere, Debra; Fuller, Sherrilynne S.; Bugni, Paul F.; Martin, George M.
2003-01-01
The Telemakus System builds on the areas of concept representation, schema theory, and information visualization to enhance knowledge discovery from scientific literature. This article describes the underlying theories and an overview of a working implementation designed to enhance the knowledge discovery process through retrieval, visual and…
The Future of Access Technology for Blind and Visually Impaired People.
ERIC Educational Resources Information Center
Schreier, E. M.
1990-01-01
This article describes potential use of new technological products and services by blind/visually impaired people. Items discussed include computer input devices, public telephones, automatic teller machines, airline and rail arrival/departure displays, ticketing machines, information retrieval systems, order-entry terminals, optical character…
Semantics-driven modelling of user preferences for information retrieval in the biomedical domain.
Gladun, Anatoly; Rogushina, Julia; Valencia-García, Rafael; Béjar, Rodrigo Martínez
2013-03-01
A large amount of biomedical and genomic data are currently available on the Internet. However, data are distributed into heterogeneous biological information sources, with little or even no organization. Semantic technologies provide a consistent and reliable basis with which to confront the challenges involved in the organization, manipulation and visualization of data and knowledge. One of the knowledge representation techniques used in semantic processing is the ontology, which is commonly defined as a formal and explicit specification of a shared conceptualization of a domain of interest. The work presented here introduces a set of interoperable algorithms that can use domain and ontological information to improve information-retrieval processes. This work presents an ontology-based information-retrieval system for the biomedical domain. This system, with which some experiments have been carried out that are described in this paper, is based on the use of domain ontologies for the creation and normalization of lightweight ontologies that represent user preferences in a determined domain in order to improve information-retrieval processes.
VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.
ERIC Educational Resources Information Center
Ekman, Paul; And Others
The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…
Toward semantic-based retrieval of visual information: a model-based approach
NASA Astrophysics Data System (ADS)
Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman
2002-07-01
This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues but also contextually relevant visual features are proportionally incorporated in the VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires transforming a real-valued visual feature vector (e.g., a color histogram or Gabor texture) into a discrete event (e.g., a term in text). Good-features-to-track, the rule of thirds, iterative k-means clustering and TSVQ are involved in transforming feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since the sparseness of sample data makes frequency estimates of visual cues unstable. The proposed method naturally allows the integration of heterogeneous visual, temporal or spatial cues in a single classification or matching framework, and can easily be integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
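The quantization of real-valued features into discrete "visual terms" that the VCD builds on can be sketched as follows. This is a simplified illustration only: k-means stands in for the paper's full pipeline (good-features-to-track, rule of thirds, TSVQ), and `all_region_features` and `region_features` are hypothetical arrays of region feature vectors.

```python
# Sketch: quantize region features into visual terms, then describe an image
# by the frequency of each term (a plain term histogram, without the paper's
# contextual weighting).
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_region_features, num_terms=256):
    """Cluster pooled region features into a visual-term vocabulary."""
    return KMeans(n_clusters=num_terms, n_init=10).fit(all_region_features)

def visual_term_histogram(region_features, vocabulary):
    """Histogram of visual-term occurrences for one image."""
    terms = vocabulary.predict(region_features)
    counts = np.bincount(terms, minlength=vocabulary.n_clusters)
    return counts / max(counts.sum(), 1)            # normalized term frequencies
```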
Content-based image retrieval by matching hierarchical attributed region adjacency graphs
NASA Astrophysics Data System (ADS)
Fischer, Benedikt; Thies, Christian J.; Guld, Mark O.; Lehmann, Thomas M.
2004-05-01
Content-based image retrieval requires a formal description of visual information. In medical applications, all relevant biological objects have to be represented by this description. Although color as the primary feature has proven successful in publicly available general-purpose retrieval systems, this description is not applicable to most medical images. Additionally, it has been shown that global features characterizing the whole image do not lead to acceptable results in the medical context, or that they are only suitable for specific applications. For a general-purpose content-based comparison of medical images, local (i.e., regional) features collected at multiple scales must be used. A hierarchical attributed region adjacency graph (HARAG) provides such a representation and transfers image comparison to graph matching. However, building a HARAG from an image requires a restriction in size to be computationally feasible, while at the same time all visually plausible information must be preserved. For this purpose, mechanisms for the reduction of the graph size are presented. Even with a reduced graph, the problem of graph matching remains NP-complete. In this paper, the Similarity Flooding approach and Hopfield-style neural networks are adapted from the graph matching community to the needs of HARAG comparison. Based on synthetic image material built from simple geometric objects, all visually similar regions were matched accordingly, showing the framework's general applicability to content-based retrieval of medical images.
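As a toy illustration of the kind of structure a HARAG starts from, the sketch below builds a flat region adjacency graph from a labelled segmentation with networkx. The hierarchical attributes and graph-size reduction mechanisms of the paper are not reproduced; `label_image` and `gray_image` are assumed inputs, and mean gray value per region is the only attribute kept.

```python
# Sketch: region adjacency graph from a 2-D integer label image.
import numpy as np
import networkx as nx

def region_adjacency_graph(label_image, gray_image):
    g = nx.Graph()
    for label in np.unique(label_image):
        g.add_node(int(label),
                   mean_gray=float(gray_image[label_image == label].mean()))
    # Regions are adjacent if their labels touch horizontally or vertically.
    right = np.stack([label_image[:, :-1].ravel(), label_image[:, 1:].ravel()], axis=1)
    down = np.stack([label_image[:-1, :].ravel(), label_image[1:, :].ravel()], axis=1)
    for a, b in np.vstack([right, down]):
        if a != b:
            g.add_edge(int(a), int(b))
    return g
```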
Searching for Images: The Analysis of Users' Queries for Image Retrieval in American History.
ERIC Educational Resources Information Center
Choi, Youngok; Rasmussen, Edie M.
2003-01-01
Studied users' queries for visual information in American history to identify the image attributes important for retrieval and the characteristics of users' queries for digital images, based on queries from 38 faculty and graduate students. Results of pre- and post-test questionnaires and interviews suggest principal categories of search terms.…
An annotation system for 3D fluid flow visualization
NASA Technical Reports Server (NTRS)
Loughlin, Maria M.; Hughes, John F.
1995-01-01
Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows context-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.
The neurocognitive basis of borrowed context information.
O'Neill, Meagan; Diana, Rachel A
2017-06-01
Falsely remembered items can be accompanied by episodic context retrieval. This finding is difficult to explain because there is no episode that binds the remembered item to the experimenter-controlled context features. The current study examines the neural correlates of false context retrieval when the context features can be traced to encoding episodes of semantically-similar items. Our neuroimaging results support a "dissociated source" mechanism for context borrowing in false memory. We found that parahippocampal cortex (PHc) activation, thought to indicate context retrieval, was greater during trials that involved context borrowing (an incorrect, but plausible source decision) than during baseline correct context retrieval. In contrast, hippocampal activation, thought to indicate retrieval of an episodic binding, was stronger during correct source retrieval than during context borrowing. Vivid context retrieval during false recollection experiences was also indicated by increased activation in visual perceptual regions for context borrowing as compared to other incorrect source judgments. The pattern of findings suggests that context borrowing can arise when unusually strong activation of a semantically-related item's contextual features drives relatively weak retrieval of the associated episodic binding with failure to confirm the item information within that binding. This dissociated source retrieval mechanism suggests that context-driven episodic retrieval does not necessarily lead to retrieval of specific item details. That is, source information can be retrieved in the absence of item memory. Copyright © 2017 Elsevier Ltd. All rights reserved.
Artificial Intelligence Applications to Videodisc Technology
Vries, John K.; Banks, Gordon; McLinden, Sean; Moossy, John; Brown, Melanie
1985-01-01
Much medical information is visual in nature. Since it is not easy to describe pictorial information in linguistic terms, it has been difficult to store and retrieve this type of information. Coupling videodisc technology with artificial intelligence programming techniques may provide a means for solving this problem.
Data augmentation-assisted deep learning of hand-drawn partially colored sketches for visual search
Muhammad, Khan; Baik, Sung Wook
2017-01-01
In recent years, image databases have been growing at exponential rates, making their management, indexing, and retrieval very challenging. Typical image retrieval systems rely on sample images as queries. However, in the absence of sample query images, hand-drawn sketches are also used. The recent adoption of touch screen input devices makes it very convenient to quickly draw shaded sketches of objects to be used for querying image databases. This paper presents a mechanism to provide access to visual information based on users' hand-drawn, partially colored sketches using touch screen devices. A key challenge for sketch-based image retrieval systems is to cope with the inherent ambiguity in sketches due to the lack of colors, textures and shading, and to drawing imperfections. To cope with these issues, we propose to fine-tune a deep convolutional neural network (CNN) using an augmented dataset to extract features from partially colored hand-drawn sketches for query specification in a sketch-based image retrieval framework. The large augmented dataset contains natural images, edge maps, hand-drawn sketches, and de-colorized and de-texturized images, which allows the CNN to effectively model visual contents presented to it in a variety of forms. The deep features extracted from the CNN allow retrieval of images using both sketches and full-color images as queries. We also evaluated the role of partial coloring or shading in sketches in improving retrieval performance. The proposed method is tested on two large datasets for sketch recognition and sketch-based image retrieval and achieves better classification and retrieval performance than many existing methods. PMID:28859140
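The retrieval side of such a system can be sketched as follows, assuming the CNN has already been fine-tuned on the augmented data. The backbone shown (a ResNet-18 from torchvision) and the cosine-similarity ranking are illustrative stand-ins, not the paper's exact network.

```python
# Sketch: embed sketches and photos with a (hypothetically fine-tuned) CNN and
# rank gallery images by cosine similarity to the query.
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(weights=None)        # stand-in for the fine-tuned CNN
backbone.fc = torch.nn.Identity()               # expose 512-d features
backbone.eval()

@torch.no_grad()
def embed(batch):                               # batch: (N, 3, 224, 224) tensor
    return F.normalize(backbone(batch), dim=1)  # unit-length deep features

def retrieve(query_image, gallery_features, top_k=5):
    q = embed(query_image.unsqueeze(0))         # (1, 512)
    scores = gallery_features @ q.T             # cosine similarity per gallery item
    return torch.topk(scores.squeeze(1), k=top_k).indices
```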
Shipstead, Zach; Engle, Randall W
2013-01-01
One approach to understanding working memory (WM) holds that individual differences in WM capacity arise from the amount of information a person can store in WM over short periods of time. This view is especially prevalent in WM research conducted with the visual arrays task. Within this tradition, many researchers have concluded that the average person can maintain approximately 4 items in WM. The present study challenges this interpretation by demonstrating that performance on the visual arrays task is subject to time-related factors that are associated with retrieval from long-term memory. Experiment 1 demonstrates that memory for an array does not decay as a product of absolute time, which is consistent with both maintenance- and retrieval-based explanations of visual arrays performance. Experiment 2 introduced a manipulation of temporal discriminability by varying the relative spacing of trials in time. We found that memory for a target array was significantly influenced by its temporal compression with, or isolation from, a preceding trial. Subsequent experiments extend these effects to sub-capacity set sizes and demonstrate that changes in the size of k are meaningful for predicting performance on other measures of WM capacity as well as general fluid intelligence. We conclude that performance on the visual arrays task does not reflect a multi-item storage system but instead measures a person's ability to accurately retrieve information in the face of proactive interference.
Science information systems: Archive, access, and retrieval
NASA Technical Reports Server (NTRS)
Campbell, William J.
1991-01-01
The objective of this research is to develop technology for the automated characterization and interactive retrieval and visualization of very large, complex scientific data sets. Technologies will be developed for the following specific areas: (1) rapidly archiving data sets; (2) automatically characterizing and labeling data in near real-time; (3) providing users with the ability to browse contents of databases efficiently and effectively; (4) providing users with the ability to access and retrieve system independent data sets electronically; and (5) automatically alerting scientists to anomalies detected in data.
Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.
Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu
2017-07-01
In the field of pathology, whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSI pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for a breast histopathological image. Specifically, the method presents a local statistical feature of nuclei for morphology and distribution of nuclei, and employs the Gabor feature to describe the texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.
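A rough sketch of the indexing idea, topic modelling followed by locality-sensitive hashing, is given below. The nuclei and Gabor features of the paper are replaced by an arbitrary non-negative count matrix `patch_counts`, and the hashing shown is a generic random-hyperplane scheme rather than the authors' configuration.

```python
# Sketch: map patch-level count features to LDA topic vectors, then bucket
# them with random-hyperplane LSH so that search only re-ranks near neighbors.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

def topic_vectors(patch_counts, n_topics=32):
    lda = LatentDirichletAllocation(n_components=n_topics)
    return lda, lda.fit_transform(patch_counts)        # (N, 32) topic mixtures

def lsh_codes(vectors, n_bits=16, seed=0):
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(vectors.shape[1], n_bits))
    return (vectors @ planes > 0).astype(np.uint8)      # binary hash codes

def candidates(query_code, database_codes, max_hamming=2):
    dist = (database_codes != query_code).sum(axis=1)   # Hamming distance per item
    return np.where(dist <= max_hamming)[0]             # indices to re-rank exactly
```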
Odors as effective retrieval cues for stressful episodes.
Wiemers, Uta S; Sauvage, Magdalena M; Wolf, Oliver T
2014-07-01
Olfactory information seems to play a special role in memory due to the fast and direct processing of olfactory information in limbic areas like the amygdala and the hippocampus. This has led to the assumption that odors can serve as effective retrieval cues for autobiographic memories, especially emotional memories. The current study sought to investigate whether an olfactory cue can serve as an effective retrieval cue for memories of a stressful episode. A total of 95 participants were exposed to a psychosocial stressor or a well matching but not stressful control condition. During both conditions were visual objects present, either bound to the situation (central objects) or not (peripheral objects). Additionally, an ambient odor was present during both conditions. The next day, participants engaged in an unexpected object recognition task either under the influence of the same odor as was present during encoding (congruent odor) or another odor (non-congruent odor). Results show that stressed participants show a better memory for all objects and especially for central visual objects if recognition took place under influence of the congruent odor. An olfactory cue thus indeed seems to be an effective retrieval cue for stressful memories. Copyright © 2013 Elsevier Inc. All rights reserved.
Pictorial Visual Rotation Ability of Engineering Design Graphics Students
ERIC Educational Resources Information Center
Ernst, Jeremy Vaughn; Lane, Diarmaid; Clark, Aaron C.
2015-01-01
The ability to rotate visual mental images is a complex cognitive skill. It requires the building of graphical libraries of information through short or long term memory systems and the subsequent retrieval and manipulation of these towards a specified goal. The development of mental rotation skill is of critical importance within engineering…
ERIC Educational Resources Information Center
Huettig, Falk; McQueen, James M.
2007-01-01
Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with "beker," "beaker," for example, the display contained phonological (a beaver, "bever"), shape (a…
Shifting Visual Perspective During Retrieval Shapes Autobiographical Memories
St Jacques, Peggy L.; Szpunar, Karl K.; Schacter, Daniel L.
2016-01-01
The dynamic and flexible nature of memories is evident in our ability to adopt multiple visual perspectives. Although autobiographical memories are typically encoded from the visual perspective of our own eyes they can be retrieved from the perspective of an observer looking at our self. Here, we examined the neural mechanisms of shifting visual perspective during long-term memory retrieval and its influence on online and subsequent memories using functional magnetic resonance imaging (fMRI). Participants generated specific autobiographical memories from the last five years and rated their visual perspective. In a separate fMRI session, they were asked to retrieve the memories across three repetitions while maintaining the same visual perspective as their initial rating or by shifting to an alternative perspective. Visual perspective shifting during autobiographical memory retrieval was supported by a linear decrease in neural recruitment across repetitions in the posterior parietal cortices. Additional analyses revealed that the precuneus, in particular, contributed to both online and subsequent changes in the phenomenology of memories. Our findings show that flexibly shifting egocentric perspective during autobiographical memory retrieval is supported by the precuneus, and suggest that this manipulation of mental imagery during retrieval has consequences for how memories are retrieved and later remembered. PMID:27989780
Optically secured information retrieval using two authenticated phase-only masks.
Wang, Xiaogang; Chen, Wen; Mei, Shengtao; Chen, Xudong
2015-10-23
We propose an algorithm for jointly designing two phase-only masks (POMs) that allow for the encryption and noise-free retrieval of triple images. The images required for optical retrieval are first stored in quick-response (QR) codes for noise-free retrieval and flexible readout. Two sparse POMs are respectively calculated from two different images used as references for authentication based on a modified Gerchberg-Saxton algorithm (GSA) and pixel extraction, and are then used as support constraints in a modified double-phase retrieval algorithm (MPRA), together with the above-mentioned QR codes. No visible information about the target images or the reference images can be obtained from either of these authenticated POMs. This approach allows users to authenticate the two POMs used for image reconstruction without visual observation of the reference images. It also allows users convenient access and readout with mobile devices.
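For orientation, the classic Gerchberg-Saxton iteration that the MPRA builds on can be sketched in a few lines. The sparsification, QR coding and authentication steps described above are not reproduced; `source_amp` and `target_amp` are hypothetical amplitude constraints in the two planes.

```python
# Sketch of the classic Gerchberg-Saxton phase-retrieval loop: bounce between
# two planes linked by a Fourier transform, enforcing the known amplitude in
# each plane while keeping the current phase estimate.
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=100):
    """Return a phase mask whose far field approximates target_amp."""
    phase = np.exp(1j * 2 * np.pi * np.random.rand(*source_amp.shape))
    field = source_amp * phase
    for _ in range(iterations):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))      # impose target amplitude
        field = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(field))  # impose source amplitude
    return np.angle(field)                                 # retrieved phase-only mask
```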
Regional information guidance system based on hypermedia concept
NASA Astrophysics Data System (ADS)
Matoba, Hiroshi; Hara, Yoshinori; Kasahara, Yutako
1990-08-01
A regional information guidance system has been developed on an image workstation. Two main features of this system are a hypermedia data structure and a friendly visual interface realized by a full-color frame memory system. Since the hypermedia data structure manages regional information such as maps, pictures and explanations of points of interest, users can retrieve this information item by item, following links as their interests change. For example, users can retrieve the explanation of a picture through the link between pictures and text explanations. Users can also traverse from one document to another by using keywords as cross-reference indices. The second feature is the use of a full-color, high-resolution, large frame memory for visual interface design. This frame memory system enables real-time operation on image data and natural scene representation. The system also provides a halftone rendering function that enables fade-in/fade-out presentations. These fade-in/fade-out functions, used when displaying and erasing menus and image data, make the visual interface easy on the eyes. The system we have developed is a typical example of a multimedia application. We expect the image workstation to play an important role as a platform for multimedia applications.
Content-based TV sports video retrieval using multimodal analysis
NASA Astrophysics Data System (ADS)
Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru
2003-09-01
In this paper, we propose content-based video retrieval, which retrieves video by its semantic content. Because video data is composed of multimodal information streams such as visual, auditory and textual streams, we describe a strategy of using multimodal analysis for automatically parsing sports video. The paper first defines the basic structure of the sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that the multimodal analysis is effective for video retrieval, allowing users to quickly browse tree-like video clips or input keywords within a predefined domain.
Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.
2012-01-01
The interaction between episodic retrieval and visual attention is relatively unexplored. Given that systems mediating attention and episodic memory appear to be segregated, and perhaps even in competition, it is unclear how visual attention is recruited during episodic retrieval. We investigated the recruitment of visual attention during the suppression of gist-based false recognition, the tendency to falsely recognize items that are similar to previously encountered items. Recruitment of visual attention was associated with activity in the dorsal attention network. The inferior parietal lobule, often implicated in episodic retrieval, tracked veridical retrieval of perceptual detail and showed reduced activity during the engagement of visual attention, consistent with a competitive relationship with the dorsal attention network. These findings suggest that the contribution of the parietal cortex to interactions between visual attention and episodic retrieval entails distinct systems that contribute to different components of the task while also suppressing each other. PMID:22998879
Kostopoulos, Penelope; Petrides, Michael
2016-02-16
There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.
Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev
2010-01-01
Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches for distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches for distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming distance using the learned binary representation. A boosting algorithm is presented to efficiently learn the distance function. We evaluate the proposed algorithm on a mammographic image reference library with an Interactive Search-Assisted Decision Support (ISADS) system and on the medical image data set from ImageCLEF. Our results show that the boosting framework compares favorably to state-of-the-art approaches for distance metric learning in retrieval accuracy, with much lower computational cost. Additional evaluation with the COREL collection shows that our algorithm works well for regular image data sets.
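Once the boosting stage has produced a binary representation and per-bit weights, retrieval reduces to a weighted Hamming distance. The sketch below shows only that final computation; `query_code`, `database_codes` and `bit_weights` are hypothetical inputs rather than outputs of the authors' learner.

```python
# Sketch: rank database images by weighted Hamming distance over learned
# binary codes, where each bit carries a learned weight.
import numpy as np

def weighted_hamming(query_code, database_codes, bit_weights):
    """query_code: (B,), database_codes: (N, B), bit_weights: (B,)."""
    mismatches = database_codes != query_code       # (N, B) boolean mismatch mask
    return mismatches @ bit_weights                 # (N,) weighted distances

def rank(query_code, database_codes, bit_weights, top_k=10):
    d = weighted_hamming(query_code, database_codes, bit_weights)
    return np.argsort(d)[:top_k]                    # most similar items first
```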
Rosburg, Timm; Johansson, Mikael; Sprondel, Volker; Mecklinger, Axel
2014-11-18
Retrieval orientation refers to a pre-retrieval process and conceptualizes the specific form of processing that is applied to a retrieval cue. In the current event-related potential (ERP) study, we sought to find evidence for an involvement of the auditory cortex when subjects attempt to retrieve vocalized information, and hypothesized that adopting retrieval orientation would be beneficial for retrieval accuracy. During study, participants saw object words that they subsequently vocalized or visually imagined. At test, participants had to identify object names of one study condition as targets and to reject object names of the second condition together with new items. Target category switched after half of the test trials. Behaviorally, participants responded less accurately and more slowly to targets of the vocalize condition than to targets of the imagine condition. ERPs to new items varied at a single left electrode (T7) between 500 and 800 ms, indicating a moderate retrieval orientation effect in the subject group as a whole. However, whereas the effect was strongly pronounced in participants with high retrieval accuracy, it was absent in participants with low retrieval accuracy. A current source density (CSD) mapping of the retrieval orientation effect indicated a source over left temporal regions. Independently of retrieval accuracy, the ERP retrieval orientation effect was surprisingly also modulated by test order. The findings are suggestive of an involvement of the auditory cortex in retrieval attempts of vocalized information and confirm that adopting retrieval orientation is potentially beneficial for retrieval accuracy. The effects of test order on retrieval-related processes might reflect a stronger focus on the newness of items in the more difficult test condition when participants started with this condition. Copyright © 2014 Elsevier Inc. All rights reserved.
Fahmy, Gamal; Black, John; Panchanathan, Sethuraman
2006-06-01
Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.
Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao
2015-04-15
This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to choose whether each recognition word was not presented or was presented with which information during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipoles analysis of MEG data indicated that higher equivalent current dipole amplitudes in the right fusiform gyrus occurred during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
Memorable Audiovisual Narratives Synchronize Sensory and Supramodal Neural Responses
2016-01-01
Our brains integrate information across sensory modalities to generate perceptual experiences and form memories. However, it is difficult to determine the conditions under which multisensory stimulation will benefit or hinder the retrieval of everyday experiences. We hypothesized that the determining factor is the reliability of information processing during stimulus presentation, which can be measured through intersubject correlation of stimulus-evoked activity. We therefore presented biographical auditory narratives and visual animations to 72 human subjects visually, auditorily, or combined, while neural activity was recorded using electroencephalography. Memory for the narrated information, contained in the auditory stream, was tested 3 weeks later. While the visual stimulus alone led to no meaningful retrieval, this related stimulus improved memory when it was combined with the story, even when it was temporally incongruent with the audio. Further, individuals with better subsequent memory elicited neural responses during encoding that were more correlated with their peers. Surprisingly, portions of this predictive synchronized activity were present regardless of the sensory modality of the stimulus. These data suggest that the strength of sensory and supramodal activity is predictive of memory performance after 3 weeks, and that neural synchrony may explain the mnemonic benefit of the functionally uninformative visual context observed for these real-world stimuli. PMID:27844062
Luo, Jake; Chen, Weiheng; Wu, Min; Weng, Chunhua
2018-01-01
Background: Prior studies of clinical trial planning indicate that it is crucial to search and screen recruitment sites before starting to enroll participants. However, currently there is no systematic method developed to support clinical investigators to search candidate recruitment sites according to their interested clinical trial factors. Objective: In this study, we aim at developing a new approach to integrating the location data of over one million heterogeneous recruitment sites that are stored in clinical trial documents. The integrated recruitment location data can be searched and visualized using a map-based information retrieval method. The method enables systematic search and analysis of recruitment sites across a large amount of clinical trials. Methods: The location data of more than 1.4 million recruitment sites of over 183,000 clinical trials was normalized and integrated using a geocoding method. The integrated data can be used to support geographic information retrieval of recruitment sites. Additionally, the information of over 6000 clinical trial target disease conditions and close to 4000 interventions was also integrated into the system and linked to the recruitment locations. Such data integration enabled the construction of a novel map-based query system. The system will allow clinical investigators to search and visualize candidate recruitment sites for clinical trials based on target conditions and interventions. Results: The evaluation results showed that the coverage of the geographic location mapping for the 1.4 million recruitment sites was 99.8%. The evaluation of 200 randomly retrieved recruitment sites showed that the correctness of geographic information mapping was 96.5%. The recruitment intensities of the top 30 countries were also retrieved and analyzed. The data analysis results indicated that the recruitment intensity varied significantly across different countries and geographic areas. Conclusion: This study contributed a new data processing framework to extract and integrate the location data of heterogeneous recruitment sites from clinical trial documents. The developed system can support effective retrieval and analysis of potential recruitment sites using target clinical trial factors. PMID:29132636
Luo, Jake; Chen, Weiheng; Wu, Min; Weng, Chunhua
2017-12-01
Prior studies of clinical trial planning indicate that it is crucial to search and screen recruitment sites before starting to enroll participants. However, currently there is no systematic method developed to support clinical investigators to search candidate recruitment sites according to their interested clinical trial factors. In this study, we aim at developing a new approach to integrating the location data of over one million heterogeneous recruitment sites that are stored in clinical trial documents. The integrated recruitment location data can be searched and visualized using a map-based information retrieval method. The method enables systematic search and analysis of recruitment sites across a large amount of clinical trials. The location data of more than 1.4 million recruitment sites of over 183,000 clinical trials was normalized and integrated using a geocoding method. The integrated data can be used to support geographic information retrieval of recruitment sites. Additionally, the information of over 6000 clinical trial target disease conditions and close to 4000 interventions was also integrated into the system and linked to the recruitment locations. Such data integration enabled the construction of a novel map-based query system. The system will allow clinical investigators to search and visualize candidate recruitment sites for clinical trials based on target conditions and interventions. The evaluation results showed that the coverage of the geographic location mapping for the 1.4 million recruitment sites was 99.8%. The evaluation of 200 randomly retrieved recruitment sites showed that the correctness of geographic information mapping was 96.5%. The recruitment intensities of the top 30 countries were also retrieved and analyzed. The data analysis results indicated that the recruitment intensity varied significantly across different countries and geographic areas. This study contributed a new data processing framework to extract and integrate the location data of heterogeneous recruitment sites from clinical trial documents. The developed system can support effective retrieval and analysis of potential recruitment sites using target clinical trial factors. Copyright © 2017 Elsevier B.V. All rights reserved.
Improving retention in alcoholic Korsakoff patients.
Cermak, L S
1980-01-01
Use of visual images facilitated the storage and retrieval of verbal information, use of verbal cues facilitated the retention of nonverbal materials, and semantic analysis enhanced recognition of verbal material by alcoholic Korsakoff patients.
Structuring Legacy Pathology Reports by openEHR Archetypes to Enable Semantic Querying.
Kropf, Stefan; Krücken, Peter; Mueller, Wolf; Denecke, Kerstin
2017-05-18
Clinical information is often stored as free text, e.g. in discharge summaries or pathology reports. These documents are semi-structured using section headers, numbered lists, items and classification strings. However, it is still challenging to retrieve relevant documents since keyword searches applied on complete unstructured documents result in many false positive retrieval results. We are concentrating on the processing of pathology reports as an example for unstructured clinical documents. The objective is to transform reports semi-automatically into an information structure that enables an improved access and retrieval of relevant data. The data is expected to be stored in a standardized, structured way to make it accessible for queries that are applied to specific sections of a document (section-sensitive queries) and for information reuse. Our processing pipeline comprises information modelling, section boundary detection and section-sensitive queries. For enabling a focused search in unstructured data, documents are automatically structured and transformed into a patient information model specified through openEHR archetypes. The resulting XML-based pathology electronic health records (PEHRs) are queried by XQuery and visualized by XSLT in HTML. Pathology reports (PRs) can be reliably structured into sections by a keyword-based approach. The information modelling using openEHR allows saving time in the modelling process since many archetypes can be reused. The resulting standardized, structured PEHRs allow accessing relevant data by retrieving data matching user queries. Mapping unstructured reports into a standardized information model is a practical solution for a better access to data. Archetype-based XML enables section-sensitive retrieval and visualisation by well-established XML techniques. Focussing the retrieval to particular sections has the potential of saving retrieval time and improving the accuracy of the retrieval.
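The keyword-based section boundary detection step can be illustrated with a toy sketch. The header list is hypothetical, and the openEHR archetype mapping, XQuery retrieval and XSLT visualization lie outside this snippet.

```python
# Sketch: split a pathology report into sections by matching known header
# keywords at line starts, then run a section-sensitive keyword query.
import re

SECTION_HEADERS = ["clinical information", "macroscopy", "microscopy", "diagnosis"]
HEADER_RE = re.compile(r"^\s*(%s)\s*:?\s*$" % "|".join(SECTION_HEADERS), re.IGNORECASE)

def split_sections(report_text):
    sections, current = {}, None
    for line in report_text.splitlines():
        match = HEADER_RE.match(line)
        if match:
            current = match.group(1).lower()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {name: "\n".join(lines).strip() for name, lines in sections.items()}

def section_query(report_text, section, keyword):
    """Section-sensitive search: look for the keyword inside one section only."""
    body = split_sections(report_text).get(section, "")
    return keyword.lower() in body.lower()
```

Restricting the search to a named section is what reduces the false positives that plain keyword search over whole documents produces.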
Integrated approach to multimodal media content analysis
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-12-01
In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.
NASA Astrophysics Data System (ADS)
Chen, Andrew A.; Meng, Frank; Morioka, Craig A.; Churchill, Bernard M.; Kangarloo, Hooshang
2005-04-01
Managing pediatric patients with neurogenic bladder (NGB) involves regular laboratory, imaging, and physiologic testing. Using input from domain experts and current literature, we identified specific data points from these tests to develop the concept of an electronic disease vector for NGB. An information extraction engine was used to extract the desired data elements from free-text and semi-structured documents retrieved from the patient's medical record. Finally, a Java-based presentation engine created graphical visualizations of the extracted data. After precision, recall, and timing evaluation, we conclude that these tools may enable clinically useful, automatically generated, and diagnosis-specific visualizations of patient data, potentially improving compliance and ultimately, outcomes.
An information-processing model of three cortical regions: evidence in episodic memory retrieval.
Sohn, Myeong-Ho; Goode, Adam; Stenger, V Andrew; Jung, Kwan-Jin; Carter, Cameron S; Anderson, John R
2005-03-01
ACT-R (Anderson, J.R., et al., 2003. An information-processing model of the BOLD response in symbol manipulation tasks. Psychon. Bull. Rev. 10, 241-261) relates the inferior dorso-lateral prefrontal cortex to a retrieval buffer that holds information retrieved from memory and the posterior parietal cortex to an imaginal buffer that holds problem representations. Because the number of changes in a problem representation is not necessarily correlated with retrieval difficulties, it is possible to dissociate prefrontal-parietal activations. In two fMRI experiments, we examined this dissociation using the fan effect paradigm. Experiment 1 compared a recognition task, in which representation requirement remains the same regardless of retrieval difficulty, with a recall task, in which both representation and retrieval loads increase with retrieval difficulty. In the recognition task, the prefrontal activation revealed a fan effect but not the parietal activation. In the recall task, both regions revealed fan effects. In Experiment 2, we compared visually presented stimuli and aurally presented stimuli using the recognition task. While only the prefrontal region revealed the fan effect, the activation patterns in the prefrontal and the parietal region did not differ by stimulus presentation modality. In general, these results provide support for the prefrontal-parietal dissociation in terms of retrieval and representation and the modality-independent nature of the information processed by these regions. Using ACT-R, we also provide computational models that explain patterns of fMRI responses in these two areas during recognition and recall.
A new pattern associative memory model for image recognition based on Hebb rules and dot product
NASA Astrophysics Data System (ADS)
Gao, Mingyue; Deng, Limiao; Wang, Yanjiang
2018-04-01
A great number of associative memory models have been proposed in the last few years to realize information storage and retrieval inspired by the human brain. However, there is still much room for improvement in those models. In this paper, we extend a binary pattern associative memory model to accomplish real-world image recognition. The learning process is based on the fundamental Hebb rules, and retrieval is implemented by a normalized dot product operation. Our proposed model can not only fulfill rapid memory storage and retrieval of visual information but also support incremental learning without destroying previously learned information. Experimental results demonstrate that our model outperforms the existing Self-Organizing Incremental Neural Network (SOINN) and Back Propagation Neural Network (BPNN) in recognition accuracy and time efficiency.
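The two ingredients named in the abstract, Hebbian storage and retrieval by a normalized dot product, can be shown in a minimal sketch for binary patterns. The extension to real-world images and the incremental-learning machinery are not reproduced here.

```python
# Sketch: Hebbian weight accumulation for storage, normalized dot product
# (cosine similarity) for retrieval of the best-matching stored pattern.
import numpy as np

class HebbianMemory:
    def __init__(self, dim):
        self.weights = np.zeros((dim, dim))

    def store(self, pattern):
        # Hebb rule: strengthen connections between co-active units (+/-1 pattern).
        self.weights += np.outer(pattern, pattern)

    def retrieve(self, cue, stored_patterns):
        # Recover a state from the weight matrix, then pick the stored pattern
        # with the highest normalized dot product similarity to that state.
        state = np.sign(self.weights @ cue)
        sims = stored_patterns @ state / (
            np.linalg.norm(stored_patterns, axis=1) * np.linalg.norm(state) + 1e-12)
        return stored_patterns[np.argmax(sims)]
```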
Indexing the medical open access literature for textual and content-based visual retrieval.
Eggel, Ivan; Müller, Henning
2010-01-01
Over the past few years an increasing number of scientific journals have been created in an open access format. Particularly in the medical field, the number of openly accessible journals is enormous, making a wide body of knowledge available for analysis and retrieval. Part of the trend towards open access publications can be linked to funding bodies such as the NIH (National Institutes of Health) and the Swiss National Science Foundation (SNF) requiring funded projects to make all articles of funded research publicly available. This article describes an approach to make part of the knowledge of open access journals available for retrieval, including the textual information but also the images contained in the articles. For this goal all articles of 24 journals related to medical informatics and medical imaging were crawled from the web pages of BioMed Central. Text and images of the PDF (Portable Document Format) files were indexed separately, and a web-based retrieval interface allows for searching via keyword queries or by visual similarity queries. The starting point for a visual similarity query can be an image uploaded from the local hard disk or any image found via the textual search. Search for similar documents is also possible.
Visual long-term memory has the same limit on fidelity as visual working memory.
Brady, Timothy F; Konkle, Talia; Gill, Jonathan; Oliva, Aude; Alvarez, George A
2013-06-01
Visual long-term memory can store thousands of objects with surprising visual detail, but just how detailed are these representations, and how can one quantify this fidelity? Using the property of color as a case study, we estimated the precision of visual information in long-term memory, and compared this with the precision of the same information in working memory. Observers were shown real-world objects in random colors and were asked to recall the colors after a delay. We quantified two parameters of performance: the variability of internal representations of color (fidelity) and the probability of forgetting an object's color altogether. Surprisingly, the fidelity of color information in long-term memory was comparable to the asymptotic precision of working memory. These results suggest that long-term memory and working memory may be constrained by a common limit, such as a bound on the fidelity required to retrieve a memory representation.
NASA Astrophysics Data System (ADS)
Overoye, D.; Lewis, C.; Butler, D. M.; Andersen, T. J.
2016-12-01
The Global Learning and Observations to Benefit the Environment (GLOBE) Program is a worldwide hands-on, primary and secondary school-based science and education program founded on Earth Day 1995. Implemented in 117 countries, GLOBE promotes the teaching and learning of science, supporting students, teachers and scientists worldwide to collaborate with each other on inquiry-based investigations of the Earth system. The GLOBE Data Information System (DIS) currently supports users with the ability to enter data from over 50 different science protocols. GLOBE's Data Access and Visualization tools have been developed to accommodate the need to display and retrieve data from this large number of protocols. The community of users is also diverse, including NASA scientists, citizen scientists and grade school students. The challenge for GLOBE is to meet the needs from this diverse set of users with protocol specific displays that are simple enough for a GLOBE school to use, but also provide enough features for a NASA Scientist to retrieve data sets they are interested in. During the last 3 years, the GLOBE visualization system has evolved to meet the needs of these various users, leveraging user feedback and technological advances. Further refinements and enhancements continue. In this session we review the design and capabilities of the GLOBE visualization and data retrieval tool set, discuss the evolution of these tools, and discuss coming directions.
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied, improving image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very well suited to processing images. Using a deep convolutional neural network is better than directly extracting visual features from images for retrieval. However, the structure of a deep convolutional neural network is complex, so it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
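A hedged sketch of the two ideas named above, PReLU activations and an L1 weight penalty, in a small PyTorch network is given below. The actual architecture and retrieval pipeline of the paper are not specified here, so this is illustrative only.

```python
# Sketch: a small CNN with PReLU activations; an L1 penalty on the weights is
# added to the task loss during training to discourage over-fitting.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, feature_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.PReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.PReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.embed = nn.Linear(64, feature_dim)

    def forward(self, x):
        return self.embed(self.features(x).flatten(1))

def l1_penalty(model, weight=1e-5):
    """L1 regularization term to add to the task loss during training."""
    return weight * sum(p.abs().sum() for p in model.parameters())

# Usage during training (task_loss assumed to come from the retrieval objective):
# total_loss = task_loss + l1_penalty(model)
```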
Patient Safety—Incorporating Drawing Software into Root Cause Analysis Software
Williams, Linda; Grayson, Diana; Gosbee, John
2001-01-01
Drawing software from Lassalle Technologies (France) designed for Visual Basic is the tool we used to standardize the creation, storage, and retrieval of flow diagrams containing information about adverse events and close calls.
Guillaume, Fabrice; Etienne, Yann
2015-03-01
Using two exclusion tasks, the present study examined how the ERP correlates of face recognition are affected by the nature of the information to be retrieved. Intrinsic (facial expression) and extrinsic (background scene) visual information were paired with face identity and constituted the exclusion criterion at test time. Although perceptual information had to be taken into account in both situations, the FN400 old-new effect was observed only for old target faces on the expression-exclusion task, whereas it was found for both old target and old non-target faces in the background-exclusion situation. These results reveal that the FN400, which is generally interpreted as a correlate of familiarity, was modulated by the retrieval of intra-item and intrinsic face information, but not by the retrieval of extrinsic information. The observed effects on the FN400 depended on the nature of the information to be retrieved and its relationship (unitization) to the recognition target. On the other hand, the parietal old-new effect (generally described as an ERP correlate of recollection) reflected the retrieval of both types of contextual features equivalently. The current findings are discussed in relation to recent controversies about the nature of the recognition processes reflected by the ERP correlates of face recognition. Copyright © 2015 Elsevier B.V. All rights reserved.
Managing Data in a GIS Environment
NASA Technical Reports Server (NTRS)
Beltran, Maria; Yiasemis, Haris
1997-01-01
A Geographic Information System (GIS) is a computer-based system that enables capture, modeling, manipulation, retrieval, analysis and presentation of geographically referenced data. A GIS operates in a dynamic environment of spatial and temporal information. This information is held in a database like any other information system, but performance is more of an issue for a geographic database than a traditional database due to the nature of the data. What distinguishes a GIS from other information systems is the spatial and temporal dimensions of the data and the volume of data (several gigabytes). Most traditional information systems are usually based around tables and textual reports, whereas GIS requires the use of cartographic forms and other visualization techniques. Much of the data can be represented using computer graphics, but a GIS is not a graphics database. A graphical system is concerned with the manipulation and presentation of graphical objects, whereas a GIS handles geographic objects that have not only spatial dimensions but also non-visual, attribute components. Furthermore, the nature of the data on which a GIS operates makes the traditional relational database approach inadequate for retrieving data and answering queries that reference spatial data. The purpose of this paper is to describe the efficiency issues behind storage and retrieval of data within a GIS database. Section 2 gives a general background on GIS, and describes the issues involved in custom vs. commercial and hybrid vs. integrated geographic information systems. Section 3 describes the efficiency issues concerning the management of data within a GIS environment. The paper ends with a summary of its main concerns.
Morgan, Erin E.; Woods, Steven Paul; Poquette, Amelia J.; Vigil, Ofilio; Heaton, Robert K.; Grant, Igor
2012-01-01
Objective Chronic use of methamphetamine (MA) has moderate effects on neurocognitive functions associated with frontal systems, including the executive aspects of verbal episodic memory. Extending this literature, the current study examined the effects of MA on visual episodic memory with the hypothesis that a profile of deficient strategic encoding and retrieval processes would be revealed for visuospatial information (i.e., simple geometric designs), including possible differential effects on source versus item recall. Method The sample comprised 114 MA-dependent (MA+) and 110 demographically-matched MA-nondependent comparison participants (MA−) who completed the Brief Visuospatial Memory Test – Revised (BVMT-R), which was scored for standard learning and memory indices, as well as novel item (i.e., figure) and source (i.e., location) memory indices. Results Results revealed a profile of impaired immediate and delayed free recall (p < .05) in the context of preserved learning slope, retention, and recognition discriminability in the MA+ group. The MA+ group also performed more poorly than MA− participants on Item visual memory (p < .05) but not Source visual memory (p > .05), and no group by task-type interaction was observed (p > .05). Item visual memory demonstrated significant associations with executive dysfunction, deficits in working memory, and shorter length of abstinence from MA use (p < 0.05). Conclusions These visual memory findings are commensurate with studies reporting deficient strategic verbal encoding and retrieval in MA users that are posited to reflect the vulnerability of frontostriatal circuits to the neurotoxic effects of MA. Potential clinical implications of these visual memory deficits are discussed. PMID:22311530
A cloud-based framework for large-scale traditional Chinese medical record retrieval.
Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin
2018-01-01
Electronic medical records are increasingly common in medical practice. The secondary use of medical records has become increasingly important. It relies on the ability to retrieve the complete information about desired patient populations. How to effectively and accurately retrieve relevant medical records from large-scale medical big data is becoming a big challenge. Therefore, we propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. We propose a parallel index building method and build a distributed search cluster; the former is used to improve the performance of index building, and the latter is used to provide highly concurrent online TCMRs retrieval. Then, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster improve the performance of index building and provide highly concurrent online TCMRs retrieval. The multi-indexing model ensures the latest relevant TCMRs are indexed and retrieved in real time. The semantics expansion method and the multi-factor ranking model enhance retrieval quality. The template-based visualization method enhances availability and universality, with medical reports displayed via a friendly web interface. In conclusion, compared with current medical record retrieval systems, our system provides advantages that are useful in improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is more easily integrated with existing clinical systems and can be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.
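As a purely illustrative sketch of two of the retrieval-quality ideas mentioned above (semantics-based query expansion and multi-factor ranking), the snippet below uses a hypothetical synonym map and a recency factor; the actual expansion resources and ranking factors used by the framework are not specified in the abstract.

```python
# Hypothetical sketch: expand query terms via a small synonym map, then rank
# records by a weighted mix of text relevance and record recency.
from datetime import datetime

SYNONYMS = {"fever": ["pyrexia"], "cough": ["tussis"]}  # illustrative only

def expand_query(terms):
    expanded = set(terms)
    for t in terms:
        expanded.update(SYNONYMS.get(t, []))
    return expanded

def score(record, query_terms, w_text=0.7, w_recency=0.3):
    text = record["text"].lower()
    matched = sum(1 for t in query_terms if t in text)
    relevance = matched / max(len(query_terms), 1)
    age_days = (datetime.now() - record["date"]).days
    recency = 1.0 / (1.0 + age_days / 365.0)
    return w_text * relevance + w_recency * recency

def rank(records, raw_terms):
    terms = expand_query([t.lower() for t in raw_terms])
    return sorted(records, key=lambda r: score(r, terms), reverse=True)
```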
Salter, Phia S; Kelley, Nicholas J; Molina, Ludwin E; Thai, Luyen T
2017-09-01
Photographs provide critical retrieval cues for personal remembering, but few studies have considered this phenomenon at the collective level. In this research, we examined the psychological consequences of visual attention to the presence (or absence) of racially charged retrieval cues within American racial segregation photographs. We hypothesised that attention to racial retrieval cues embedded in historical photographs would increase social justice concept accessibility. In Study 1, we recorded gaze patterns with an eye-tracker among participants viewing images that contained racial retrieval cues or were digitally manipulated to remove them. In Study 2, we manipulated participants' gaze behaviour by either directing visual attention toward racial retrieval cues, away from racial retrieval cues, or directing attention within photographs where racial retrieval cues were missing. Across Studies 1 and 2, visual attention to racial retrieval cues in photographs documenting historical segregation predicted social justice concept accessibility.
Cross-Modal Retrieval With CNN Visual Features: A New Baseline.
Wei, Yunchao; Zhao, Yao; Lu, Canyi; Wei, Shikui; Liu, Luoqi; Zhu, Zhenfeng; Yan, Shuicheng
2017-02-01
Recently, convolutional neural network (CNN) visual features have demonstrated their powerful ability as a universal representation for various recognition tasks. In this paper, cross-modal retrieval with CNN visual features is implemented with several classic methods. Specifically, off-the-shelf CNN visual features are extracted from the CNN model, which is pretrained on ImageNet with more than one million images from 1000 object categories, as a generic image representation to tackle cross-modal retrieval. To further enhance the representational ability of CNN visual features, based on the pretrained CNN model on ImageNet, a fine-tuning step is performed by using the open source Caffe CNN library for each target data set. Besides, we propose a deep semantic matching method to address the cross-modal retrieval problem with respect to samples which are annotated with one or multiple labels. Extensive experiments on five popular publicly available data sets well demonstrate the superiority of CNN visual features for cross-modal retrieval.
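A minimal sketch of the descriptor-based retrieval step described above: it assumes image and text items have already been mapped into a common feature space (for example, by the CNN features and the semantic matching step) and ranks gallery items by cosine similarity; the function names are ours, not the paper's.

```python
# Rank gallery items from one modality against a query from the other,
# assuming both have been projected into a shared feature space.
import numpy as np

def cosine_sim(query, gallery):
    q = query / (np.linalg.norm(query) + 1e-12)
    g = gallery / (np.linalg.norm(gallery, axis=1, keepdims=True) + 1e-12)
    return g @ q

def retrieve(query_vec, gallery_vecs, top_k=5):
    """Return indices of the top-k gallery items for a cross-modal query."""
    sims = cosine_sim(np.asarray(query_vec, float), np.asarray(gallery_vecs, float))
    return np.argsort(-sims)[:top_k]
```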
Using the Saccharomyces Genome Database (SGD) for analysis of genomic information
Skrzypek, Marek S.; Hirschman, Jodi
2011-01-01
Analysis of genomic data requires access to software tools that place the sequence-derived information in the context of biology. The Saccharomyces Genome Database (SGD) integrates functional information about budding yeast genes and their products with a set of analysis tools that facilitate exploring their biological details. This unit describes how the various types of functional data available at SGD can be searched, retrieved, and analyzed. Starting with the guided tour of the SGD Home page and Locus Summary page, this unit highlights how to retrieve data using YeastMine, how to visualize genomic information with GBrowse, how to explore gene expression patterns with SPELL, and how to use Gene Ontology tools to characterize large-scale datasets. PMID:21901739
Explicit awareness supports conditional visual search in the retrieval guidance paradigm.
Buttaccio, Daniel R; Lange, Nicholas D; Hahn, Sowon; Thomas, Rick P
2014-01-01
In four experiments we explored whether participants would be able to use probabilistic prompts to simplify perceptually demanding visual search in a task we call the retrieval guidance paradigm. On each trial a memory prompt appeared prior to (and during) the search task, and the diagnosticity of the prompt(s) was manipulated to provide complete, partial, or non-diagnostic information regarding the target's color on each trial (Experiments 1-3). In Experiment 1 we found that more diagnostic prompts were associated with faster visual search performance. However, similar visual search behavior was observed in Experiment 2 when the diagnosticity of the prompts was eliminated, suggesting that participants in Experiment 1 were merely relying on base rate information to guide search and were not utilizing the prompts. In Experiment 3 participants were informed of the relationship between the prompts and the color of the target, and this was associated with faster search performance relative to Experiment 1, suggesting that the participants were using the prompts to guide search. Additionally, in Experiment 3 a knowledge test was implemented, and performance in this task was associated with qualitative differences in search behavior, such that participants who were able to name the color(s) most associated with the prompts were faster to find the target than participants who were unable to do so. However, in Experiments 1-3 diagnosticity of the memory prompt was manipulated via base rate information, making it possible that participants were merely relying on base rate information to inform search in Experiment 3. In Experiment 4 we manipulated diagnosticity of the prompts without manipulating base rate information and found a similar pattern of results as in Experiment 3. Together, the results emphasize the importance of base rate and diagnosticity information in visual search behavior. In the General discussion section we explore how a recent computational model of hypothesis generation (HyGene; Thomas, Dougherty, Sprenger, & Harbison, 2008), linking attention with long-term and working memory, accounts for the present results and provides a useful framework of cued recall visual search. Copyright © 2013 Elsevier B.V. All rights reserved.
Tourassi, Georgia D; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y; Floyd, Carey E
2007-01-01
The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining high detection rate for malignant masses.
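For illustration of the kind of entropy-based similarity measure compared in the study above, the sketch below computes mutual information between a query ROI and a knowledge-base ROI from their joint gray-level histogram; the study's eight specific measures, bin settings, and preprocessing are not reproduced here.

```python
# Mutual information between two regions of interest, estimated from their
# joint gray-level histogram (one example of an entropy-based similarity).
import numpy as np

def mutual_information(roi_a, roi_b, bins=32):
    joint, _, _ = np.histogram2d(roi_a.ravel(), roi_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
```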
An interference model of visual working memory.
Oberauer, Klaus; Lin, Hsuan-Yu
2017-01-01
The article introduces an interference model of working memory for information in a continuous similarity space, such as the features of visual objects. The model incorporates the following assumptions: (a) Probability of retrieval is determined by the relative activation of each retrieval candidate at the time of retrieval; (b) activation comes from 3 sources in memory: cue-based retrieval using context cues, context-independent memory for relevant contents, and noise; (c) 1 memory object and its context can be held in the focus of attention, where it is represented with higher precision, and partly shielded against interference. The model was fit to data from 4 continuous-reproduction experiments testing working memory for colors or orientations. The experiments involved variations of set size, kind of context cues, precueing, and retro-cueing of the to-be-tested item. The interference model fit the data better than 2 competing models, the Slot-Averaging model and the Variable-Precision resource model. The interference model also fared well in comparison to several new models incorporating alternative theoretical assumptions. The experiments confirm 3 novel predictions of the interference model: (a) Nontargets intrude in recall to the extent that they are close to the target in context space; (b) similarity between target and nontarget features improves recall, and (c) precueing-but not retro-cueing-the target substantially reduces the set-size effect. The success of the interference model shows that working memory for continuous visual information works according to the same principles as working memory for more discrete (e.g., verbal) contents. Data and model codes are available at https://osf.io/wgqd5/. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
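A toy numerical illustration of the model's central assumption (probability of retrieval determined by an item's relative activation, with activation pooled from cue-based retrieval, context-independent memory, and noise); the weighting and noise process below are placeholders, not the fitted parameters of the published model.

```python
# Toy illustration: each candidate's activation sums a cue-match term, an
# item-strength term, and noise; retrieval probability is its share of the
# total activation (Luce-choice-style normalization).
import numpy as np

def retrieval_probabilities(cue_match, item_strength, noise_level=0.1, rng=None):
    rng = rng or np.random.default_rng()
    cue_match = np.asarray(cue_match, dtype=float)        # similarity of each item's context to the cue
    item_strength = np.asarray(item_strength, dtype=float)
    activation = cue_match + item_strength + noise_level * rng.random(cue_match.shape)
    return activation / activation.sum()
```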
Remembering the past and imagining the future
Byrne, Patrick; Becker, Suzanna; Burgess, Neil
2009-01-01
The neural mechanisms underlying spatial cognition are modelled, integrating neuronal, systems and behavioural data, and addressing the relationships between long-term memory, short-term memory and imagery, and between egocentric and allocentric and visual and idiothetic representations. Long-term spatial memory is modeled as attractor dynamics within medial-temporal allocentric representations, and short-term memory as egocentric parietal representations driven by perception, retrieval and imagery, and modulated by directed attention. Both encoding and retrieval/imagery require translation between egocentric and allocentric representations, mediated by posterior parietal and retrosplenial areas and utilizing head direction representations in Papez’s circuit. Thus hippocampus effectively indexes information by real or imagined location, while Papez’s circuit translates to imagery or from perception according to the direction of view. Modulation of this translation by motor efference allows “spatial updating” of representations, while prefrontal simulated motor efference allows mental exploration. The alternating temporo-parietal flows of information are organized by the theta rhythm. Simulations demonstrate the retrieval and updating of familiar spatial scenes, hemispatial neglect in memory, and the effects on hippocampal place cell firing of lesioned head direction representations and of conflicting visual and idiothetic inputs. PMID:17500630
Shifting visual perspective during memory retrieval reduces the accuracy of subsequent memories.
Marcotti, Petra; St Jacques, Peggy L
2018-03-01
Memories for events can be retrieved from visual perspectives that were never experienced, reflecting the dynamic and reconstructive nature of memories. Characteristics of memories can be altered when shifting from an own eyes perspective, the way most events are initially experienced, to an observer perspective, in which one sees oneself in the memory. Moreover, recent evidence has linked these retrieval-related effects of visual perspective to subsequent changes in memories. Here we examine how shifting visual perspective influences the accuracy of subsequent memories for complex events encoded in the lab. Participants performed a series of mini-events that were experienced from their own eyes, and were later asked to retrieve memories for these events while maintaining the own eyes perspective or shifting to an alternative observer perspective. We then examined how shifting perspective during retrieval modified memories by influencing the accuracy of recall on a final memory test. Across two experiments, we found that shifting visual perspective reduced the accuracy of subsequent memories and that reductions in vividness when shifting visual perspective during retrieval predicted these changes in the accuracy of memories. Our findings suggest that shifting from an own eyes to an observer perspective influences the accuracy of long-term memories.
A prototype feature system for feature retrieval using relationships
Choi, J.; Usery, E.L.
2009-01-01
Using a feature data model, geographic phenomena can be represented effectively by integrating space, theme, and time. This paper extends and implements a feature data model that supports query and visualization of geographic features using their non-spatial and temporal relationships. A prototype feature-oriented geographic information system (FOGIS) is then developed, and a feature store, named the Feature Database, is designed. Buildings from the U.S. Marine Corps Base, Camp Lejeune, North Carolina and subways in Chicago, Illinois are used to test the developed system. The results of the applications show the strength of the feature data model and of the developed system, FOGIS, when non-spatial and temporal relationships are utilized to retrieve and visualize individual features.
Accessibility limits recall from visual working memory.
Rajsic, Jason; Swan, Garrett; Wilson, Daryl E; Pratt, Jay
2017-09-01
In this article, we demonstrate limitations of accessibility of information in visual working memory (VWM). Recently, cued-recall has been used to estimate the fidelity of information in VWM, where the feature of a cued object is reproduced from memory (Bays, Catalao, & Husain, 2009; Wilken & Ma, 2004; Zhang & Luck, 2008). Response error in these tasks has been largely studied with respect to failures of encoding and maintenance; however, the retrieval operations used in these tasks remain poorly understood. By varying the number and type of object features provided as a cue in a visual delayed-estimation paradigm, we directly assess the nature of retrieval errors in delayed estimation from VWM. Our results demonstrate that providing additional object features in a single cue reliably improves recall, largely by reducing swap, or misbinding, responses. In addition, performance simulations using the binding pool model (Swan & Wyble, 2014) were able to mimic this pattern of performance across a large span of parameter combinations, demonstrating that the binding pool provides a possible mechanism underlying this pattern of results that is not merely a symptom of one particular parametrization. We conclude that accessing visual working memory is a noisy process, and can lead to errors over and above those of encoding and maintenance limitations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Helmers, Thorben; Thöming, Jorg; Mießner, Ulrich
2017-11-01
In this article, we introduce a novel approach to retrieve spatially and temporally resolved Taylor slug flow information from a single non-invasive photometric flow sensor. The presented approach uses disperse phase surface properties to retrieve the instantaneous velocity information from a single sensor's time-scaled signal. For this purpose, a photometric sensor system is simulated using a ray-tracing algorithm to calculate spatially resolved near-infrared transmission signals. At the signal position corresponding to the rear droplet cap, a correlation factor of the droplet's geometric properties is retrieved and used to extract the instantaneous droplet velocity from the real sensor's temporal transmission signal. Furthermore, a correlation for the rear cap geometry based on the a priori known total superficial flow velocity is developed, because the cap curvature is itself velocity sensitive. Our model for velocity derivation is validated, and measurements of a first prototype showcase the capability of the device. Long-term measurements visualize systematic fluctuations in droplet lengths, velocities, and frequencies that, without observation on a larger timescale, could otherwise have been identified as measurement errors rather than systematic phenomena.
Hutchinson, J Benjamin; Uncapher, Melina R; Wagner, Anthony D
2015-01-01
Retrieval of episodic memories is a multi-component act that relies on numerous operations ranging from processing the retrieval cue, evaluating retrieved information, and selecting the appropriate response given the demands of the task. Motivated by a rich functional neuroimaging literature, recent theorizing about various computations at retrieval has focused on the role of posterior parietal cortex (PPC). In a potentially promising line of research, recent neuroimaging findings suggest that different subregions of dorsal PPC respond distinctly to different aspects of retrieval decisions, suggesting that better understanding of their contributions might shed light on the component processes of retrieval. In an attempt to understand the basic operations performed by dorsal PPC, we used functional MRI and functional connectivity analyses to examine how activation in, and connectivity between, dorsal PPC and ventral temporal regions representing retrieval cues varies as a function of retrieval decision uncertainty. Specifically, participants made a five-point recognition confidence judgment for a series of old and new visually presented words. Consistent with prior studies, memory-related activity patterns dissociated across left dorsal PPC subregions, with activity in the lateral IPS tracking the degree to which participants perceived an item to be old, whereas activity in the SPL increased as a function of decision uncertainty. Importantly, whole-brain functional connectivity analyses further revealed that SPL activity was more strongly correlated with that in the visual word-form area during uncertain relative to certain decisions. These data suggest that the involvement of SPL during episodic retrieval reflects, at least in part, the processing of the retrieval cue, perhaps in service of attempts to increase the mnemonic evidence elicited by the cue. Copyright © 2014 Elsevier Inc. All rights reserved.
Divided attention improves delayed, but not immediate retrieval of a consolidated memory.
Kessler, Yoav; Vandermorris, Susan; Gopie, Nigel; Daros, Alexander; Winocur, Gordon; Moscovitch, Morris
2014-01-01
A well-documented dissociation between memory encoding and retrieval concerns the role of attention in the two processes. The typical finding is that divided attention (DA) during encoding impairs future memory, but retrieval is relatively robust to attentional manipulations. However, memory research in the past 20 years has demonstrated that retrieval is a memory-changing process, in which the strength and availability of information are modified by various characteristics of the retrieval process. Based on this logic, several studies examined the effects of DA during retrieval (Test 1) on a future memory test (Test 2). These studies yielded inconsistent results. The present study examined the role of memory consolidation in accounting for the after-effect of DA during retrieval. Initial learning required a classification of visual stimuli, and hence involved incidental learning. Test 1 was administered 24 hours after initial learning, and therefore required retrieval of consolidated information. Test 2 was administered either immediately following Test 1 or after a 24-hour delay. Our results show that the effect of DA on Test 2 depended on this delay. DA during Test 1 did not affect performance on Test 2 when it was administered immediately, but improved performance when Test 2 was given 24 hours later. The results are consistent with other findings showing long-term benefits of retrieval difficulty. Implications for theories of reconsolidation in human episodic memory are discussed.
Annotating image ROIs with text descriptions for multimodal biomedical document retrieval
NASA Astrophysics Data System (ADS)
You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.
2013-01-01
Regions of interest (ROIs) that are pointed to by overlaid markers (arrows, asterisks, etc.) in biomedical images are expected to contain more important and relevant information than other regions for biomedical article indexing and retrieval. We have developed several algorithms that localize and extract the ROIs by recognizing markers on images. Cropped ROIs then need to be annotated with contents describing them best. In most cases accurate textual descriptions of the ROIs can be found from figure captions, and these need to be combined with image ROIs for annotation. The annotated ROIs can then be used to, for example, train classifiers that separate ROIs into known categories (medical concepts), or to build visual ontologies, for indexing and retrieval of biomedical articles. We propose an algorithm that pairs visual and textual ROIs that are extracted from images and figure captions, respectively. This algorithm based on dynamic time warping (DTW) clusters recognized pointers into groups, each of which contains pointers with identical visual properties (shape, size, color, etc.). Then a rule-based matching algorithm finds the best matching group for each textual ROI mention. Our method yields a precision and recall of 96% and 79%, respectively, when ground truth textual ROI data is used.
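Since the pairing algorithm above is described as DTW-based, the sketch below shows a plain dynamic-time-warping distance between two one-dimensional feature sequences; the actual pointer shape features, the clustering step, and the rule-based matching are not specified here, so the sequences are placeholders.

```python
# Plain dynamic-time-warping distance between two 1-D sequences, used here
# only to illustrate the kind of distance the pointer clustering relies on.
import numpy as np

def dtw_distance(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]
```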
NASA Astrophysics Data System (ADS)
Anavi, Yaron; Kogan, Ilya; Gelbart, Elad; Geva, Ofer; Greenspan, Hayit
2016-03-01
We explore the combination of text metadata, such as patients' age and gender, with image-based features, for X-ray chest pathology image retrieval. We focus on a feature set extracted from a pre-trained deep convolutional network shown in earlier work to achieve state-of-the-art results. Two distance measures are explored: a descriptor-based measure, which computes the distance between image descriptors, and a classification-based measure, which is computed by comparing the corresponding SVM classification probabilities. We show that retrieval results improve once the age and gender information is combined with the features extracted from the last layers of the network, with best results using the classification-based scheme. Visualization of the X-ray data is presented by embedding the high-dimensional deep learning features in a 2-D space while preserving the pairwise distances using the t-SNE algorithm. The 2-D visualization gives the unique ability to find groups of X-ray images that are similar to the query image and among themselves, which is a characteristic we do not see in a traditional 1-D ranking.
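A short sketch of the visualization step just described, assuming scikit-learn's t-SNE implementation; the deep feature extraction (from the pre-trained network's last layers) is taken as already done, and the perplexity value is an illustrative default rather than the paper's setting.

```python
# Embed high-dimensional deep features in 2-D with t-SNE for visualization.
import numpy as np
from sklearn.manifold import TSNE

def embed_features(features, perplexity=30, random_state=0):
    features = np.asarray(features)
    tsne = TSNE(n_components=2, perplexity=perplexity, random_state=random_state)
    return tsne.fit_transform(features)   # (n_images, 2) coordinates for plotting
```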
Video-assisted segmentation of speech and audio track
NASA Astrophysics Data System (ADS)
Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.
1999-08-01
Video database research is commonly concerned with the storage and retrieval of visual information involving sequence segmentation, shot representation and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation, such as different speakers, background music, singing and distinctive sounds. These different acoustic categories can be modeled to allow for effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio-based segmentation can be combined with video scene shot detection in order to achieve partitioning of the multimedia material into semantically significant segments.
Hsu, Nina S; Kraemer, David J M; Oliver, Robyn T; Schlichting, Margaret L; Thompson-Schill, Sharon L
2011-09-01
Neuroimaging tests of sensorimotor theories of semantic memory hinge on the extent to which similar activation patterns are observed during perception and retrieval of objects or object properties. The present study was motivated by the hypothesis that some of the seeming discrepancies across studies reflect flexibility in the systems responsible for conceptual and perceptual processing of color. Specifically, we test the hypothesis that retrieval of color knowledge can be influenced by both context (a task variable) and individual differences in cognitive style (a subject variable). In Experiment 1, we provide fMRI evidence for differential activity during color knowledge retrieval by having subjects perform a verbal task, in which context encouraged subjects to retrieve more- or less-detailed information about the colors of named common objects in a blocked experimental design. In the left fusiform, we found more activity during retrieval of more- versus less-detailed color knowledge. We also assessed preference for verbal or visual cognitive style, finding that brain activity in the left lingual gyrus significantly correlated with preference for a visual cognitive style. We replicated many of these effects in Experiment 2, in which stimuli were presented more quickly, in a random order, and in the auditory modality. This illustration of some of the factors that can influence color knowledge retrieval leads to the conclusion that tests of conceptual and perceptual overlap must consider variation in both of these processes.
Visualizing and Validating Metadata Traceability within the CDISC Standards.
Hume, Sam; Sarnikar, Surendra; Becnel, Lauren; Bennett, Dorine
2017-01-01
The Food & Drug Administration has begun requiring that electronic submissions of regulated clinical studies utilize the Clinical Data Information Standards Consortium data standards. Within regulated clinical research, traceability is a requirement and indicates that the analysis results can be traced back to the original source data. Current solutions for clinical research data traceability are limited in terms of querying, validation and visualization capabilities. This paper describes (1) the development of metadata models to support computable traceability and traceability visualizations that are compatible with industry data standards for the regulated clinical research domain, (2) adaptation of graph traversal algorithms to make them capable of identifying traceability gaps and validating traceability across the clinical research data lifecycle, and (3) development of a traceability query capability for retrieval and visualization of traceability information.
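To make the graph-traversal idea above concrete, the sketch below models traceability links as a directed graph and walks upstream from each analysis result to check that it reaches source data; results that do not are reported as traceability gaps. The data structures and function names are ours, not part of the CDISC standards or the paper's implementation.

```python
# Breadth-first traversal from each analysis result back toward source data;
# results that never reach a source node are traceability gaps.
from collections import deque

def find_traceability_gaps(edges, results, sources):
    """edges: dict node -> list of upstream nodes; results/sources: sets of node ids."""
    gaps = []
    for result in results:
        seen, queue, reaches_source = set(), deque([result]), False
        while queue:
            node = queue.popleft()
            if node in sources:
                reaches_source = True
                break
            for upstream in edges.get(node, []):
                if upstream not in seen:
                    seen.add(upstream)
                    queue.append(upstream)
        if not reaches_source:
            gaps.append(result)
    return gaps
```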
User centered and ontology based information retrieval system for life sciences.
Sy, Mohameth-François; Ranwez, Sylvie; Montmain, Jacky; Regnault, Armelle; Crampes, Michel; Ranwez, Vincent
2012-01-25
Because of the increasing number of electronic resources, designing efficient tools to retrieve and exploit them is a major challenge. Some improvements have been offered by semantic Web technologies and applications based on domain ontologies. In life science, for instance, the Gene Ontology is widely exploited in genomic applications and the Medical Subject Headings is the basis of the biomedical publication indexation and information retrieval process proposed by PubMed. However, current search engines suffer from two main drawbacks: there is limited user interaction with the list of retrieved resources, and no explanation for their adequacy to the query is provided. Users may thus be confused by the selection and have no idea of how to adapt their queries so that the results match their expectations. This paper describes an information retrieval system that relies on domain ontology to widen the set of relevant documents that is retrieved and that uses a graphical rendering of query results to favor user interactions. Semantic proximities between ontology concepts and aggregating models are used to assess documents' adequacy with respect to a query. The selection of documents is displayed in a semantic map to provide graphical indications that make explicit to what extent they match the user's query; this man/machine interface favors a more interactive and iterative exploration of the data corpus, by facilitating query concept weighting and visual explanation. We illustrate the benefit of using this information retrieval system on two case studies, one of which aims at collecting human genes related to transcription factors involved in the hemopoiesis pathway. The ontology-based information retrieval system described in this paper (OBIRS) is freely available at: http://www.ontotoolkit.mines-ales.fr/ObirsClient/. This environment is a first step towards a user-centred application in which the system enlightens relevant information to provide decision help.
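As an illustrative sketch of the adequacy assessment described above (not OBIRS itself), the snippet below aggregates, over weighted query concepts, the best semantic proximity to the concepts annotating a document; the proximity function is left abstract because the actual measure and aggregation model are the paper's, not shown here.

```python
# Score a document's adequacy to a query as the weight-normalized sum of the
# best ontology-based proximity between each query concept and the document's
# annotation concepts.
def document_adequacy(query_concepts, doc_concepts, proximity):
    """query_concepts: dict concept -> weight; proximity: f(concept_a, concept_b) -> [0, 1]."""
    total_weight = sum(query_concepts.values()) or 1.0
    score = 0.0
    for concept, weight in query_concepts.items():
        best = max((proximity(concept, d) for d in doc_concepts), default=0.0)
        score += weight * best
    return score / total_weight
```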
Presentation video retrieval using automatically recovered slide and spoken text
NASA Astrophysics Data System (ADS)
Cooper, Matthew
2013-03-01
Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
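A small sketch of how retrieval over the automatically recovered text might be set up, assuming scikit-learn: each video segment's slide (or spoken) text is indexed with TF-IDF and ranked against a query by cosine similarity. This is a generic baseline for illustration, not the paper's evaluated system.

```python
# Index per-segment OCR/ASR text with TF-IDF and rank segments for a query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_segments(segment_texts, query, top_k=10):
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_matrix = vectorizer.fit_transform(segment_texts)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    return scores.argsort()[::-1][:top_k]
```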
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.
This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.
Usability of stereoscopic view in teleoperation
NASA Astrophysics Data System (ADS)
Boonsuk, Wutthigrai
2015-03-01
Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. The 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology refers to the idea that the human brain develops depth perception by retrieving information from the two eyes. Our brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared to the 2D view. However, it is still uncertain whether additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.
Auditory memory can be object based.
Dyson, Benjamin J; Ishfaq, Feraz
2008-04-01
Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.
Ryan, Lee; Cox, Christine; Hayes, Scott M; Nadel, Lynn
2008-01-01
Whether or not the hippocampus participates in semantic memory retrieval has been the focus of much debate in the literature. However, few neuroimaging studies have directly compared hippocampal activation during semantic and episodic retrieval tasks that are well matched in all respects other than the source of the retrieved information. In Experiment 1, we compared hippocampal fMRI activation during a classic semantic memory task, category production, and an episodic version of the same task, category cued recall. Left hippocampal activation was observed in both episodic and semantic conditions, although other regions of the brain clearly distinguished the two tasks. Interestingly, participants reported using retrieval strategies during the semantic retrieval task that relied on autobiographical and spatial information; for example, visualizing themselves in their kitchen while producing items for the category kitchen utensils. In Experiment 2, we considered whether the use of these spatial and autobiographical retrieval strategies could have accounted for the hippocampal activation observed in Experiment 1. Categories were presented that elicited one of three retrieval strategy types, autobiographical and spatial, autobiographical and nonspatial, and neither autobiographical nor spatial. Once again, similar hippocampal activation was observed for all three category types, regardless of the inclusion of spatial or autobiographical content. We conclude that the distinction between semantic and episodic memory is more complex than classic memory models suggest.
NASA Astrophysics Data System (ADS)
Moon, Hye Sun
Visuals are most extensively used as instructional tools in education to present spatially-based information. Recent computer technology allows the generation of 3D animated visuals to extend the presentation in computer-based instruction. Animated visuals in 3D representation not only possess motivational value that promotes positive attitudes toward instruction but also facilitate learning when the subject matter requires dynamic motion and 3D visual cue. In this study, three questions are explored: (1) how 3D graphics affects student learning and attitude, in comparison with 2D graphics; (2) how animated graphics affects student learning and attitude, in comparison with static graphics; and (3) whether the use of 3D graphics, when they are supported by interactive animation, is the most effective visual cues to improve learning and to develop positive attitudes. A total of 145 eighth-grade students participated in a 2 x 2 factorial design study. The subjects were randomly assigned to one of four computer-based instructions: 2D static; 2D animated; 3D static; and 3D animated. The results indicated that: (1) Students in the 3D graphic condition exhibited more positive attitudes toward instruction than those in the 2D graphic condition. No group differences were found between the posttest score of 3D graphic condition and that of 2D graphic condition. However, students in the 3D graphic condition took less time for information retrieval on posttest than those in the 2D graphic condition. (2) Students in the animated graphic condition exhibited slightly more positive attitudes toward instruction than those in the static graphic condition. No group differences were found between the posttest score of animated graphic condition and that of static graphic condition. However, students in the animated graphic condition took less time for information retrieval on posttest than those in the static graphic condition. (3) Students in the 3D animated graphic condition exhibited more positive attitudes toward instruction than those in other treatment conditions (2D static, 2D animated, and 3D static conditions). No group differences were found in the posttest scores among four treatment conditions. However, students in the 3D animated condition took less time for information retrieval on posttest than those in other treatment conditions.
Representational Account of Memory: Insights from Aging and Synesthesia.
Pfeifer, Gaby; Ward, Jamie; Chan, Dennis; Sigala, Natasha
2016-12-01
The representational account of memory envisages perception and memory to be on a continuum rather than in discretely divided brain systems [Bussey, T. J., & Saksida, L. M. Memory, perception, and the ventral visual-perirhinal-hippocampal stream: Thinking outside of the boxes. Hippocampus, 17, 898-908, 2007]. We tested this account using a novel between-group design with young grapheme-color synesthetes, older adults, and young controls. We investigated how the disparate sensory-perceptual abilities between these groups translated into associative memory performance for visual stimuli that do not induce synesthesia. ROI analyses of the entire ventral visual stream showed that associative retrieval (a pair-associate retrieved in the absence of a visual stimulus) yielded enhanced activity in young and older adults' visual regions relative to synesthetes, whereas associative recognition (deciding whether a visual stimulus was the correct pair-associate) was characterized by enhanced activity in synesthetes' visual regions relative to older adults. Whole-brain analyses at associative retrieval revealed an effect of age in early visual cortex, with older adults showing enhanced activity relative to synesthetes and young adults. At associative recognition, the group effect was reversed: Synesthetes showed significantly enhanced activity relative to young and older adults in early visual regions. The inverted group effects observed between retrieval and recognition indicate that reduced sensitivity in visual cortex (as in aging) comes with increased activity during top-down retrieval and decreased activity during bottom-up recognition, whereas enhanced sensitivity (as in synesthesia) shows the opposite pattern. Our results provide novel evidence for the direct contribution of perceptual mechanisms to visual associative memory based on the examples of synesthesia and aging.
Computable visually observed phenotype ontological framework for plants
2011-01-01
Background The ability to search for and precisely compare similar phenotypic appearances within and across species has vast potential in plant science and genetic research. The difficulty in doing so lies in the fact that many visual phenotypic data, especially visually observed phenotypes that often times cannot be directly measured quantitatively, are in the form of text annotations, and these descriptions are plagued by semantic ambiguity, heterogeneity, and low granularity. Though several bio-ontologies have been developed to standardize phenotypic (and genotypic) information and permit comparisons across species, these semantic issues persist and prevent precise analysis and retrieval of information. A framework suitable for the modeling and analysis of precise computable representations of such phenotypic appearances is needed. Results We have developed a new framework called the Computable Visually Observed Phenotype Ontological Framework for plants. This work provides a novel quantitative view of descriptions of plant phenotypes that leverages existing bio-ontologies and utilizes a computational approach to capture and represent domain knowledge in a machine-interpretable form. This is accomplished by means of a robust and accurate semantic mapping module that automatically maps high-level semantics to low-level measurements computed from phenotype imagery. The framework was applied to two different plant species with semantic rules mined and an ontology constructed. Rule quality was evaluated and showed high quality rules for most semantics. This framework also facilitates automatic annotation of phenotype images and can be adopted by different plant communities to aid in their research. Conclusions The Computable Visually Observed Phenotype Ontological Framework for plants has been developed for more efficient and accurate management of visually observed phenotypes, which play a significant role in plant genomics research. The uniqueness of this framework is its ability to bridge the knowledge of informaticians and plant science researchers by translating descriptions of visually observed phenotypes into standardized, machine-understandable representations, thus enabling the development of advanced information retrieval and phenotype annotation analysis tools for the plant science community. PMID:21702966
Generating descriptive visual words and visual phrases for large-scale image applications.
Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen
2011-09-01
Bag-of-visual Words (BoWs) representation has been applied for various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to the text words. Notwithstanding its great success and wide adoption, visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to the frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed by the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive to certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with the text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm: DWPRank outperforms the state-of-the-art algorithm by 12.4% in mean average precision and about 11 times faster in efficiency.
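For illustration of the core construction described above, the sketch below quantizes local descriptors into visual words with k-means and counts co-occurring word pairs as visual-phrase candidates; it simplifies co-occurrence to "appearing in the same image", whereas the paper identifies descriptive words and phrases statistically across many categories, so treat the names and thresholds as assumptions.

```python
# Build a visual vocabulary by k-means over local descriptors, then represent
# an image by its visual-word histogram and a histogram of word-pair
# (phrase-candidate) co-occurrences.
from collections import Counter
from itertools import combinations
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, n_words=1000, random_state=0):
    return KMeans(n_clusters=n_words, random_state=random_state).fit(np.asarray(all_descriptors))

def words_and_phrase_candidates(vocab, image_descriptors):
    words = vocab.predict(np.asarray(image_descriptors))
    word_hist = Counter(words)
    # simplified: every unordered pair of distinct words present in the image
    phrase_hist = Counter(frozenset(p) for p in combinations(sorted(set(words)), 2))
    return word_hist, phrase_hist
```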
Efficient graph-cut tattoo segmentation
NASA Astrophysics Data System (ADS)
Kim, Joonsoo; Parra, Albert; Li, He; Delp, Edward J.
2015-03-01
Law enforcement is interested in exploiting tattoos as an information source to identify, track and prevent gang-related crimes. Many tattoo image retrieval systems have been described. In a retrieval system tattoo segmentation is an important step for retrieval accuracy since segmentation removes background information in a tattoo image. Existing segmentation methods do not extract the tattoo very well when the background includes textures and color similar to skin tones. In this paper we describe a tattoo segmentation approach by determining skin pixels in regions near the tattoo. In these regions graph-cut segmentation using a skin color model and a visual saliency map is used to find skin pixels. After segmentation we determine which set of skin pixels are connected with each other that form a closed contour including a tattoo. The regions surrounded by the closed contours are considered tattoo regions. Our method segments tattoos well when the background includes textures and color similar to skin.
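A rough, hedged illustration of the general approach (skin-color cues seeding a graph cut), using OpenCV's grabCut as the graph-cut engine; the paper's actual skin color model, saliency map, and contour-closing step are not reproduced, and the YCrCb thresholds below are common heuristics rather than values from the paper.

```python
# Mark likely skin pixels with a crude YCrCb rule, seed a grabCut mask with
# them as probable background, and keep the remaining foreground as the
# tattoo candidate.  Assumes the image actually contains some detected skin.
import cv2
import numpy as np

def segment_tattoo(bgr_image, iterations=5):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # heuristic skin-tone range

    mask = np.full(bgr_image.shape[:2], cv2.GC_PR_FGD, np.uint8)
    mask[skin > 0] = cv2.GC_PR_BGD        # skin pixels are probable background
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr_image, mask, None, bgd, fgd, iterations, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
```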
Bag-of-visual-ngrams for histopathology image classification
NASA Astrophysics Data System (ADS)
López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. This representation is one of the most widely used approaches in several high-level computer vision tasks. However, the BoVW representation has an important limitation: it disregards spatial information among visual words. This information may be useful for capturing discriminative visual patterns in specific computer vision tasks. In order to overcome this problem, we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal on the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP, although, to the best of our knowledge, this idea has not yet been explored within computer vision. We report experimental results on a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
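The following Python sketch shows one simple way to turn quantized local descriptors into a histogram of visual 1-grams and 2-grams, roughly in the spirit of the abstract above. The left-to-right ordering of keypoints and the codebook size are assumptions for illustration, not the authors' method.

```python
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

def visual_ngram_histogram(descriptors, positions, codebook):
    """Represent one image by counts of visual words (1-grams) and
    visual bigrams (2-grams) formed from spatially ordered words.

    `descriptors` are the image's local features, `positions` their
    (x, y) keypoint locations, `codebook` a fitted KMeans vocabulary.
    """
    words = codebook.predict(descriptors)
    sequence = words[np.argsort(positions[:, 0])]      # order by x coordinate
    counts = Counter((w,) for w in sequence)           # 1-grams
    counts.update(zip(sequence[:-1], sequence[1:]))    # 2-grams
    return counts

# Building the visual vocabulary from a pool of training descriptors:
# codebook = KMeans(n_clusters=500, n_init=4).fit(train_descriptors)
```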
The Quantum Binding Problem in the Context of Associative Memory
Wichert, Andreas
2016-01-01
We present a method to solve the binding problem by using a quantum algorithm for the retrieval of associations from associative memory during visual scene analysis. The problem is solved by mapping the information representing different objects into superposition by using entanglement and Grover’s amplification algorithm. PMID:27603782
Using Computers To Accommodate Learning Disabled Students in Mathematics Classes.
ERIC Educational Resources Information Center
Rapp, Rhonda H.; Gittinger, Dennis J.
A person with a learning disability usually has average or above average intelligence, but has difficulty taking in, remembering, or expressing information. Learning disabilities can involve visual processing speed, short-term memory processing, fluid reasoning, and long-term memory retrieval. These disorders are intrinsic to the individual and…
Modelling Subjectivity in Visual Perception of Orientation for Image Retrieval.
ERIC Educational Resources Information Center
Sanchez, D.; Chamorro-Martinez, J.; Vila, M. A.
2003-01-01
Discussion of multimedia libraries and the need for storage, indexing, and retrieval techniques focuses on the combination of computer vision and data mining techniques to model high-level concepts for image retrieval based on perceptual features of the human visual system. Uses fuzzy set theory to measure users' assessments and to capture users'…
TRECVID: the utility of a content-based video retrieval evaluation
NASA Astrophysics Data System (ADS)
Hauptmann, Alexander G.
2006-01-01
TRECVID, an annual retrieval evaluation benchmark organized by NIST, encourages research in information retrieval from digital video. TRECVID benchmarking covers both interactive and manual searching by end users, as well as the benchmarking of some supporting technologies, including shot boundary detection, extraction of semantic features, and the automatic segmentation of TV news broadcasts. Evaluations done in the context of the TRECVID benchmarks show that, generally, speech transcripts and annotations provide the single most important clue for successful retrieval. However, automatically finding the individual images is still a tremendous and unsolved challenge. The evaluations repeatedly found that none of the multimedia analysis and retrieval techniques provide a significant benefit over retrieval using only textual information such as automatic speech recognition transcripts or closed captions. In interactive systems, we do find significant differences among the top systems, indicating that interfaces can make a huge difference for effective video/image search. For interactive tasks, efficient interfaces require few key clicks but display large numbers of images for visual inspection by the user. The text search generally finds the right context region in the video, but to select specific relevant images we need good interfaces to easily browse the storyboard pictures. In general, TRECVID has motivated the video retrieval community to be honest about what we don't know how to do well (sometimes through painful failures), and has focused us on the actual task of video retrieval, as opposed to flashy demos based on technological capabilities.
Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension
Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.
2016-01-01
The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility for studying retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend the findings of Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors that were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974
van Schie, Hein T; Wijers, Albertus A; Mars, Rogier B; Benjamins, Jeroen S; Stowe, Laurie A
2005-05-01
Event-related brain potentials were used to study the retrieval of visual semantic information for concrete words, and to investigate possible structural overlap between visual object working memory and concreteness effects in word processing. Subjects performed an object working memory task that involved 5 s retention of simple 4-angled polygons (load 1), complex 10-angled polygons (load 2), and a no-load baseline condition. During the polygon retention interval, subjects were presented with a lexical decision task on auditorily presented concrete (imageable) and abstract (nonimageable) words, and pseudowords. ERP results are consistent with the use of object working memory for the visualisation of concrete words. Our data indicate a two-step processing model of visual semantics in which visual descriptive information of concrete words is first encoded in semantic memory (indicated by an anterior N400 and posterior occipital positivity), and is subsequently visualised via the network for object working memory (reflected by a left frontal positive slow wave and a bilateral occipital slow wave negativity). Results are discussed in the light of contemporary models of semantic memory.
Natural language processing-based COTS software and related technologies survey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stickland, Michael G.; Conrad, Gregory N.; Eaton, Shelley M.
Natural language processing-based knowledge management software, traditionally developed for security organizations, is now becoming commercially available. An informal survey was conducted to discover and examine current NLP and related technologies and potential applications for information retrieval, information extraction, summarization, categorization, terminology management, link analysis, and visualization for possible implementation at Sandia National Laboratories. This report documents our current understanding of the technologies, lists software vendors and their products, and identifies potential applications of these technologies.
Measuring and Predicting Tag Importance for Image Retrieval.
Li, Shangwen; Purushotham, Sanjay; Chen, Chen; Ren, Yuzhuo; Kuo, C-C Jay
2017-12-01
Textual data such as tags and sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between visual and textual modalities during MIR training. This in turn leads to degraded retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model that jointly exploits visual, semantic and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, Canonical Correlation Analysis (CCA) is employed to learn the relation between the image visual features and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems.
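The CCA step mentioned above is straightforward to prototype. The sketch below (function names, feature choices and dimensionalities are assumptions; it is not the authors' system) learns a shared subspace between visual features and per-image tag-importance vectors and then ranks a database by cosine similarity in that subspace.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def fit_visual_tag_cca(visual_feats, tag_importance, n_components=32):
    """Learn a shared subspace between image visual features and
    per-image tag-importance vectors (one weight per vocabulary tag)."""
    return CCA(n_components=n_components).fit(visual_feats, tag_importance)

def rank_database(cca, query_visual, database_visual, top_k=10):
    """Rank database images by cosine similarity to the query after
    projecting visual features into the learned CCA subspace."""
    q = cca.transform(query_visual.reshape(1, -1)).ravel()
    d = cca.transform(database_visual)
    q = q / np.linalg.norm(q)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    return np.argsort(-(d @ q))[:top_k]
```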
Applying the metro map to software development management
NASA Astrophysics Data System (ADS)
Aguirregoitia, Amaia; Dolado, J. Javier; Presedo, Concepción
2010-01-01
This paper presents MetroMap, a new graphical representation model for controlling and managing the software development process. MetroMap uses metaphors and visual representation techniques to explore several key indicators in order to support problem detection and resolution. The resulting visualization addresses diverse management tasks, such as tracking of deviations from the plan, analysis of patterns of failure detection and correction, overall assessment of change management policies, and estimation of product quality. The proposed visualization uses a metro map metaphor along with various interactive techniques to represent information concerning the software development process and to deal efficiently with multivariate visual queries. Finally, the paper describes the implementation of the tool in JavaFX with data from a real project, and the results of testing the tool with the aforementioned data and users attempting several information retrieval tasks. The conclusion presents the results of analyzing user response time and efficiency using the MetroMap visualization system. The utility of the tool was positively evaluated.
ERIC Educational Resources Information Center
Stock, Oliver; Roder, Brigitte; Burke, Michael; Bien, Siegfried; Rosler, Frank
2009-01-01
The present study used functional magnetic resonance imaging to delineate cortical networks that are activated when objects or spatial locations encoded either visually (visual encoding group, n = 10) or haptically (haptic encoding group, n = 10) had to be retrieved from long-term memory. Participants learned associations between auditorily…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourassi, Georgia D.; Harrawood, Brian; Singh, Swatee
The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining a high detection rate for malignant masses.
Landmark Image Retrieval by Jointing Feature Refinement and Multimodal Classifier Learning.
Zhang, Xiaoming; Wang, Senzhang; Li, Zhoujun; Ma, Shuai
2018-06-01
Landmark retrieval is to return a set of images whose landmarks are similar to those of the query images. Existing studies on landmark retrieval focus on exploiting the geometries of landmarks for visual similarity matches. However, the visual content of social images is highly diverse within many landmarks, and some images share common patterns across different landmarks. On the other hand, it has been observed that social images usually contain multimodal contents, i.e., visual content and text tags, and each landmark has unique characteristics in both its visual content and its text content. Therefore, approaches based on similarity matching may not be effective in this environment. In this paper, we investigate whether the geographical correlation among the visual content and the text content can be exploited for landmark retrieval. In particular, we propose an effective multimodal landmark classification paradigm that leverages the multimodal contents of social images for landmark retrieval, integrating feature refinement and a landmark classifier with multimodal contents in a joint model. The geo-tagged images are automatically labeled for classifier learning. Visual features are refined based on low-rank matrix recovery, and a multimodal classifier combined with group sparsity is learned from the automatically labeled images. Finally, candidate images are ranked by combining the classification result and a measure of semantic consistency between the visual content and the text content. Experiments on real-world datasets demonstrate the superiority of the proposed approach as compared to existing methods.
Guided Iterative Substructure Search (GI-SSS) - A New Trick for an Old Dog.
Weskamp, Nils
2016-07-01
Substructure search (SSS) is a fundamental technique supported by various chemical information systems. Many users apply it in an iterative manner: they modify their queries to shape the composition of the retrieved hit sets according to their needs. We propose and evaluate two heuristic extensions of SSS aimed at simplifying these iterative query modifications by collecting additional information during query processing and visualizing this information in an intuitive way. This gives the user convenient feedback on how certain changes to the query would affect the retrieved hit set and reduces the number of trial-and-error cycles needed to generate an optimal search result. The proposed heuristics are simple, yet surprisingly effective, and can easily be added to existing SSS implementations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
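For readers unfamiliar with the underlying operation, the basic (non-guided) SSS retrieval step can be sketched with RDKit as below; the guided extensions described in the abstract, which collect extra information during query processing to suggest refinements, are not implemented here, and the function name is illustrative.

```python
from rdkit import Chem

def substructure_hits(smiles_list, query_smarts):
    """Return the molecules (as SMILES) matching a SMARTS substructure query."""
    query = Chem.MolFromSmarts(query_smarts)
    hits = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is not None and mol.HasSubstructMatch(query):
            hits.append(smi)
    return hits

# Iterative use: inspect the hit set, tighten or relax query_smarts
# (e.g. add or drop a substituent constraint), and re-run until the
# retrieved set has the desired composition.
```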
Visual perception and imagery: a new molecular hypothesis.
Bókkon, I
2009-05-01
Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather a very strict mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representations) in retinotopically organized cytochrome oxidase-rich visual areas during visual imagery and visual perception. Long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes. This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible by regulated redox and bioluminescent reactions in the visual system during visual perception and visual imagery.
Nonspecific verbal cues alleviate forgetting by young children.
Morgan, Kirstie; Hayne, Harlene
2007-11-01
Verbal reminders play a pervasive role in memory retrieval by human adults. In fact, relatively nonspecific verbal information (e.g. 'Remember the last time we ate at that restaurant?') will often cue vivid recollections of a past event even when presented outside the original encoding context. Although research has shown that memory retrieval by young children can be initiated by physical cues and by highly specific verbal cues, the effect of less specific verbal cues is not known. Using a Visual Recognition Memory (VRM) procedure, we examined the effect of nonspecific verbal cues on memory retrieval by 4-year-old children. Our findings showed that nonspecific verbal cues were as effective as highly specific nonverbal cues in facilitating memory retrieval after a 2-week delay. We conclude that, at least by 4 years of age, children are able to use nonspecific verbal reminders to cue memory retrieval, and that the VRM paradigm may be particularly valuable in examining the age at which this initially occurs.
van Lamsweerde, Amanda E; Beck, Melissa R; Elliott, Emily M
2015-02-01
The ability to remember feature bindings is an important measure of the ability to maintain objects in working memory (WM). In this study, we investigated whether both object- and feature-based representations are maintained in WM. Specifically, we tested the hypotheses that retaining a greater number of feature representations (i.e., both as individual features and bound representations) results in a more robust representation of individual features than of feature bindings, and that retrieving information from long-term memory (LTM) into WM would cause a greater disruption to feature bindings. In four experiments, we examined the effects of retrieving a word from LTM on shape and color-shape binding change detection performance. We found that binding changes were more difficult to detect than individual-feature changes overall, but that the cost of retrieving a word from LTM was the same for both individual-feature and binding changes.
Dictionary Pruning with Visual Word Significance for Medical Image Retrieval
Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G.; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei
2016-01-01
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency. PMID:27688597
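The topic-word significance idea in the PD-LST abstract above can be prototyped with an off-the-shelf topic model. The sketch below is a simplified stand-in (LDA via scikit-learn, with a max-over-topics aggregate instead of the paper's iterative overall-word ranking); names and parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

def rank_visual_words(bow_counts, n_topics=50):
    """Rank visual words by how strongly they are tied to latent topics.

    `bow_counts` is an images-by-visual-words count matrix. LDA yields a
    topic-word matrix; normalizing it per topic gives a topic-word
    significance, and taking the maximum over topics gives a crude
    overall-word significance used to order the dictionary.
    """
    lda = LatentDirichletAllocation(n_components=n_topics).fit(bow_counts)
    topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
    overall = topic_word.max(axis=0)
    return np.argsort(-overall)   # word indices, most significant first

# Dictionary pruning: keep only the top-ranked words and rebuild the
# BoVW histograms over the pruned vocabulary before retrieval.
```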
CyanoBase: the cyanobacteria genome database update 2010
Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu
2010-01-01
CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly. PMID:19880388
Mobile object retrieval in server-based image databases
NASA Astrophysics Data System (ADS)
Manger, D.; Pagel, F.; Widak, H.
2013-05-01
The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information in these images on site, image retrieval systems that search for similar objects in a user's own image database are becoming more and more popular. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model and state-of-the-art extensions. The client end of the system focuses on a lightweight user interface that presents the most similar images of the database and highlights the visual information they have in common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
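Server-side bag-of-visual-words retrieval of the kind described above usually rests on an inverted index with tf-idf weighting. The following minimal sketch is a generic illustration under that assumption, not this system's implementation; it shows the core data structure and scoring loop.

```python
import math
from collections import Counter, defaultdict

class InvertedIndex:
    """Minimal bag-of-visual-words index; real backends add weighting
    variants, spatial verification and other extensions."""

    def __init__(self):
        self.postings = defaultdict(dict)   # word -> {image_id: term frequency}
        self.n_images = 0

    def add(self, image_id, words):
        for word, freq in Counter(words).items():
            self.postings[word][image_id] = freq
        self.n_images += 1

    def query(self, words, top_k=10):
        scores = defaultdict(float)
        for word, qfreq in Counter(words).items():
            posting = self.postings.get(word, {})
            if not posting:
                continue
            idf = math.log(self.n_images / len(posting))
            for image_id, freq in posting.items():
                scores[image_id] += qfreq * freq * idf * idf
        return sorted(scores, key=scores.get, reverse=True)[:top_k]
```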
Retrieving spin textures on curved magnetic thin films with full-field soft X-ray microscopies
Streubel, Robert; Kronast, Florian; Fischer, Peter; ...
2015-07-03
X-ray tomography is a well-established technique to characterize 3D structures in material sciences and biology; its magnetic analogue, magnetic X-ray tomography, is yet to be developed. We demonstrate the visualization and reconstruction of magnetic domain structures in 3D curved magnetic thin films with tubular shape by means of full-field soft X-ray microscopies. The 3D arrangement of the magnetization is retrieved from a set of 2D projections by analysing the evolution of the magnetic contrast with varying projection angle. Using reconstruction algorithms to analyse the angular evolution of the 2D projections provides quantitative information about domain patterns and magnetic coupling phenomena between windings of azimuthally and radially magnetized tubular objects. In conclusion, the present approach represents a first milestone towards visualizing magnetization textures of 3D curved thin films with virtually arbitrary shape.
Compression-based integral curve data reuse framework for flow visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Fan; Bi, Chongke; Guo, Hanqi
Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives: high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reuse framework can achieve tens of times acceleration in a resource-limited environment compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method can provide fast integral curve retrieval for more complex data, such as unstructured mesh data.
Medical Image Retrieval: A Multimodal Approach
Cao, Yu; Steffey, Shawn; He, Jianbiao; Xiao, Degui; Tao, Cui; Chen, Ping; Müller, Henning
2014-01-01
Medical imaging is becoming a vital component of the war on cancer. Tremendous amounts of medical image data are captured and recorded in digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct application of existing CBIR techniques to medical images has produced unsatisfactory results, because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on recent advances in statistical graphical models and deep learning. Specifically, we first investigate a new extended probabilistic Latent Semantic Analysis model to integrate the visual and textual information from medical images and bridge the semantic gap. We then develop a new deep Boltzmann machine-based multimodal learning model to learn the joint density model from multimodal information in order to derive the missing modality. Experimental results with a large volume of real-world medical images have shown that our new approach is a promising solution for the next-generation medical imaging indexing and retrieval system. PMID:26309389
Towards brain-activity-controlled information retrieval: Decoding image relevance from MEG signals.
Kauppi, Jukka-Pekka; Kandemir, Melih; Saarinen, Veli-Matti; Hirvenkari, Lotta; Parkkonen, Lauri; Klami, Arto; Hari, Riitta; Kaski, Samuel
2015-05-15
We hypothesize that brain activity can be used to control future information retrieval systems. To this end, we conducted a feasibility study on predicting the relevance of visual objects from brain activity. We analyze both magnetoencephalographic (MEG) and gaze signals from nine subjects who were viewing image collages, a subset of which was relevant to a predetermined task. We report three findings: i) the relevance of an image a subject looks at can be decoded from MEG signals with performance significantly better than chance, ii) fusion of gaze-based and MEG-based classifiers significantly improves the prediction performance compared to using either signal alone, and iii) non-linear classification of the MEG signals using Gaussian process classifiers outperforms linear classification. These findings break new ground for building brain-activity-based interactive image retrieval systems, as well as for systems utilizing feedback both from brain activity and eye movements. Copyright © 2015 Elsevier Inc. All rights reserved.
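A simple way to prototype the classifier fusion reported in the preceding abstract is late fusion of per-modality classifiers. The sketch below is a hedged illustration: the feature extraction, the gaze classifier choice and the averaging rule are assumptions, while the use of Gaussian process classification for MEG follows the abstract.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.linear_model import LogisticRegression

def fused_relevance_probability(meg_train, gaze_train, y_train,
                                meg_test, gaze_test):
    """Predict image relevance (binary labels) from MEG and gaze features
    and fuse the two classifiers by averaging predicted probabilities."""
    meg_clf = GaussianProcessClassifier().fit(meg_train, y_train)
    gaze_clf = LogisticRegression(max_iter=1000).fit(gaze_train, y_train)
    p_meg = meg_clf.predict_proba(meg_test)[:, 1]
    p_gaze = gaze_clf.predict_proba(gaze_test)[:, 1]
    return (p_meg + p_gaze) / 2.0
```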
Semantic extraction and processing of medical records for patient-oriented visual index
NASA Astrophysics Data System (ADS)
Zheng, Weilin; Dong, Wenjie; Chen, Xiangjiao; Zhang, Jianguo
2012-02-01
To obtain a comprehensive and complete understanding of a patient's healthcare status, doctors need to search the patient's medical records from different healthcare information systems, such as PACS, RIS, HIS and USIS, as a reference for diagnosis and treatment decisions. However, these procedures are time-consuming and tedious. To address this problem, we developed a patient-oriented visual index system (VIS) that uses visual technology to show health status and to retrieve the patient's examination information stored in each system via a 3D human model. In this presentation, we present a new approach for extracting semantic and characteristic information from medical record systems such as RIS/USIS to create the 3D visual index. This approach includes the following steps: (1) building a medical characteristic semantic knowledge base; (2) developing a natural language processing (NLP) engine to perform semantic analysis and logical judgment on text-based medical records; (3) applying the knowledge base and NLP engine to medical records to extract medical characteristics (e.g., positive focus information), and then mapping the extracted information to the related organs/parts of the 3D human model to create the visual index. We performed testing on 559 radiological reports containing 853 focuses and correctly extracted information for 828 of them, a focus extraction success rate of about 97.1%.
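As a toy illustration of step (3) above, the sketch below maps positive findings in a free-text report to regions of a 3D model using a small keyword knowledge base and a crude negation check. The terms, region labels and function name are hypothetical; the actual system uses a full semantic knowledge base and NLP engine.

```python
import re

# Illustrative characteristic knowledge base: finding keywords -> model region.
KNOWLEDGE_BASE = {
    r"lung|pulmonary": "lungs",
    r"liver|hepatic": "liver",
    r"kidney|renal": "kidneys",
}
NEGATION = re.compile(r"\b(no|without|negative for)\b", re.IGNORECASE)

def extract_focus_locations(report_text):
    """Return model regions mentioned in non-negated report sentences."""
    regions = set()
    for sentence in re.split(r"[.;]\s*", report_text):
        if not sentence or NEGATION.search(sentence):
            continue
        for pattern, region in KNOWLEDGE_BASE.items():
            if re.search(pattern, sentence, re.IGNORECASE):
                regions.add(region)
    return regions

# extract_focus_locations("Nodule in the right lung. No hepatic lesion.")
# -> {'lungs'}
```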
Griffon, Nicolas; Kerdelhué, Gaétan; Hamek, Saliha; Hassler, Sylvain; Boog, César; Lamy, Jean-Baptiste; Duclos, Catherine; Venot, Alain; Darmoni, Stéfan J
2014-10-01
Doc'CISMeF (DC) is a semantic search engine used to find resources in CISMeF-BP, a quality controlled health gateway, which gathers guidelines available on the internet in French. Visualization of Concepts in Medicine (VCM) is an iconic language that may ease information retrieval tasks. This study aimed to describe the creation and evaluation of an interface integrating VCM in DC in order to make this search engine much easier to use. Focus groups were organized to suggest ways to enhance information retrieval tasks using VCM in DC. A VCM interface was created and improved using the ergonomic evaluation approach. 20 physicians were recruited to compare the VCM interface with the non-VCM one. Each evaluator answered two different clinical scenarios in each interface. The ability and time taken to select a relevant resource were recorded and compared. A usability analysis was performed using the System Usability Scale (SUS). The VCM interface contains a filter based on icons, and icons describing each resource according to focus group recommendations. Some ergonomic issues were resolved before evaluation. Use of VCM significantly increased the success of information retrieval tasks (OR=11; 95% CI 1.4 to 507). Nonetheless, it took significantly more time to find a relevant resource with VCM interface (101 vs 65 s; p=0.02). SUS revealed 'good' usability with an average score of 74/100. VCM was successfully implemented in DC as an option. It increased the success rate of information retrieval tasks, despite requiring slightly more time, and was well accepted by end-users. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Hyperspectral remote sensing image retrieval system using spectral and texture features.
Zhang, Jing; Geng, Wenhao; Liang, Xi; Li, Jiafeng; Zhuo, Li; Zhou, Qianlan
2017-06-01
Although many content-based image retrieval systems have been developed, few studies have focused on hyperspectral remote sensing images. In this paper, a hyperspectral remote sensing image retrieval system based on spectral and texture features is proposed. The main contributions are fourfold: (1) considering the "mixed pixel" problem in hyperspectral images, endmembers are extracted as spectral features by an improved automatic pixel purity index algorithm, and texture features are extracted with the gray-level co-occurrence matrix; (2) a similarity measurement is designed for the hyperspectral remote sensing image retrieval system, in which the similarity of spectral features is measured with a mixed measurement of spectral information divergence and spectral angle match, and the similarity of textural features is measured with Euclidean distance; (3) considering the limited ability of the human visual system, the retrieval results are returned after synthesizing true-color images based on the hyperspectral image characteristics; (4) the retrieval results are optimized by adjusting the feature weights of the similarity measurements according to the user's relevance feedback. Experimental results on NASA data sets show that our system achieves retrieval performance comparable or superior to existing hyperspectral analysis schemes.
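The two spectral similarity measures named in contribution (2) have simple closed forms. The sketch below gives one common formulation in NumPy, including a widely used way of mixing them (SID scaled by the tangent of SAM); the paper's exact mixed measurement may differ.

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle match (SAM) between two spectra, in radians."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def spectral_information_divergence(x, y, eps=1e-12):
    """Spectral information divergence (SID): symmetric KL divergence
    between band-normalized spectra."""
    p = x / (x.sum() + eps) + eps
    q = y / (y.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def sid_sam_distance(x, y):
    """A common mixed measure: SID scaled by tan(SAM)."""
    return spectral_information_divergence(x, y) * np.tan(spectral_angle(x, y))
```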
Huala, Eva; Dickerman, Allan W.; Garcia-Hernandez, Margarita; Weems, Danforth; Reiser, Leonore; LaFond, Frank; Hanley, David; Kiphart, Donald; Zhuang, Mingzhe; Huang, Wen; Mueller, Lukas A.; Bhattacharyya, Debika; Bhaya, Devaki; Sobral, Bruno W.; Beavis, William; Meinke, David W.; Town, Christopher D.; Somerville, Chris; Rhee, Seung Yon
2001-01-01
Arabidopsis thaliana, a small annual plant belonging to the mustard family, is the subject of study by an estimated 7000 researchers around the world. In addition to the large body of genetic, physiological and biochemical data gathered for this plant, it will be the first higher plant genome to be completely sequenced, with completion expected at the end of the year 2000. The sequencing effort has been coordinated by an international collaboration, the Arabidopsis Genome Initiative (AGI). The rationale for intensive investigation of Arabidopsis is that it is an excellent model for higher plants. In order to maximize use of the knowledge gained about this plant, there is a need for a comprehensive database and information retrieval and analysis system that will provide user-friendly access to Arabidopsis information. This paper describes the initial steps we have taken toward realizing these goals in a project called The Arabidopsis Information Resource (TAIR) (www.arabidopsis.org). PMID:11125061
NASA Astrophysics Data System (ADS)
Chu, C.; Sun-Mack, S.; Chen, Y.; Heckert, E.; Doelling, D. R.
2017-12-01
At NASA Langley, Clouds and the Earth's Radiant Energy System (CERES) and Moderate Resolution Imaging Spectroradiometer (MODIS) data are merged with data from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite and from the CloudSat Cloud Profiling Radar (CPR). The CERES merged product (C3M) matches up to three CALIPSO footprints with each MODIS pixel along its ground track. It then assigns the nearest CloudSat footprint to each of those MODIS pixels. The cloud properties from MODIS, retrieved using the CERES algorithms, are included in C3M with the matched CALIPSO and CloudSat products along with radiances from 18 MODIS channels. The dataset is used to validate the CERES-retrieved MODIS cloud properties and the TOA and surface flux differences computed using MODIS or CALIOP/CloudSat retrieved clouds. This information is then used to tune the computed fluxes to match the CERES observed TOA flux. A visualization tool will be invaluable for determining the cause of these large cloud and flux differences in order to improve the methodology. This effort is part of a larger effort to allow users to order the CERES C3M product subsetted by time and parameter, as well as to provide the previously mentioned visualization capabilities. This presentation will show a new graphical 3D interface, 3D-CERESVis, that allows users to view both passive remote sensing satellites (MODIS and CERES) and active satellites (CALIPSO and CloudSat), such that the detailed vertical structures of cloud properties from CALIPSO and CloudSat are displayed side by side with horizontally retrieved cloud properties from MODIS and CERES. Similarly, the CERES computed profile fluxes, whether using MODIS or CALIPSO and CloudSat clouds, can also be compared. 3D-CERESVis is a browser-based visualization tool that makes use of techniques such as multiple synchronized cursors, COLLADA-format data and Cesium.
Data systems and computer science programs: Overview
NASA Technical Reports Server (NTRS)
Smith, Paul H.; Hunter, Paul
1991-01-01
An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.
Lange, G; Waked, W; Kirshblum, S; DeLuca, J
2000-01-01
Objective: To examine how organizational strategy at encoding influences visual memory performance in stroke patients. Design: Case control study. Setting: Postacute rehabilitation hospital. Participants: Stroke patients with right hemisphere damage (n = 20) versus left hemisphere damage (n = 15), and stroke patients with cortical damage (n = 11) versus subcortical damage (n = 19). Main outcome measures: Organizational strategy scores, recall performance on the Rey-Osterrieth Complex Figure (ROCF). Results demonstrated significantly greater organizational impairment and less accurate copy performance (i.e., encoding of visuospatial information on the ROCF) in the right compared to the left hemisphere group, and in the cortical relative to the subcortical group. Organizational strategy and copy accuracy scores were significantly related to each other. The absolute amount of immediate and delayed recall was significantly associated with poor organizational strategy scores. However, relative to the amount of visual information originally encoded, memory performances did not differ between groups. These findings suggest that visual memory impairments after stroke may be caused by a lack of organizational strategy affecting information encoding, rather than an impairment in memory storage or retrieval.
Conjunctive patches subspace learning with side information for collaborative image retrieval.
Zhang, Lining; Wang, Lipo; Lin, Weisi
2012-08-01
Content-Based Image Retrieval (CBIR) has attracted substantial attention during the past few years for its potential practical applications to image management. A variety of Relevance Feedback (RF) schemes have been designed to bridge the semantic gap between the low-level visual features and the high-level semantic concepts for an image retrieval task. Various Collaborative Image Retrieval (CIR) schemes aim to utilize the user historical feedback log data with similar and dissimilar pairwise constraints to improve the performance of a CBIR system. However, existing subspace learning approaches with explicit label information cannot be applied to a CIR task, although subspace learning techniques play a key role in various computer vision tasks, e.g., face recognition and image classification. In this paper, we propose a novel subspace learning framework, i.e., Conjunctive Patches Subspace Learning (CPSL) with side information, for learning an effective semantic subspace by exploiting the user historical feedback log data for a CIR task. The CPSL can effectively integrate the discriminative information of labeled log images, the geometrical information of labeled log images and the weakly similar information of unlabeled images together to learn a reliable subspace. We formally formulate this problem as a constrained optimization problem and then present a new subspace learning technique to exploit the user historical feedback log data. Extensive experiments on both synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of a CBIR system by exploiting the user historical feedback log data.
Recapitulation of Emotional Source Context during Memory Retrieval
Bowen, Holly J.; Kensinger, Elizabeth A.
2016-01-01
Recapitulation involves the reactivation of cognitive and neural encoding processes at retrieval. In the current study, we investigated the effects of emotional valence on recapitulation processes. Participants encoded neutral words presented on a background face or scene that was negative, positive or neutral. During retrieval, studied and novel neutral words were presented alone (i.e., without the scene or face) and participants were asked to make a remember, know or new judgment. Both the encoding and retrieval tasks were completed in the fMRI scanner. Conjunction analyses were used to reveal the overlap between encoding and retrieval processing. These results revealed that, compared to positive or neutral contexts, words that were recollected and previously encoded in a negative context showed greater encoding-to-retrieval overlap, including in the ventral visual stream and amygdala. Interestingly, the visual stream recapitulation was not enhanced within regions that specifically process faces or scenes but rather extended broadly throughout visual cortices. These findings elucidate how memories for negative events can feel more vivid or detailed than positive or neutral memories. PMID:27923474
Update on Genomic Databases and Resources at the National Center for Biotechnology Information.
Tatusova, Tatiana
2016-01-01
The National Center for Biotechnology Information (NCBI), as a primary public repository of genomic sequence data, collects and maintains enormous amounts of heterogeneous data. Data for genomes, genes, gene expression, gene variation, gene families, proteins, and protein domains are integrated with the analytical, search, and retrieval resources through the NCBI website. The text-based search and retrieval system provides a fast and easy way to navigate across diverse biological databases. Comparative genome analysis tools lead to further understanding of evolutionary processes, quickening the pace of discovery. Recent technological innovations have ignited an explosion in genome sequencing that has fundamentally changed our understanding of the biology of living organisms. This huge increase in DNA sequence data presents new challenges for information management systems and visualization tools. New strategies have been designed to bring order to this genome sequence shockwave and improve the usability of associated data.
Wee, Natalie; Asplund, Christopher L; Chee, Michael W L
2013-06-01
Visual short-term memory (VSTM) is an important measure of information processing capacity and supports many higher-order cognitive processes. We examined how sleep deprivation (SD) and maintenance duration interact to influence the number and precision of items in VSTM using an experimental design that limits the contribution of lapses at encoding. For each trial, participants attempted to maintain the location and color of three stimuli over a delay. After a retention interval of either 1 or 10 seconds, participants reported the color of the item at the cued location by selecting it on a color wheel. The probability of reporting the probed item, the precision of report, and the probability of reporting a nonprobed item were determined using a mixture-modeling analysis. Participants were studied twice in counterbalanced order, once after a night of normal sleep and once following a night of sleep deprivation. Setting: Sleep laboratory. Participants: Nineteen healthy college age volunteers (seven females) with regular sleep patterns. Intervention: Approximately 24 hours of total SD. Results: SD selectively reduced the number of integrated representations that can be retrieved after a delay, while leaving the precision of object information in the stored representations intact. Delay interacted with SD to lower the rate of successful recall. Conclusions: Visual short-term memory is compromised during sleep deprivation, an effect compounded by delay. However, when memories are retrieved, they tend to be intact.
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms prove to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually the code length shorter than 100 b) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate one bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve comparative performance as the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
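To make the short-binary-code idea concrete, here is a heavily simplified sketch: each feature dimension is binarized at its median and bits are ranked by a crude proxy cost (variance of the underlying dimension) before the top bits are kept. This illustrates the general approach, not MCR's actual cost function or ranking procedure.

```python
import numpy as np

def learn_short_codes(features, n_bits=64):
    """Binarize each dimension at its median and keep the top-ranked bits.
    Returns the codes plus the thresholds and selected dimensions needed
    to encode new queries the same way."""
    medians = np.median(features, axis=0)
    bits = (features > medians).astype(np.uint8)            # one bit per dimension
    order = np.argsort(-features.var(axis=0))[:n_bits]      # proxy for bit ranking
    return bits[:, order], medians, order

def encode(query, medians, order):
    """Encode a single query vector with the learned thresholds and bit order."""
    return (query > medians).astype(np.uint8)[order]

def hamming_rank(query_bits, database_bits, top_k=10):
    """Rank database items by Hamming distance to the query code."""
    dist = np.count_nonzero(database_bits != query_bits, axis=1)
    return np.argsort(dist)[:top_k]
```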
A new method for text detection and recognition in indoor scene for assisting blind people
NASA Astrophysics Data System (ADS)
Jabnoun, Hanen; Benzarti, Faouzi; Amiri, Hamid
2017-03-01
Developing assistive systems for handicapped persons has become a challenging task in research projects. Recently, a variety of tools have been designed to help visually impaired or blind people as visual substitution systems. The majority of these tools are based on the conversion of input information into auditory or tactile sensory information. Furthermore, object recognition and text retrieval are exploited in visual substitution systems. Text detection and recognition provide a description of the surrounding environment, so that the blind person can readily recognize the scene. In this work, we aim to introduce a method for detecting and recognizing text in indoor scenes. The process consists of detecting the regions of interest that should contain text using connected components. Then, the text detection is performed by employing image correlation. This component of an assistive system for blind people should be simple, so that users are able to obtain the most informative feedback within the shortest time.
Sneve, Markus H; Magnussen, Svein; Alnæs, Dag; Endestad, Tor; D'Esposito, Mark
2013-11-01
Visual STM of simple features is achieved through interactions between retinotopic visual cortex and a set of frontal and parietal regions. In the present fMRI study, we investigated effective connectivity between central nodes in this network during the different task epochs of a modified delayed orientation discrimination task. Our univariate analyses demonstrate that the inferior frontal junction (IFJ) is preferentially involved in memory encoding, whereas activity in the putative FEFs and anterior intraparietal sulcus (aIPS) remains elevated throughout periods of memory maintenance. We have earlier reported, using the same task, that areas in visual cortex sustain information about task-relevant stimulus properties during delay intervals [Sneve, M. H., Alnæs, D., Endestad, T., Greenlee, M. W., & Magnussen, S. Visual short-term memory: Activity supporting encoding and maintenance in retinotopic visual cortex. Neuroimage, 63, 166-178, 2012]. To elucidate the temporal dynamics of the IFJ-FEF-aIPS-visual cortex network during memory operations, we estimated Granger causality effects between these regions with fMRI data representing memory encoding/maintenance as well as during memory retrieval. We also investigated a set of control conditions involving active processing of stimuli not associated with a memory task and passive viewing. In line with the developing understanding of IFJ as a region critical for control processes with a possible initiating role in visual STM operations, we observed influence from IFJ to FEF and aIPS during memory encoding. Furthermore, FEF predicted activity in a set of higher-order visual areas during memory retrieval, a finding consistent with its suggested role in top-down biasing of sensory cortex.
A neotropical Miocene pollen database employing image-based search and semantic modeling
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren
2014-01-01
• Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648
Similarity, not complexity, determines visual working memory performance.
Jackson, Margaret C; Linden, David E J; Roberts, Mark V; Kriegeskorte, Nikolaus; Haenschel, Corinna
2015-11-01
A number of studies have shown that visual working memory (WM) is poorer for complex versus simple items, traditionally accounted for by higher information load placing greater demands on encoding and storage capacity limits. Other research suggests that it may not be complexity that determines WM performance per se, but rather increased perceptual similarity between complex items as a result of a large amount of overlapping information. Increased similarity is thought to lead to greater comparison errors between items encoded into WM and the test item(s) presented at retrieval. However, previous studies have used different object categories to manipulate complexity and similarity, raising questions as to whether these effects are simply due to cross-category differences. Here, for the first time, the relationship between complexity and similarity in WM is investigated using the same stimulus category (abstract polygons). The authors used a delayed discrimination task to measure WM for 1-4 complex versus simple simultaneously presented items and manipulated the similarity between the single test item at retrieval and the sample items at encoding. WM was poorer for complex than simple items only when the test item was similar to 1 of the encoding items, and not when it was dissimilar or identical. The results provide clear support for a reinterpretation of the complexity effect in WM as a similarity effect and highlight the importance of the retrieval stage in governing WM performance. The authors discuss how these findings can be reconciled with current models of WM capacity limits. (c) 2015 APA, all rights reserved.
Changing Zaire to Congo: the fate of no-longer relevant mnemonic information.
Eriksson, Johan; Stiernstedt, Mikael; Öhlund, Maria; Nyberg, Lars
2014-11-01
In an ever-changing world there is constant pressure to revise long-term memory, such as when people or countries change names. What happens to the old, pre-existing information? One possibility is that old associations are gradually weakened and eventually lost. Alternatively, old and no longer relevant information may still be an integral part of memory traces. To test the hypothesis that old mnemonic information still becomes activated when people correctly retrieve new, currently relevant information, brain activity was measured with fMRI while participants performed a cued-retrieval task. Paired associates (symbol-sound and symbol-face pairs) were first learned during two days. Half of the associations were then updated during the next two days, followed by fMRI scanning on day 5 and also 18 months later. As expected, retrieval reactivated sensory cortex related to the most recently learned association (visual cortex for symbol-face pairs, auditory cortex for symbol-sound pairs). Critically, retrieval also reactivated sensory cortex related to the no-longer relevant associate. Eighteen months later, only non-updated symbol-face associations were intact. Intriguingly, a subset of the updated associations was now treated as though the original association had taken over, in that memory performance was significantly worse than chance and that activity in sensory cortex for the original but not the updated associate correlated (negatively) with performance. Moreover, the degree of "residual" reactivation during day 5 inversely predicted memory performance 18 months later. Thus, updating of long-term memory involves adding new information to already existing networks, in which old information can stay resilient for a long time. Copyright © 2014. Published by Elsevier Inc.
Visualization techniques for computer network defense
NASA Astrophysics Data System (ADS)
Beaver, Justin M.; Steed, Chad A.; Patton, Robert M.; Cui, Xiaohui; Schultz, Matthew
2011-06-01
Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.
MPEG-7 audio-visual indexing test-bed for video retrieval
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian
2003-12-01
This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content such as face recognition, motion activity, speech recognition, and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, an end user will be able to request movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
Weafer, Jessica; Gallo, David A; de Wit, Harriet
2014-01-01
Stimulant drugs facilitate both encoding and retrieval of salient information in laboratory animals, but less is known about their effects on memory for emotionally salient visual images in humans. The current study investigated dextroamphetamine (AMP) effects on memory for emotional pictures in healthy humans, by administering the drug only at encoding, only at retrieval, or at both encoding and retrieval. During the encoding session, all participants viewed standardized positive, neutral, and negative pictures from the International Affective Picture System (IAPS). 48 hours later they attended a retrieval session testing their cued recollection of these stimuli. Participants were randomly assigned to one of four conditions (N=20 each): condition AP (20 mg AMP at encoding and placebo (PL) at retrieval); condition PA (PL at encoding and AMP at retrieval); condition AA (AMP at encoding and retrieval); or condition PP (PL at encoding and retrieval). Amphetamine produced its expected effects on physiological and subjective measures, and negative pictures were recollected more frequently than neutral pictures. However, contrary to hypotheses, AMP did not affect recollection for positive, negative, or neutral stimuli, whether it was administered at encoding, retrieval, or at both encoding and retrieval. Moreover, recollection accuracy was not state-dependent. Considered in light of other recent drug studies in humans, this study highlights the sensitivity of drug effects to memory testing conditions and suggests future strategies for translating preclinical findings to human behavioral laboratories.
ERIC Educational Resources Information Center
Mondini, Sara; Luzzatti, Claudio; Zonca, Giusy; Pistarini, Caterina; Semenza, Carlo
2004-01-01
This study seeks information on the mental representation of Verb-Noun (VN) nominal compounds through neuropsychological methods. The lexical retrieval of compound nouns is tested in 30 aphasic patients using a visual confrontation naming task. The target names are VN compounds, Noun-Noun (NN) compounds, and long morphologically simple nouns…
Automated Airdrop Information Retrieval System-Human Factors Database (AAIRS-HFD) (Users Manual)
1994-09-01
A selective deficit in imageable concepts: a window to the organization of the conceptual system
Gvion, Aviah; Friedmann, Naama
2013-01-01
Nissim, a 64 years old Hebrew-speaking man who sustained an ischemic infarct in the left occipital lobe, exhibited an intriguing pattern. He could hold a deep and fluent conversation about abstract and complex issues, such as the social risks in unemployment, but failed to retrieve imageable words such as ball, spoon, carrot, or giraffe. A detailed study of the words he could and could not retrieve, in tasks of picture naming, tactile naming, and naming to definition, indicated that whereas he was able to retrieve abstract words, he had severe difficulties when trying to retrieve imageable words. The same dissociation also applied for proper names—he could retrieve names of people who have no visual image attached to their representation (such as the son of the biblical Abraham), but could not name people who had a visual image (such as his own son, or Barack Obama). When he tried to produce imageable words, he mainly produced perseverations and empty speech, and some semantic paraphasias. He did not produce perseverations when he tried to retrieve abstract words. This suggests that perseverations may occur when the phonological production system produces a word without proper activation in the semantic lexicon. Nissim evinced a similar dissociation in comprehension—he could understand abstract words and sentences but failed to understand sentences with imageable words, and to match spoken imageable words to pictures or to semantically related imageable words. He was able to understand proverbs with imageable literal meaning but abstract figurative meaning. His comprehension was impaired also in tasks of semantic associations of pictures, pointing to a conceptual, rather than lexical source of the deficit. His visual perception as well as his phonological input and output lexicons and buffers (assessed by auditory lexical decision, word and sentence repetition, and writing to dictation) were intact, supporting a selective conceptual system impairment. He was able to retrieve gestures for objects and pictures he saw, indicating that his access to concepts often sufficed for the activation of the motoric information but did not suffice for access to the entry in the semantic lexicon. These results show that imageable concepts can be selectively impaired, and shed light on the organization of conceptual-semantic system. PMID:23785321
Visual Semantic Based 3D Video Retrieval System Using HDFS.
Kumar, C Ranjith; Suguna, S
2016-08-01
This paper presents a new frame of reference for visual-semantic-based 3D video search and retrieval applications. Recent 3D retrieval applications focus on shape analysis such as object matching, classification, and retrieval, rather than on video retrieval as a whole. In this context, we explore the concept of 3D content-based video retrieval (3D-CBVR) for the first time. For this purpose, we combine a bag-of-visual-words (BOVW) model with MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color, and texture for feature extraction, using a combination of geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extracting the local descriptors, a Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm is used to generate the visual codebook, and a histogram is produced. Matching is then performed using a soft weighting scheme with the L2 distance function. As a final step, retrieved results are ranked according to their index values and returned to the user as feedback. In order to handle large amounts of data and enable efficient retrieval, we have incorporated HDFS into our system. Using a 3D video dataset, we evaluate the performance of the proposed system; the results show that the proposed approach produces accurate results and also reduces time complexity.
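For illustration, a minimal bag-of-visual-words retrieval loop in the spirit of this abstract might look like the sketch below. It is not the authors' system: the TB-PCT codebook construction and the soft weighting scheme are replaced by plain k-means and hard assignment, and local descriptor extraction from the 3D videos is assumed to have been done already.

```python
# Sketch of BOVW codebook construction, histogram quantization, and L2 ranking.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, k=256, seed=0):
    """Cluster local descriptors (n_samples x dim) into a visual codebook."""
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(descriptors)

def bovw_histogram(descriptors, codebook):
    """Quantize one video's descriptors against the codebook and L2-normalize."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-9)

def rank_by_l2(query_hist, db_hists):
    """Rank database entries by L2 distance to the query histogram (ascending)."""
    dists = np.linalg.norm(db_hists - query_hist, axis=1)
    return np.argsort(dists)
```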
Wagner, Barry T; Jackson, Heather M
2006-02-01
This study examined the cognitive demands of 2 selection techniques in augmentative and alternative communication (AAC), direct selection, and visual linear scanning, by determining the memory retrieval abilities of typically developing children when presented with fixed communication displays. One hundred twenty typical children from kindergarten, 1st, and 3rd grades were randomly assigned to either a direct selection or visual linear scanning group. Memory retrieval was assessed through word span using Picture Communication Symbols (PCSs). Participants were presented various numbers and arrays of PCSs and asked to retrieve them by placing identical graphic symbols on fixed communication displays with grid layouts. The results revealed that participants were able to retrieve more PCSs during direct selection than scanning. Additionally, 3rd-grade children retrieved more PCSs than kindergarten and 1st-grade children. An analysis on the type of errors during retrieval indicated that children were more successful at retrieving the correct PCSs than the designated location of those symbols on fixed communication displays. AAC practitioners should consider using direct selection over scanning whenever possible and account for anticipatory monitoring and pulses when scanning is used in the service delivery of children with little or no functional speech. Also, researchers should continue to investigate AAC selection techniques in relationship to working memory resources.
Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen
2015-01-01
The utility of web browsers for general purpose computing, long anticipated, is only now coming to fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals, organizes and presents information in a modern feed-like interface, provides access to a growing library of plugins that process these data (typically on a connected High Performance Compute cluster), allows for easy data sharing between users and instances of ChRIS, and provides powerful 3D visualization and real-time collaboration.
Retrieval and sleep both counteract the forgetting of spatial information.
Antony, James W; Paller, Ken A
2018-06-01
Repeatedly studying information is a good way to strengthen memory storage. Nevertheless, testing recall often produces superior long-term retention. Demonstrations of this testing effect, typically with verbal stimuli, have shown that repeated retrieval through testing reduces forgetting. Sleep also benefits memory storage, perhaps through repeated retrieval as well. That is, memories may generally be subject to forgetting that can be counteracted when memories become reactivated, and there are several types of reactivation: (i) via intentional restudying, (ii) via testing, (iii) without provocation during wake, or (iv) during sleep. We thus measured forgetting for spatial material subjected to repeated study or repeated testing followed by retention intervals with sleep versus wake. Four groups of subjects learned a set of visual object-location associations and either restudied the associations or recalled locations given the objects as cues. We found the advantage for restudied over retested information was greater in the PM than AM group. Additional groups tested at 5-min and 1-wk retention intervals confirmed previous findings of greater relative benefits for restudying in the short-term and for retesting in the long-term. Results overall support the conclusion that repeated reactivation through testing or sleeping stabilizes information against forgetting. © 2018 Antony and Paller; Published by Cold Spring Harbor Laboratory Press.
Visualizing and improving the robustness of phase retrieval algorithms
Tripathi, Ashish; Leyffer, Sven; Munson, Todd; ...
2015-06-01
Coherent x-ray diffractive imaging is a novel imaging technique that utilizes phase retrieval and nonlinear optimization methods to image matter at nanometer scales. We explore how the convergence properties of a popular phase retrieval algorithm, Fienup's HIO, behave by introducing a reduced dimensionality problem allowing us to visualize and quantify convergence to local minima and the globally optimal solution. We then introduce generalizations of HIO that improve upon the original algorithm's ability to converge to the globally optimal solution.
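For reference, the textbook form of Fienup's HIO iteration that the paper generalizes can be sketched as below. This assumes measured Fourier magnitudes and a known real-space support mask, and is not the authors' generalized variant.

```python
# Minimal hybrid input-output (HIO) sketch: impose measured Fourier magnitudes,
# then relax real-space constraint violations with the feedback parameter beta.
import numpy as np

def hio(mag, support, n_iter=500, beta=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random(mag.shape)                       # random real-space start
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = mag * np.exp(1j * np.angle(X))          # keep phases, impose magnitudes
        x_prime = np.real(np.fft.ifft2(X))
        violate = (~support) | (x_prime < 0)        # outside support or negative
        x = np.where(violate, x - beta * x_prime, x_prime)
    return x
```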
Compressed domain indexing of losslessly compressed images
NASA Astrophysics Data System (ADS)
Schaefer, Gerald
2001-12-01
Image retrieval and image compression have been pursued separately in the past. Little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper, methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e., they discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods, where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on the understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy-encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
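The first idea, that prediction residuals themselves form a textural signature usable for retrieval, can be illustrated with the hedged sketch below. The paper's actual predictor is not reproduced here; a simple left-neighbor predictor stands in for whichever predictor the lossless codec uses.

```python
# Illustrative compressed-domain descriptor: histogram of prediction residuals.
import numpy as np

def prediction_error_histogram(gray, bins=64):
    """Histogram of residuals from a left-neighbor predictor, as a texture signature."""
    pred = np.empty_like(gray, dtype=float)
    pred[:, 1:] = gray[:, :-1]          # predict each pixel from its left neighbor
    pred[:, 0] = gray[:, 0]
    residual = gray.astype(float) - pred
    hist, _ = np.histogram(residual, bins=bins, range=(-255, 255), density=True)
    return hist

def l1_distance(h1, h2):
    """Compare two residual histograms; smaller means more similar texture."""
    return float(np.abs(h1 - h2).sum())
```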
WORDGRAPH: Keyword-in-Context Visualization for NETSPEAK's Wildcard Search.
Riehmann, Patrick; Gruendl, Henning; Potthast, Martin; Trenkmann, Martin; Stein, Benno; Froehlich, Benno
2012-09-01
The WORDGRAPH helps writers in visually choosing phrases while writing a text. It checks for the commonness of phrases and allows for the retrieval of alternatives by means of wildcard queries. To support such queries, we implement a scalable retrieval engine, which returns high-quality results within milliseconds using a probabilistic retrieval strategy. The results are displayed as a WORDGRAPH visualization or as a textual list. The graphical interface provides an effective means for interactive exploration of search results using filter techniques, query expansion, and navigation. Our observations indicate that, of three investigated retrieval tasks, the textual interface is sufficient for the phrase verification task, whereas both interfaces support context-sensitive word choice, and the WORDGRAPH best supports the exploration of a phrase's context or the underlying corpus. Our user study confirms these observations and shows that WORDGRAPH is generally the preferred interface over the textual result list for queries containing multiple wildcards.
NASA Technical Reports Server (NTRS)
1977-01-01
Components of a videotape storage and retrieval system originally developed for NASA have been adapted as a tool for law enforcement agencies. Ampex Corp., Redwood City, Cal., built a unique system for NASA-Marshall. The first application of professional broadcast technology to computerized record-keeping, it incorporates new equipment for transporting tapes within the system. After completing the NASA system, Ampex continued development, primarily to improve image resolution. The resulting advanced system, known as the Ampex Videofile, offers advantages over microfilm for filing, storing, retrieving, and distributing large volumes of information. The system's computer stores information in digital code rather than in pictorial form. While microfilm allows visual storage of whole documents, it requires a step before usage--developing the film. With Videofile, the actual document is recorded, complete with photos and graphic material, and a picture of the document is available instantly.
A fuzzy measure approach to motion frame analysis for scene detection. M.S. Thesis - Houston Univ.
NASA Technical Reports Server (NTRS)
Leigh, Albert B.; Pal, Sankar K.
1992-01-01
This paper addresses a solution to the problem of scene estimation of motion video data in the fuzzy set theoretic framework. Using fuzzy image feature extractors, a new algorithm is developed to compute the change of information in each of two successive frames to classify scenes. This classification process of raw input visual data can be used to establish structure for correlation. The algorithm attempts to fulfill the need for nonlinear, frame-accurate access to video data for applications such as video editing and visual document archival/retrieval systems in multimedia environments.
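A hedged sketch of the general idea follows: quantify the change of information between successive frames with a fuzzy measure and flag a scene change when it exceeds a threshold. The thesis's specific fuzzy feature extractors are not reproduced; the linear index of fuzziness used here is only one plausible choice, and the threshold is illustrative.

```python
# Toy scene-cut detector based on a fuzzy measure of the inter-frame difference.
import numpy as np

def linear_index_of_fuzziness(gray):
    """2/N * sum(min(mu, 1 - mu)), with mu the gray level normalized to [0, 1]."""
    mu = gray.astype(float) / 255.0
    return 2.0 * np.minimum(mu, 1.0 - mu).mean()

def frame_change(prev_frame, next_frame):
    """Fuzziness of the absolute frame difference; larger means more change."""
    diff = np.abs(next_frame.astype(float) - prev_frame.astype(float))
    return linear_index_of_fuzziness(diff)

def detect_scene_cuts(frames, threshold=0.25):
    """Return frame indices where the change measure exceeds a tunable threshold."""
    return [i for i in range(1, len(frames))
            if frame_change(frames[i - 1], frames[i]) > threshold]
```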
Evaluating Combinations of Ranked Lists and Visualizations of Inter-Document Similarity.
ERIC Educational Resources Information Center
Allan, James; Leuski, Anton; Swan, Russell; Byrd, Donald
2001-01-01
Considers how ideas from document clustering can be used to improve retrieval accuracy of ranked lists in interactive systems and how to evaluate system effectiveness. Describes a TREC (Text Retrieval Conference) study that constructed and evaluated systems that present the user with ranked lists and a visualization of inter-document similarities.…
Masseroli, M; Bonacina, S; Pinciroli, F
2004-01-01
Current developments in distributed information technologies and Java programming make it possible to employ them in the medical arena as well, to support the retrieval, integration and evaluation of heterogeneous data and multimodal images in a web browser environment. With this aim, we used them to implement a client-server architecture based on software agents. The client side is a Java applet running in a web browser that provides a friendly medical user interface to browse and visualize different patient and medical test data, integrating them properly. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. Based on the Java Advanced Imaging API, processing and analysis tools were developed to support the evaluation of remotely retrieved bioimages through the quantification of their features in different regions of interest. The Java platform independence allows the centralized management of the implemented prototype and its deployment to each site where an intranet or internet connection is available. By giving healthcare providers effective support for comprehensively browsing, visualizing and evaluating medical images and records located in different remote repositories, the developed prototype can represent an important aid in providing more efficient diagnoses and medical treatments.
Costa, Daniel G.; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian
2017-01-01
The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field. PMID:28067777
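A toy sketch of a fuzzy configuration rule in the spirit of this approach is given below. The inputs (remaining energy, monitoring relevance) and the output (frame rate) are hypothetical stand-ins rather than the paper's actual reference parameters, and the rule base is illustrative only.

```python
# Hypothetical fuzzy rule base for configuring a visual sensor's frame rate.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suggest_frame_rate(energy, relevance):
    """energy, relevance in [0, 1]; returns a suggested frames-per-second value."""
    low_e, high_e = tri(energy, -0.1, 0.0, 0.6), tri(energy, 0.4, 1.0, 1.1)
    low_r, high_r = tri(relevance, -0.1, 0.0, 0.6), tri(relevance, 0.4, 1.0, 1.1)
    # Rules: high relevance & high energy -> 30 fps; high relevance & low energy -> 15;
    #        low relevance -> 5 fps. Defuzzify by a weighted average of rule strengths.
    rules = [(min(high_r, high_e), 30.0), (min(high_r, low_e), 15.0), (low_r, 5.0)]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den
```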
Differential verbal, visual, and spatial working memory in written language production.
Raulerson, Bascom A; Donovan, Michael J; Whiteford, Alison P; Kellogg, Ronald T
2010-02-01
The contributions of verbal, visual, and spatial working memory to written language production were investigated. Participants composed definitions for nouns while concurrently performing a task which required updating, storing, and retrieving information coded either verbally, visually, or spatially. The present study extended past findings by showing the linguistic encoding of planned conceptual content makes its largest demand on verbal working memory for both low and high frequency nouns. Kellogg, Olive, and Piolat in 2007 found that concrete nouns place substantial demands on visual working memory when imaging the nouns' referents during planning, whereas abstract nouns make no demand. The current study further showed that this pattern was not an artifact of visual working memory being sensitive to manipulation of just any lexical property of the noun prompts. In contrast to past results, writing made a small but detectible demand on spatial working memory.
Visual Based Retrieval Systems and Web Mining--Introduction.
ERIC Educational Resources Information Center
Iyengar, S. S.
2001-01-01
Briefly discusses Web mining and image retrieval techniques, and then presents a summary of articles in this special issue. Articles focus on Web content mining, artificial neural networks as tools for image retrieval, content-based image retrieval systems, and personalizing the Web browsing experience using media agents. (AEF)
Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C
2015-08-12
Human parietal cortex plays a central role in encoding visuospatial information and multiple visual maps exist within the intraparietal sulcus (IPS), with each hemisphere symmetrically representing contralateral visual space. Two forms of hemispheric asymmetries have been identified in parietal cortex ventrolateral to visuotopic IPS. Key attentional processes are localized to right lateral parietal cortex in the temporoparietal junction and long-term memory (LTM) retrieval processes are localized to the left lateral parietal cortex in the angular gyrus. Here, using fMRI, we investigate how spatial representations of visuotopic IPS are influenced by stimulus-guided visuospatial attention and by LTM-guided visuospatial attention. We replicate prior findings that a hemispheric asymmetry emerges under stimulus-guided attention: in the right hemisphere (RH), visual maps IPS0, IPS1, and IPS2 code attentional targets across the visual field; in the left hemisphere (LH), IPS0-2 codes primarily contralateral targets. We report the novel finding that, under LTM-guided attention, both RH and LH IPS0-2 exhibit bilateral responses and hemispheric symmetry re-emerges. Therefore, we demonstrate that both hemispheres of IPS0-2 are independently capable of dynamically changing spatial coding properties as attentional task demands change. These findings have important implications for understanding visuospatial and memory-retrieval deficits in patients with parietal lobe damage. The human parietal lobe contains multiple maps of the external world that spatially guide perception, action, and cognition. Maps in each cerebral hemisphere code information from the opposite side of space, not from the same side, and the two hemispheres are symmetric. Paradoxically, damage to specific parietal regions that lack spatial maps can cause patients to ignore half of space (hemispatial neglect syndrome), but only for right (not left) hemisphere damage. Conversely, the left parietal cortex has been linked to retrieval of vivid memories regardless of space. Here, we investigate possible underlying mechanisms in healthy individuals. We demonstrate two forms of dynamic changes in parietal spatial representations: an asymmetric one for stimulus-guided attention and a symmetric one for long-term memory-guided attention. Copyright © 2015 the authors 0270-6474/15/3511358-06$15.00/0.
Disrupting frontal eye-field activity impairs memory recall.
Wantz, Andrea L; Martarelli, Corinna S; Cazzoli, Dario; Kalla, Roger; Müri, René; Mast, Fred W
2016-04-13
A large body of research has demonstrated that participants preferentially look back to the encoding location when retrieving visual information from memory. However, the role of this 'looking back to nothing' is still debated. The goal of the present study was to extend this line of research by examining whether an important area in the cortical representation of the oculomotor system, the frontal eye field (FEF), is involved in memory retrieval. To interfere with the activity of the FEF, we used inhibitory continuous theta burst stimulation (cTBS). Participants encoded a complex scene before stimulation was applied, and then performed a short-term (immediately after encoding) or long-term (after 24 h) recall task just after cTBS over the right FEF or sham stimulation. cTBS did not affect overall performance, but stimulation and statement type (object vs. location) interacted. cTBS over the right FEF tended to impair object recall sensitivity, whereas there was no effect on location recall sensitivity. These findings suggest that the FEF is involved in retrieving object information from scene memory, supporting the hypothesis that the oculomotor system contributes to memory recall.
Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality
NASA Astrophysics Data System (ADS)
Cherukuru, Nihanth Wagmi
Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive, high spatial and temporal measurement capabilities. While lidar applications early on relied on radial velocity measurements alone, most practical applications in wind farm control and short-term wind prediction require knowledge of the vector wind field. Over the past couple of years, multiple works on lidars have explored three primary methods of retrieving wind vectors: using a homogeneous wind-field assumption, computationally intensive variational methods, and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single and dual Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as a part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied on lidar scans from an offshore wind farm, and validated with data from a cup and vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented to visualize data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors such as Doppler lidars). A methodology using modern game development platforms is presented, and the possibility of using this technology to visualize data from atmospheric sensors in mixed reality is explored and demonstrated with lidar-retrieved wind fields as well as a few earth science datasets for education and outreach activities.
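The simplest of the three retrieval methods mentioned, the homogeneous wind-field assumption, amounts to a small least-squares fit and can be sketched as below. This is an illustrative 2D simplification (vertical wind and elevation angle ignored), not the variational retrieval developed in the study.

```python
# Fit a single horizontal wind vector (u, v) to radial velocities v_r measured at
# several azimuths, assuming v_r = u*sin(az) + v*cos(az) everywhere on the scan.
import numpy as np

def fit_uniform_wind(azimuth_deg, radial_velocity):
    az = np.radians(np.asarray(azimuth_deg, dtype=float))
    A = np.column_stack([np.sin(az), np.cos(az)])          # design matrix
    (u, v), *_ = np.linalg.lstsq(A, np.asarray(radial_velocity, dtype=float), rcond=None)
    speed = float(np.hypot(u, v))
    bearing_deg = float(np.degrees(np.arctan2(u, v)) % 360.0)  # direction the vector points toward
    return u, v, speed, bearing_deg
```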
Representing Where along with What Information in a Model of a Cortical Patch
Roudi, Yasser; Treves, Alessandro
2008-01-01
Behaving in the real world requires flexibly combining and maintaining information about both continuous and discrete variables. In the visual domain, several lines of evidence show that neurons in some cortical networks can simultaneously represent information about the position and identity of objects, and maintain this combined representation when the object is no longer present. The underlying network mechanism for this combined representation is, however, unknown. In this paper, we approach this issue through a theoretical analysis of recurrent networks. We present a model of a cortical network that can retrieve information about the identity of objects from incomplete transient cues, while simultaneously representing their spatial position. Our results show that two factors are important in making this possible: A) a metric organisation of the recurrent connections, and B) a spatially localised change in the linear gain of neurons. Metric connectivity enables a localised retrieval of information about object identity, while gain modulation ensures localisation in the correct position. Importantly, we find that the amount of information that the network can retrieve and retain about identity is strongly affected by the amount of information it maintains about position. This balance can be controlled by global signals that change the neuronal gain. These results show that anatomical and physiological properties, which have long been known to characterise cortical networks, naturally endow them with the ability to maintain a conjunctive representation of the identity and location of objects. PMID:18369416
Division of attention as a function of the number of steps, visual shifts, and memory load
NASA Technical Reports Server (NTRS)
Chechile, R. A.; Butler, K.; Gutowski, W.; Palmer, E. A.
1986-01-01
The effects on divided attention of visual shifts and long-term memory retrieval during a monitoring task are considered. A concurrent vigilance task was standardized under all experimental conditions. The results show that subjects can perform nearly perfectly on all of the time-shared tasks if long-term memory retrieval is not required for monitoring. With the requirement of memory retrieval, however, there was a large decrease in accuracy for all of the time-shared activities. It was concluded that the attentional demand of long-term memory retrieval is appreciable (even for a well-learned motor sequence), and thus memory retrieval results in a sizable reduction in the capability of subjects to divide their attention. A selected bibliography on the divided attention literature is provided.
Cartographic symbol library considering symbol relations based on anti-aliasing graphic library
NASA Astrophysics Data System (ADS)
Mei, Yang; Li, Lin
2007-06-01
Cartographic visualization represents geographic information in map form, which enables us to retrieve useful geospatial information. In a digital environment, the cartographic symbol library is the basis of cartographic visualization and an essential component of a Geographic Information System as well. Existing cartographic symbol libraries have two flaws: one concerns display quality and the other the adjustment of symbol relations. Statistical data presented in this paper indicate that aliasing is a major factor affecting symbol display quality on graphic display devices. Therefore, effective graphic anti-aliasing methods based on a new anti-aliasing algorithm are presented and encapsulated in an anti-aliasing graphic library in the form of a Component Object Model. Furthermore, cartographic visualization should represent feature relations by correctly adjusting symbol relations, in addition to displaying individual features, but current cartographic symbol libraries do not have this capability. This paper creates a cartographic symbol design model to implement the adjustment of symbol relations. Consequently, a cartographic symbol library based on this design model can provide cartographic visualization with relation-adjusting capability. The anti-aliasing graphic library and the cartographic symbol library are evaluated with sample data, and the results show that both libraries achieve better efficiency and effect.
Forms of Memory for Representation of Visual Objects
1991-04-15
Method for the reduction of image content redundancy in large image databases
Tobin, Kenneth William; Karnowski, Thomas P.
2010-03-02
A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between features vectors of an incoming image being considered for entry into the database and feature vectors associated with a most similar of the stored images. Based on said visual similarity parameter value it is determined whether to store or how long to store the feature vectors associated with the incoming image in the database.
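A rough sketch of the storage decision described in this claim is shown below: compare the incoming image's feature vector with its most similar stored neighbor and let that similarity drive whether, or for how long, the new vectors are kept. The cosine similarity, threshold, and retention rule are illustrative choices, not values from the patent.

```python
# Illustrative redundancy check for a CBIR database of feature vectors.
import numpy as np

def nearest_similarity(incoming_vec, stored_vecs):
    """Cosine similarity to the most similar stored feature vector."""
    a = incoming_vec / (np.linalg.norm(incoming_vec) + 1e-9)
    b = stored_vecs / (np.linalg.norm(stored_vecs, axis=1, keepdims=True) + 1e-9)
    return float((b @ a).max())

def storage_decision(incoming_vec, stored_vecs, redundancy_threshold=0.95):
    """Decide whether to store the incoming vectors and for how long to retain them."""
    sim = nearest_similarity(incoming_vec, stored_vecs)
    keep = sim < redundancy_threshold          # near-duplicates add little new content
    retention_days = int(365 * (1.0 - sim))    # keep redundant entries for less time
    return keep, retention_days
```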
Dissecting contributions of prefrontal cortex and fusiform face area to face working memory.
Druzgal, T Jason; D'Esposito, Mark
2003-08-15
Interactions between prefrontal cortex (PFC) and stimulus-specific visual cortical association areas are hypothesized to mediate visual working memory in behaving monkeys. To clarify the roles for homologous regions in humans, event-related fMRI was used to assess neural activity in PFC and fusiform face area (FFA) of subjects performing a delay-recognition task for faces. In both PFC and FFA, activity increased parametrically with memory load during encoding and maintenance of face stimuli, despite quantitative differences in the magnitude of activation. Moreover, timing differences in PFC and FFA activation during memory encoding and retrieval implied a context dependence in the flow of neural information. These results support existing neurophysiological models of visual working memory developed in the nonhuman primate.
Impact of auditory-visual bimodality on lexical retrieval in Alzheimer's disease patients.
Simoes Loureiro, Isabelle; Lefebvre, Laurent
2015-01-01
The aim of this study was to generalize the positive impact of auditory-visual bimodality on lexical retrieval in Alzheimer's disease (AD) patients. In practice, the naming skills of healthy elderly persons improve when additional sensory signals are included. The hypothesis of this study was that the same influence would be observable in AD patients. Sixty elderly patients separated into three groups (healthy subjects, stage 1 AD patients, and stage 2 AD patients) were tested with a battery of naming tasks comprising three different modalities: a visual modality, an auditory modality, and a visual and auditory modality (bimodality). Our results reveal the positive influence of bimodality on the accuracy with which bimodal items are named (when compared with unimodal items) and their latency (when compared with unimodal auditory items). These results suggest that multisensory enrichment can improve lexical retrieval in AD patients.
Toward the establishment of design guidelines for effective 3D perspective interfaces
NASA Astrophysics Data System (ADS)
Fitzhugh, Elisabeth; Dixon, Sharon; Aleva, Denise; Smith, Eric; Ghrayeb, Joseph; Douglas, Lisa
2009-05-01
The propagation of information operation technologies, with correspondingly vast amounts of complex network information to be conveyed, significantly impacts operator workload. Information management research is rife with efforts to develop schemes to aid operators to identify, review, organize, and retrieve the wealth of available data. Data may take on such distinct forms as intelligence libraries, logistics databases, operational environment models, or network topologies. Increased use of taxonomies and semantic technologies opens opportunities to employ network visualization as a display mechanism for diverse information aggregations. The broad applicability of network visualizations is still being tested, but in current usage, the complexity of densely populated abstract networks suggests the potential utility of 3D. Employment of 2.5D in network visualization, using classic perceptual cues, creates a 3D experience within a 2D medium. It is anticipated that use of 3D perspective (2.5D) will enhance user ability to visually inspect large, complex, multidimensional networks. Current research for 2.5D visualizations demonstrates that display attributes, including color, shape, size, lighting, atmospheric effects, and shadows, significantly impact operator experience. However, guidelines for utilization of attributes in display design are limited. This paper discusses pilot experimentation intended to identify potential problem areas arising from these cues and determine how best to optimize perceptual cue settings. Development of optimized design guidelines will ensure that future experiments, comparing network displays with other visualizations, are not confounded or impeded by suboptimal attribute characterization. Current experimentation is anticipated to support development of cost-effective, visually effective methods to implement 3D in military applications.
The cortical basis of true memory and false memory for motion.
Karanian, Jessica M; Slotnick, Scott D
2014-02-01
Behavioral evidence indicates that false memory, like true memory, can be rich in sensory detail. By contrast, there is fMRI evidence that true memory for visual information produces greater activity in earlier visual regions than false memory, which suggests true memory is associated with greater sensory detail. However, false memory in previous fMRI paradigms may have lacked sufficient sensory detail to recruit earlier visual processing regions. To investigate this possibility in the present fMRI study, we employed a paradigm that produced feature-specific false memory with a high degree of visual detail. During the encoding phase, moving or stationary abstract shapes were presented to the left or right of fixation. During the retrieval phase, shapes from encoding were presented at fixation and participants classified each item as previously "moving" or "stationary" within each visual field. Consistent with previous fMRI findings, true memory but not false memory for motion activated motion processing region MT+, while both true memory and false memory activated later cortical processing regions. In addition, false memory but not true memory for motion activated language processing regions. The present findings indicate that true memory activates earlier visual regions to a greater degree than false memory, even under conditions of detailed retrieval. Thus, the dissociation between previous behavioral findings and fMRI findings does not appear to be task dependent. Future work will be needed to assess whether the same pattern of true memory and false memory activity is observed for different sensory modalities. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Bahrick, Lorraine E.; Hernandez-Reif, Maria; Pickens, Jeffrey N.
1997-01-01
Tested hypothesis from Bahrick and Pickens' infant attention model that retrieval cues increase memory accessibility and shift visual preferences toward greater novelty to resemble recent memories. Found that after retention intervals associated with remote or intermediate memory, previous familiarity preferences shifted to null or novelty…
Delayed Match Retrieval: A Novel Anticipation-Based Visual Working Memory Paradigm
ERIC Educational Resources Information Center
Kaldy, Zsuzsa; Guillory, Sylvia B.; Blaser, Erik
2016-01-01
We tested 8- and 10-month-old infants' visual working memory (VWM) for object-location bindings--"what is where"--with a novel paradigm, Delayed Match Retrieval, that measured infants' anticipatory gaze responses (using a Tobii T120 eye tracker). In an inversion of Delayed-Match-to-Sample tasks and with inspiration from the game…
NASA Astrophysics Data System (ADS)
Juniati, E.; Arrofiqoh, E. N.
2017-09-01
Information extraction from remote sensing data, especially land cover, can be obtained by digital classification. In practice, some people are more comfortable using visual interpretation to retrieve land cover information; however, visual interpretation is highly influenced by the subjectivity and knowledge of the interpreter and is also time-consuming. Digital classification can be done in several ways, depending on the defined mapping approach and the assumptions on data distribution. This study compared several classification methods for different data types at the same location. The data used were Landsat 8 satellite imagery, SPOT 6 imagery, and orthophotos. In practice, these data are used to produce land cover maps at 1:50,000 scale for Landsat, 1:25,000 scale for SPOT, and 1:5,000 scale for orthophotos, but using visual interpretation to retrieve the information. Maximum likelihood classifiers (MLC), which use a pixel-based, parametric approach, were applied to these data, as were artificial neural network classifiers, which use a pixel-based, non-parametric approach. Moreover, this study applied object-based classifiers to the data. The classification system implemented is the land cover classification of the Indonesian topographic map. The classification was applied to each data source to recognize patterns and to assess the consistency of the land cover maps produced from each dataset. Furthermore, the study analyses the benefits and limitations of each method.
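Of the classifiers compared, the pixel-based parametric one (maximum likelihood) is the most compact to sketch: fit a Gaussian per land cover class from training pixels and assign each pixel to the class with the highest log-likelihood. The sketch below is illustrative and assumes band values stacked as rows of shape (n_pixels, n_bands); it is not the software used in the study.

```python
# Minimal Gaussian maximum likelihood classifier for multispectral pixels.
import numpy as np

def fit_mlc(train_pixels, train_labels):
    """Estimate a per-class mean vector and covariance matrix from training pixels."""
    model = {}
    for c in np.unique(train_labels):
        x = train_pixels[train_labels == c]
        model[c] = (x.mean(axis=0), np.cov(x, rowvar=False))
    return model

def classify_mlc(pixels, model):
    """Assign each pixel to the class maximizing the Gaussian log-likelihood."""
    classes = sorted(model)
    scores = []
    for c in classes:
        mean, cov = model[c]
        inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
        d = pixels - mean
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv, d)))
    return np.array(classes)[np.argmax(np.vstack(scores), axis=0)]
```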
Remote Handled WIPP Canisters at Los Alamos National Laboratory Characterized for Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, J.; Gonzales, W.
2007-07-01
The Los Alamos National Laboratory (LANL) is pursuing retrieval, transportation, and disposal of 16 remote handled transuranic waste canisters stored below ground in shafts since 1994. These canisters were retrievably stored in the shafts to await Nuclear Regulatory Commission certification of the Model Number RH-TRU 72B transportation cask and authorization of the Waste Isolation Pilot Plant (WIPP) to accept the canisters for disposal. Retrieval planning included radiological characterization and visual inspection of the canisters to confirm historical records, verify container integrity, determine proper personnel protection for the retrieval operations, provide radiological dose and exposure rate data for retrieval operations, and to provide exterior radiological contamination data. The radiological characterization and visual inspection of the canisters was performed in May 2006. The effort required the development of remote techniques and equipment due to the potential for personnel exposure to radiological doses approaching 300 R/hr. Innovations included the use of two nested 1.5 meter (m) (5-feet [ft]) long concrete culvert pipes (1.1-m [42 inch (in.)] and 1.5-m [60-in] diameter, respectively) as radiological shielding and collapsible electrostatic dusting wands to collect radiological swipe samples from the annular space between the canister and shaft wall. Visual inspection indicated that the canisters are in good condition with little or no rust, the welded seams are intact, and ten of the canisters include hydrogen gas sampling equipment on the pintle that will have to be removed prior to retrieval. The visual inspection also provided six canister identification numbers that matched historical storage records. The exterior radiological data indicated alpha and beta contamination below LANL release criteria and radiological dose and exposure rates lower than expected based upon historical data and modeling of the canister contents. (authors)
Sterpenich, Virginie; Schmidt, Christina; Albouy, Geneviève; Matarazzo, Luca; Vanhaudenhuyse, Audrey; Boveroux, Pierre; Degueldre, Christian; Leclercq, Yves; Balteau, Evelyne; Collette, Fabienne; Luxen, André; Phillips, Christophe; Maquet, Pierre
2014-06-01
Memory reactivation appears to be a fundamental process in memory consolidation. In this study we tested the influence of memory reactivation during rapid eye movement (REM) sleep on memory performance and brain responses at retrieval in healthy human participants. Fifty-six healthy subjects (28 women and 28 men, age [mean ± standard deviation]: 21.6 ± 2.2 y) participated in this functional magnetic resonance imaging (fMRI) study. Auditory cues were associated with pictures of faces during their encoding. These memory cues delivered during REM sleep enhanced subsequent accurate recollections but also false recognitions. These results suggest that reactivated memories interacted with semantically related representations, and induced new creative associations, which subsequently reduced the distinction between new and previously encoded exemplars. Cues had no effect if presented during stage 2 sleep, or if they were not associated with faces during encoding. Functional magnetic resonance imaging revealed that following exposure to conditioned cues during REM sleep, responses to faces during retrieval were enhanced both in a visual area and in a cortical region of multisensory (auditory-visual) convergence. These results show that reactivating memories during REM sleep enhances cortical responses during retrieval, suggesting the integration of recent memories within cortical circuits, favoring the generalization and schematization of the information.
Storage and retrieval of digital images in dermatology.
Bittorf, A; Krejci-Papa, N C; Diepgen, T L
1995-11-01
Differential diagnosis in dermatology relies on the interpretation of visual information in the form of clinical and histopathological images. Up until now, reference images have had to be retrieved from textbooks and/or appropriate journals. To overcome inherent limitations of those storage media with respect to the number of images stored, display, and search parameters available, we designed a computer-based database of digitized dermatologic images. Images were taken from the photo archive of the Dermatological Clinic of the University of Erlangen. A database was designed using the Entity-Relationship approach. It was implemented on a PC-Windows platform using MS Access* and MS Visual Basic®. A Sparc 10 workstation running the CERN Hypertext Transfer Protocol Daemon (httpd) 3.0 pre 6 software served as the WWW server. For compressed storage on a hard drive, a quality factor of 60 allowed on-screen differential diagnosis and corresponded to a compression factor of 1:35 for clinical images and 1:40 for histopathological images. Hierarchical keys of clinical or histopathological criteria permitted multi-criteria searches. A script using the Common Gateway Interface (CGI) enabled remote search and image retrieval via the World-Wide-Web (W3). A dermatologic image database, featuring clinical and histopathological images, was constructed which allows for multi-parameter searches and world-wide remote access.
Computer systems and methods for visualizing data
Stolte, Chris; Hanrahan, Patrick
2010-07-13
A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
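As a rough illustration of the idea described in this claim, the following Python sketch maps a two-level dimension hierarchy onto two components of a plot (one subplot per level-1 value, one bar per level-2 value). The dataset, column names, and hierarchy below are hypothetical and are not taken from the patent.

```python
# Sketch only: map two levels of a dimension hierarchy to plot components.
# The dataset, hierarchy (region -> city), and measure (sales) are made up.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.DataFrame({
    "region": ["East", "East", "West", "West"],    # level 1 of the hierarchy
    "city":   ["NYC", "Boston", "LA", "Seattle"],  # level 2 of the hierarchy
    "sales":  [120, 80, 150, 60],                  # the measure
})

# "Query" the dataset per a simple specification: one subplot per level-1
# value, one bar per level-2 value within it, populated with the measure.
fig, axes = plt.subplots(1, data["region"].nunique(), sharey=True)
for ax, (region, group) in zip(axes, data.groupby("region")):
    ax.bar(group["city"], group["sales"])
    ax.set_title(region)
plt.show()
```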
Computer systems and methods for visualizing data
Stolte, Chris; Hanrahan, Patrick
2013-01-29
A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
NASA Technical Reports Server (NTRS)
Ragusa, James M.; Orwig, Gary; Gilliam, Michael; Blacklock, David; Shaykhian, Ali
1994-01-01
Status is given of an applications investigation on the potential for using an expert system shell for classification and retrieval of high resolution, digital, color space shuttle closeout photography. This NASA funded activity has focused on the use of integrated information technologies to intelligently classify and retrieve still imagery from a large, electronically stored collection. A space shuttle processing problem is identified, a working prototype system is described, and commercial applications are identified. A conclusion reached is that the developed system has distinct advantages over the present manual system and cost efficiencies will result as the system is implemented. Further, commercial potential exists for this integrated technology.
Toward visual user interfaces supporting collaborative multimedia content management
NASA Astrophysics Data System (ADS)
Husein, Fathi; Leissler, Martin; Hemmje, Matthias
2000-12-01
Supporting collaborative multimedia content management activities, such as image and video acquisition, exploration, and access dialogues between naive users and multimedia information systems, is a non-trivial task. Although a wide variety of experimental and prototypical multimedia storage technologies as well as corresponding indexing and retrieval engines are available, most of them lack appropriate support for collaborative end-user oriented user interface front ends. The development of advanced user adaptable interfaces is necessary for building collaborative multimedia information-space presentations based upon advanced tools for information browsing, searching, filtering, and brokering to be applied on potentially very large and highly dynamic multimedia collections with a large number of users and user groups. Therefore, the development of advanced and at the same time adaptable and collaborative computer graphical information presentation schemes that make it easy to apply adequate visual metaphors for defined target user stereotypes has to become a key focus within ongoing research activities trying to support collaborative information work with multimedia collections.
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-01-01
Objective Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today’s keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users’ information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. Materials and Methods The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Conclusion Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. PMID:26335986
Visualization and interaction tools for aerial photograph mosaics
NASA Astrophysics Data System (ADS)
Fernandes, João Pedro; Fonseca, Alexandra; Pereira, Luís; Faria, Adriano; Figueira, Helder; Henriques, Inês; Garção, Rita; Câmara, António
1997-05-01
This paper describes the development of a digital spatial library based on mosaics of digital orthophotos, called Interactive Portugal, that will enable users both to retrieve geospatial information existing in the Portuguese National System for Geographic Information World Wide Web server, and to develop local databases connected to the main system. A set of navigation, interaction, and visualization tools are proposed and discussed. They include sketching, dynamic sketching, and navigation capabilities over the digital orthophoto mosaics. Main applications of this digital spatial library are pointed out and discussed, namely for education, professional, and tourism markets. Future developments are considered. These developments are related to user reactions, technological advancements, and projects that also aim at delivering and exploring digital imagery on the World Wide Web. Future capabilities for site selection and change detection are also considered.
Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video
NASA Astrophysics Data System (ADS)
Yeo, Boon-Lock; Liu, Bede
1996-03-01
Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
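A minimal sketch of this kind of caption flagging, assuming the reduced "DC images" have already been reconstructed elsewhere from the compressed MPEG stream as small grayscale NumPy arrays; the band size, thresholds, and edge-density heuristic are hypothetical stand-ins for the authors' detection scheme.

```python
# Sketch only: flag frames whose lower band shows dense horizontal edges,
# a rough proxy for embedded captions on reduced (DC-coefficient) images.
import numpy as np

def has_caption(dc_image, band=0.25, edge_thresh=30, density_thresh=0.15):
    h, w = dc_image.shape
    lower = dc_image[int(h * (1 - band)):, :].astype(float)
    grad = np.abs(np.diff(lower, axis=1))        # cheap horizontal edge measure
    density = (grad > edge_thresh).mean()        # fraction of strong edges
    return density > density_thresh

frame = np.random.randint(0, 256, (36, 44))      # stand-in for a DC image
print(has_caption(frame))
```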
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2017-01-01
Medical image collections contain a wealth of information which can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. A saliency detector is employed to automatically identify regions of interest like tumors, fractures, and calcified spots in images prior to feature extraction. Neuronal activation features, termed neural codes, from different CNN layers are comprehensively studied to identify the most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are the most suitable for representing medical images. The neural codes extracted from the entire image and the salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor which is used for indexing and retrieval. Finally, locality sensitive hashing techniques are applied on the SiNC descriptor to acquire short binary codes for allowing efficient retrieval in large scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches. PMID:28771497
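A toy sketch of the fusion-and-hashing step, assuming the two neural codes have already been extracted by a CNN; the vectors here are random stand-ins, and the random-hyperplane hashing is a generic LSH variant rather than necessarily the one used in the paper.

```python
# Sketch only: fuse whole-image and salient-region neural codes, then hash
# to short binary codes with random-hyperplane LSH. Codes are random here.
import numpy as np

rng = np.random.default_rng(0)
full_code = rng.standard_normal(4096)      # neural code of the whole image
salient_code = rng.standard_normal(4096)   # neural code of the salient region

sinc = np.concatenate([full_code, salient_code])   # fused descriptor

n_bits = 64
hyperplanes = rng.standard_normal((n_bits, sinc.size))
binary_code = (hyperplanes @ sinc > 0).astype(np.uint8)   # 64-bit hash

def hamming(a, b):
    # distance between two binary codes, used for fast candidate retrieval
    return int(np.count_nonzero(a != b))
```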
Visualization Techniques for Computer Network Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaver, Justin M; Steed, Chad A; Patton, Robert M
2011-01-01
Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.
Cross-Domain Shoe Retrieval with a Semantic Hierarchy of Attribute Classification Network.
Zhan, Huijing; Shi, Boxin; Kot, Alex C
2017-08-04
Cross-domain shoe image retrieval is a challenging problem, because the query photo from the street domain (daily life scenario) and the reference photo in the online domain (online shop images) have significant visual differences due to the viewpoint and scale variation, self-occlusion, and cluttered background. This paper proposes the Semantic Hierarchy Of attributE Convolutional Neural Network (SHOE-CNN) with a three-level feature representation for discriminative shoe feature expression and efficient retrieval. The SHOE-CNN with its newly designed loss function systematically merges semantic attributes of closer visual appearances to prevent shoe images with obvious visual differences from being confused with each other; the features extracted from image, region, and part levels effectively match the shoe images across different domains. We collect a large-scale shoe dataset composed of 14341 street domain and 12652 corresponding online domain images with fine-grained attributes to train our network and evaluate our system. The top-20 retrieval accuracy improves significantly over the solution with the pre-trained CNN features.
Description and evaluation of the CASA dual-Doppler system
NASA Astrophysics Data System (ADS)
Martinez, Matthew
2011-12-01
Long-range weather surveillance radars are designed for observing weather events for hundreds of kilometers from the radar and operate over a large coverage domain independently of weather conditions. As a result, spatial resolution is lost and temporal sampling of the weather phenomena is limited. Due to the curvature of the Earth, long-range weather radars tend to make the majority of their precipitation and wind observations in the middle to upper troposphere, resulting in missed features associated with severe weather occurring in the lowest three kilometers of the troposphere. The spacing of long-range weather radars in the United States limits the feasibility of using dual-Doppler wind retrievals that would provide valuable information on the kinematics of weather events to end-users and researchers. The National Science Foundation Center for Collaborative Adaptive Sensing of the Atmosphere (CASA) aims to change the current weather sensing model by increasing coverage of the lowest three kilometers of the troposphere using densely spaced networked short-range weather radars. CASA has deployed a network of these radars in south-western Oklahoma, known as Integrated Project 1 (IP1). The individual radars are adaptively steered by an automated system known as the Meteorological Command and Control (MCC). The geometry of the IP1 network is such that the coverage domains of the individual radars overlap. A dual-Doppler system has been developed for the IP1 network which takes advantage of the overlapping coverage domains. The system comprises two subsystems: scan optimization and wind field retrieval. The scan strategy subsystem uses the DCAS model and the number of dual-Doppler pairs in the IP1 network to minimize the normalized standard deviation in the wind field retrieval. The scan strategy subsystem also minimizes the synchronization error between two radars. The retrieval itself comprises two steps: data resampling and the retrieval process. The resampling step maps data collected in radar coordinates to a common Cartesian grid. The retrieval process uses the radial velocity measurements to estimate the northward, eastward, and vertical components of the wind. The error in the retrieval is related to the beam crossing angle. The best retrievals occur at beam crossing angles greater than 30 degrees. During operations, statistics on the scan strategy and wind field retrievals are collected in real time. For the scan strategy subsystem, statistics on the beam crossing angles, maximum elevation angle, number of elevation angles, maximum observable height, and synchronization time between radars in a pair are collected by the MCC. These statistics are used to evaluate the performance of the scan strategy subsystem. Observations of a strong wind event occurring on April 2, 2010 are used to evaluate the decision process associated with the scan strategy optimization. For the retrieval subsystem, the normalized standard deviation for the wind field retrieval is used to evaluate the quality of the retrieval. Wind fields from an EF2 tornado observed on May 14, 2009 are used to evaluate the quality of the wind field retrievals in hazardous wind events. Two techniques for visualizing vector fields are available: streamlines and arrows. Each visualization technique is evaluated based on the task of visualizing small- and large-scale phenomena. Applications of the wind field retrievals include the computation of the vorticity and divergence fields.
Vorticity and divergence for an EF2 tornado observed on May 14, 2009 are evaluated against vorticity and divergence for other observed tornadoes.
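A minimal sketch of the retrieval step described above, assuming each grid point is observed by two radars: the horizontal wind components (u, v) are estimated from the two radial velocities by least squares, ignoring the vertical component and beam elevation. The azimuths and velocities below are made-up values, not IP1 data.

```python
# Sketch only: least-squares dual-Doppler retrieval of (u, v) at one grid
# point from two radial velocities; azimuths are degrees clockwise from north.
import numpy as np

def retrieve_uv(vr, azimuth_deg):
    az = np.radians(azimuth_deg)
    # radial velocity model (flat beam): vr = u*sin(az) + v*cos(az)
    A = np.column_stack([np.sin(az), np.cos(az)])
    (u, v), *_ = np.linalg.lstsq(A, np.asarray(vr), rcond=None)
    return u, v

crossing_angle = abs(120.0 - 60.0)     # retrieval quality degrades below ~30 deg
u, v = retrieve_uv(vr=[12.0, -3.5], azimuth_deg=[60.0, 120.0])
print(u, v, crossing_angle)
```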
Neural reactivation reveals mechanisms for updating memory
Kuhl, Brice A.; Bainbridge, Wilma A.; Chun, Marvin M.
2012-01-01
Our ability to remember new information is often compromised by competition from prior learning, leading to many instances of forgetting. One of the challenges in studying why these lapses occur and how they can be prevented is that it is methodologically difficult to ‘see’ competition between memories as it occurs. Here, we used multi-voxel pattern analysis of human fMRI data to measure the neural reactivation of both older (competing) and newer (target) memories during individual attempts to retrieve newer memories. Of central interest was (a) whether older memories were reactivated during retrieval of newer memories, (b) how reactivation of older memories related to retrieval performance, and (c) whether neural mechanisms engaged during the encoding of newer memories were predictive of neural competition experienced during retrieval. Our results indicate that older and newer visual memories were often simultaneously reactivated in ventral temporal cortex—even when target memories were successfully retrieved. Importantly, stronger reactivation of older memories was associated with less accurate retrieval of newer memories, slower mnemonic decisions, and increased activity in anterior cingulate cortex. Finally, greater activity in the inferior frontal gyrus during the encoding of newer memories (memory updating) predicted lower competition in ventral temporal cortex during subsequent retrieval. Together, these results provide novel insight into how older memories compete with newer memories and specify neural mechanisms that allow competition to be overcome and memories to be updated. PMID:22399768
Cruse, Damian; Wilding, Edward L
2011-06-01
In a pair of recent studies, frontally distributed event-related potential (ERP) indices of two distinct post-retrieval processes were identified. It has been proposed that one of these processes operates over any kind of task-relevant information in service of task demands, while the other operates selectively over recovered contextual (episodic) information. The experiment described here was designed to test this account, by requiring retrieval of different kinds of contextual information to that required in previous relevant studies. Participants heard words spoken in either a male or female voice at study and ERPs were acquired at test where all words were presented visually. Half of the test words had been spoken at study. Participants first made an old/new judgment, distinguishing via key press between studied and unstudied words. For words judged 'old', participants indicated the voice in which the word had been spoken at study, and their confidence (high/low) in the voice judgment. There was evidence for only one of the two frontal old/new effects that had been identified in the previous studies. One possibility is that the ERP effect in previous studies that was tied specifically to recollection reflects processes operating over only some kinds of contextual information. An alternative is that the index reflects processes that are engaged primarily when there are few contextual features that distinguish between studied stimuli. Copyright © 2011 Elsevier Ltd. All rights reserved.
When the “I” Looks at the “Me”: Autobiographical Memory, Visual Perspective, and the Self
Sutin, Angelina R.; Robins, Richard W.
2009-01-01
This article presents a theoretical model of the self processes involved in autobiographical memories and proposes competing hypotheses for the role of visual perspective in autobiographical memory retrieval. Autobiographical memories can be retrieved from either the 1st person perspective, in which individuals see the event through their own eyes, or from the 3rd person perspective, in which individuals see themselves and the event from the perspective of an external observer. A growing body of research suggests that the visual perspective from which a memory is retrieved has important implications for a person's thoughts, feelings, and goals, and is integrally related to a host of self-evaluative processes. We review the relevant research literature, present our theoretical model, and outline directions for future research. PMID:18848783
When the "I" looks at the "Me": autobiographical memory, visual perspective, and the self.
Sutin, Angelina R; Robins, Richard W
2008-12-01
This article presents a theoretical model of the self processes involved in autobiographical memories and proposes competing hypotheses for the role of visual perspective in autobiographical memory retrieval. Autobiographical memories can be retrieved from either the 1st person perspective, in which individuals see the event through their own eyes, or from the 3rd person perspective, in which individuals see themselves and the event from the perspective of an external observer. A growing body of research suggests that the visual perspective from which a memory is retrieved has important implications for a person's thoughts, feelings, and goals, and is integrally related to a host of self-evaluative processes. We review the relevant research literature, present our theoretical model, and outline directions for future research.
Large-Scale Partial-Duplicate Image Retrieval and Its Applications
2016-04-23
The explosive growth of Internet Media (partial-duplicate/similar images, 3D objects, 3D models, etc.) sheds bright...light on many promising applications in forensics, surveillance, 3D animation, mobile visual search, and 3D model/object search. Compared with the...and stable spatial configuration. Compared with the general 2D objects, 3D models/objects consist of 3D data information (typically a list of
Evaluation of Domain-Specific Collaboration Interfaces for Team Command and Control Tasks
2012-05-01
Virtual Whiteboard: cognitive theories relating the utilization, storage, and retrieval of verbal and spatial information, such as ... [a table of subscale abbreviations spills into the extracted abstract] ... driven by the auditory linguistic (AL), short-term memory (STM), spatial attentive (SA), visual temporal (VT), and vocal process (V) subscales.
LD2SNPing: linkage disequilibrium plotter and RFLP enzyme mining for tag SNPs
Chang, Hsueh-Wei; Chuang, Li-Yeh; Chang, Yan-Jhu; Cheng, Yu-Huei; Hung, Yu-Chen; Chen, Hsiang-Chi; Yang, Cheng-Hong
2009-01-01
Background Linkage disequilibrium (LD) mapping is commonly used to evaluate markers for genome-wide association studies. Most types of LD software focus strictly on LD analysis and visualization, but lack supporting services for genotyping. Results We developed a freeware called LD2SNPing, which provides a complete package of mining tools for genotyping and LD analysis environments. The software provides SNP ID- and gene-centric online retrievals for SNP information and tag SNP selection from dbSNP/NCBI and HapMap, respectively. Restriction fragment length polymorphism (RFLP) enzyme information for SNP genotype is available to all SNP IDs and tag SNPs. Single and multiple SNP inputs are possible in order to perform LD analysis by online retrieval from HapMap and NCBI. An LD statistics section provides D, D', r2, δQ, ρ, and the P values of the Hardy-Weinberg Equilibrium for each SNP marker, and Chi-square and likelihood-ratio tests for the pair-wise association of two SNPs in LD calculation. Finally, 2D and 3D plots, as well as plain-text output of the results, can be selected. Conclusion LD2SNPing thus provides a novel visualization environment for multiple SNP input, which facilitates SNP association studies. The software, user manual, and tutorial are freely available at . PMID:19500380
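For reference, a short sketch of the pairwise LD statistics D, D', and r² mentioned above, computed from haplotype counts; the counts are hypothetical and the snippet is not part of LD2SNPing.

```python
# Sketch only: pairwise LD statistics from hypothetical haplotype counts for
# alleles A/a at locus 1 and B/b at locus 2.
def ld_stats(n_AB, n_Ab, n_aB, n_ab):
    n = n_AB + n_Ab + n_aB + n_ab
    p_AB = n_AB / n                      # haplotype frequency
    p_A = (n_AB + n_Ab) / n              # allele frequencies
    p_B = (n_AB + n_aB) / n
    D = p_AB - p_A * p_B                 # raw disequilibrium coefficient
    if D >= 0:
        d_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
    else:
        d_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
    d_prime = abs(D) / d_max if d_max > 0 else 0.0
    r2 = D ** 2 / (p_A * (1 - p_A) * p_B * (1 - p_B))
    return D, d_prime, r2

print(ld_stats(400, 100, 80, 420))       # D=0.16, D'~0.67, r2~0.41
```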
Program Helps Generate And Manage Graphics
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
Living Color Frame Maker (LCFM) computer program generates computer-graphics frames. Graphical frames saved as text files, in readable and disclosed format, easily retrieved and manipulated by user programs for wide range of real-time visual information applications. LCFM implemented in frame-based expert system for visual aids in management of systems. Monitoring, diagnosis, and/or control, diagrams of circuits or systems brought to "life" by use of designated video colors and intensities to symbolize status of hardware components (via real-time feedback from sensors). Status of systems can be displayed. Written in C++ using Borland C++ 2.0 compiler for IBM PC-series computers and compatible computers running MS-DOS.
NASA Astrophysics Data System (ADS)
Kase, Sue E.; Vanni, Michelle; Knight, Joanne A.; Su, Yu; Yan, Xifeng
2016-05-01
Within operational environments, decisions must be made quickly based on the information available. Identifying an appropriate knowledge base and accurately formulating a search query are critical tasks for decision-making effectiveness in dynamic situations. The spread of graph data management tools for accessing large graph databases is a rapidly emerging research area of potential benefit to the intelligence community. A graph representation provides a natural way of modeling data in a wide variety of domains. Graph structures use nodes, edges, and properties to represent and store data. This research investigates the advantages of information search by graph query initiated by the analyst and interactively refined within the contextual dimensions of the answer space toward a solution. The paper introduces SLQ, a user-friendly graph querying system enabling the visual formulation of schemaless and structureless graph queries. SLQ is demonstrated with an intelligence analyst information search scenario focused on identifying individuals responsible for manufacturing a mosquito-hosted deadly virus. The scenario highlights the interactive construction of graph queries without prior training in complex query languages or graph databases, intuitive navigation through the problem space, and visualization of results in graphical format.
Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.
André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2011-01-01
Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground-truth. When the available ground-truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived similarity ground-truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate against the generated ground-truth a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived similarity ground-truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
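A schematic sketch of the metric-learning step, assuming perceived-similarity judgments are available as triplets (anchor, similar, dissimilar); the weighted L1 distance, hinge update, and random signatures below are illustrative stand-ins for the authors' margin-based cost, not their exact formulation.

```python
# Sketch only: learn non-negative weights over visual-word bins so that
# similar video pairs end up closer than dissimilar ones.
import numpy as np

rng = np.random.default_rng(1)
dim, margin, lr = 50, 1.0, 0.01
w = np.ones(dim)                          # weights over visual-word bins

def wdist(w, x, y):                       # weighted L1 distance between signatures
    return np.sum(w * np.abs(x - y))

for _ in range(200):
    # hypothetical triplet: (a, p) perceived similar, (a, n) perceived dissimilar
    a, p, n = rng.random(dim), rng.random(dim), rng.random(dim)
    loss = margin + wdist(w, a, p) - wdist(w, a, n)
    if loss > 0:                          # hinge active: step along its gradient
        w = np.clip(w - lr * (np.abs(a - p) - np.abs(a - n)), 0, None)
```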
Proactive Support of Internet Browsing when Searching for Relevant Health Information.
Rurik, Clas; Zowalla, Richard; Wiesner, Martin; Pfeifer, Daniel
2015-01-01
Many people use the Internet as one of the primary sources of health information. This is due to the high volume and easy access of freely available information regarding diseases, diagnoses and treatments. However, users may find it difficult to retrieve information which is easily understandable and does not require a deep medical background. In this paper, we present a new kind of Web browser add-on, in order to proactively support users when searching for relevant health information. Our add-on not only visualizes the understandability of displayed medical text but also provides further recommendations of Web pages which hold similar content but are potentially easier to comprehend.
Jabeen, Safia; Mehmood, Zahid; Mahmood, Toqeer; Saba, Tanzila; Rehman, Amjad; Mahmood, Muhammad Tariq
2018-01-01
For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual words fusion of speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor whereas FREAK is a dense descriptor. Moreover, SURF is a scale and rotation-invariant descriptor that performs better in the case of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, geometric, and photometric deformations. It also performs better at low illumination within an image as compared to the FREAK descriptor. In contrast, FREAK is a retina-inspired speedy descriptor that performs better for classification-based problems as compared to the SURF descriptor. Experimental results show that the proposed technique based on the visual words fusion of SURF-FREAK descriptors combines the features of both descriptors and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that the proposed technique based on visual words fusion significantly improved the performance of the CBIR as compared to the feature fusion of both descriptors and to state-of-the-art image retrieval techniques. PMID:29694429
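A generic sketch of "visual words fusion": two bag-of-visual-words histograms, one per descriptor type, are built from separate codebooks and concatenated into a single signature. The descriptors and codebooks below are random stand-ins rather than real SURF or FREAK outputs.

```python
# Sketch only: concatenate two bag-of-visual-words histograms built from
# separate codebooks (one per descriptor type).
import numpy as np

def bovw_histogram(descriptors, codebook):
    # assign each local descriptor to its nearest visual word, then normalize
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

rng = np.random.default_rng(2)
surf_like = rng.random((120, 64))      # stand-in for SURF descriptors
freak_like = rng.random((120, 32))     # stand-in for FREAK descriptors
cb_surf, cb_freak = rng.random((200, 64)), rng.random((200, 32))

fused = np.concatenate([bovw_histogram(surf_like, cb_surf),
                        bovw_histogram(freak_like, cb_freak)])
```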
Oculomotor preparation as a rehearsal mechanism in spatial working memory.
Pearson, David G; Ball, Keira; Smith, Daniel T
2014-09-01
There is little consensus regarding the specific processes responsible for encoding, maintenance, and retrieval of information in visuo-spatial working memory (VSWM). One influential theory is that VSWM may involve activation of the eye-movement (oculomotor) system. In this study we experimentally prevented healthy participants from planning or executing saccadic eye-movements during the encoding, maintenance, and retrieval stages of visual and spatial working memory tasks. Participants experienced a significant reduction in spatial memory span only when oculomotor preparation was prevented during encoding or maintenance. In contrast there was no reduction when oculomotor preparation was prevented only during retrieval. These results show that (a) involvement of the oculomotor system is necessary for optimal maintenance of directly-indicated locations in spatial working memory and (b) oculomotor preparation is not necessary during retrieval from spatial working memory. We propose that this study is the first to unambiguously demonstrate that the oculomotor system contributes to the maintenance of spatial locations in working memory independently from the involvement of covert attention. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Audio-guided audiovisual data segmentation, indexing, and retrieval
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1998-12-01
While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time- frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
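A toy version of the coarse-level step, assuming raw audio frames are available as NumPy arrays; the energy and zero-crossing-rate thresholds are illustrative and far simpler than the morphological and statistical analysis used in the paper.

```python
# Sketch only: coarse audio labeling from two short-term features
# (frame energy in dB and zero-crossing rate); thresholds are illustrative.
import numpy as np

def coarse_label(frame, silence_db=-50, zcr_speech=0.15):
    energy_db = 10 * np.log10(np.mean(frame ** 2) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
    if energy_db < silence_db:
        return "silence"
    if zcr > zcr_speech:
        return "speech"                # speech alternates voiced/unvoiced segments
    return "music_or_environmental"

print(coarse_label(np.random.randn(400) * 0.01))
```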
Refreshing memory traces: thinking of an item improves retrieval from visual working memory.
Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus
2015-03-01
This article provides evidence that refreshing, a hypothetical attention-based process operating in working memory (WM), improves the accessibility of visual representations for recall. Thinking of one of several concurrently active representations is assumed to refresh its trace in WM, protecting the representation from being forgotten. The link between refreshing and WM performance, however, has only been tenuously supported by empirical evidence. Here, we controlled which and how often individual items were refreshed in a color reconstruction task by presenting cues prompting participants to think of specific WM items during the retention interval. We show that the frequency with which an item is refreshed improves recall of this item from visual WM. Our study establishes a role of refreshing in recall from visual WM and provides a new method for studying the impact of refreshing on the amount of information we can keep accessible for ongoing cognition. © 2014 New York Academy of Sciences.
Generic decoding of seen and imagined objects using hierarchical visual features.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-05-22
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
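A minimal sketch of the generic-decoding idea with stand-in data: a linear model predicts visual features from fMRI voxel patterns, and a new pattern is assigned the category whose exemplar features correlate best with the predicted features. It assumes scikit-learn is available; the data, feature dimensions, and category set are fabricated.

```python
# Sketch only: predict visual features from fMRI patterns, then identify the
# category by correlating predicted features with category exemplars.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
X_train = rng.standard_normal((200, 500))   # fMRI patterns (trials x voxels)
F_train = rng.standard_normal((200, 100))   # visual features for those trials

decoder = Ridge(alpha=10.0).fit(X_train, F_train)
pred = decoder.predict(rng.standard_normal((1, 500)))[0]

category_features = {"cat": rng.standard_normal(100),
                     "car": rng.standard_normal(100)}
best = max(category_features,
           key=lambda c: np.corrcoef(pred, category_features[c])[0, 1])
print(best)
```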
MPEG-7 based video annotation and browsing
NASA Astrophysics Data System (ADS)
Hoeynck, Michael; Auweiler, Thorsten; Wellhausen, Jens
2003-11-01
The huge amount of multimedia data produced worldwide requires annotation in order to enable universal content access and to provide content-based search-and-retrieval functionalities. Since manual video annotation can be time-consuming, automatic annotation systems are required. We review recent approaches to content-based indexing and annotation of videos for different kinds of sports and describe our approach to automatic annotation of equestrian sports videos. We especially concentrate on MPEG-7 based feature extraction and content description, where we apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information. Having determined single shot positions as well as the visual highlights, we store this information jointly with meta-textual information in an MPEG-7 description scheme. Based on this information, we generate content summaries which can be utilized in a user-interface in order to provide content-based access to the video stream, and also for media browsing on a streaming server.
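A simple stand-in for the cut-detection step: thresholding the frame-to-frame difference of normalized grayscale histograms. This is not the MPEG-7 descriptor pipeline used in the paper, only an illustration of the principle; the frames and threshold are made up.

```python
# Sketch only: detect shot cuts by thresholding histogram differences
# between consecutive frames (total-variation distance in [0, 1]).
import numpy as np

def detect_cuts(frames, bins=32, thresh=0.4):
    cuts, prev = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()
        if prev is not None and 0.5 * np.abs(hist - prev).sum() > thresh:
            cuts.append(i)
        prev = hist
    return cuts

frames = [np.random.randint(0, 256, (72, 96)) for _ in range(10)]
print(detect_cuts(frames))
```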
Jackson, Margaret C.; Linden, David E. J.; Raymond, Jane E.
2012-01-01
We are often required to filter out distraction in order to focus on a primary task during which working memory (WM) is engaged. Previous research has shown that negative versus neutral distracters presented during a visual WM maintenance period significantly impair memory for neutral information. However, the contents of WM are often also emotional in nature. The question we address here is how incidental information might impact upon visual WM when both this and the memory items contain emotional information. We presented emotional versus neutral words during the maintenance interval of an emotional visual WM faces task. Participants encoded two angry or happy faces into WM, and several seconds into a 9 s maintenance period a negative, positive, or neutral word was flashed on the screen three times. A single neutral test face was presented for retrieval with a face identity that was either present or absent in the preceding study array. WM for angry face identities was significantly better when an emotional (negative or positive) versus neutral (or no) word was presented. In contrast, WM for happy face identities was not significantly affected by word valence. These findings suggest that the presence of emotion within an intervening stimulus boosts the emotional value of threat-related information maintained in visual WM and thus improves performance. In addition, we show that incidental events that are emotional in nature do not always distract from an ongoing WM task. PMID:23112782
Kurtz, Camille; Beaulieu, Christopher F.; Napel, Sandy; Rubin, Daniel L.
2014-01-01
Computer-assisted image retrieval applications could assist radiologist interpretations by identifying similar images in large archives as a means to providing decision support. However, the semantic gap between low-level image features and their high-level semantics may impair the system performances. Indeed, it can be challenging to comprehensively characterize the images using low-level imaging features to fully capture the visual appearance of diseases on images, and recently the use of semantic terms has been advocated to provide semantic descriptions of the visual contents of images. However, most of the existing image retrieval strategies do not consider the intrinsic properties of these terms during the comparison of the images beyond treating them as simple binary (presence/absence) features. We propose a new framework that includes semantic features in images and that enables retrieval of similar images in large databases based on their semantic relations. It is based on two main steps: (1) annotation of the images with semantic terms extracted from an ontology, and (2) evaluation of the similarity of image pairs by computing the similarity between the terms using the Hierarchical Semantic-Based Distance (HSBD) coupled to an ontological measure. The combination of these two steps provides a means of capturing the semantic correlations among the terms used to characterize the images that can be considered as a potential solution to deal with the semantic gap problem. We validate this approach in the context of the retrieval and the classification of 2D regions of interest (ROIs) extracted from computed tomographic (CT) images of the liver. Under this framework, retrieval accuracy of more than 0.96 was obtained on a 30-image dataset using the Normalized Discounted Cumulative Gain (NDCG) index that is a standard technique used to measure the effectiveness of information retrieval algorithms when a separate reference standard is available. Classification results of more than 95% were obtained on a 77-image dataset. For comparison purposes, the use of the Earth Mover's Distance (EMD), which is an alternative distance metric that considers all the existing relations among the terms, led to retrieval accuracy of 0.95 and classification results of 93% with a higher computational cost. The results provided by the presented framework are competitive with the state-of-the-art and emphasize the usefulness of the proposed methodology for radiology image retrieval and classification. PMID:24632078
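For reference, a short sketch of the NDCG index mentioned above for a single query, using linear gains and made-up graded relevance scores of the retrieved ROIs in ranked order.

```python
# Sketch only: Normalized Discounted Cumulative Gain with linear gains.
import numpy as np

def ndcg(relevances, k=None):
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))   # log2(rank + 1)
    dcg = np.sum(rel * discounts)
    ideal = np.sum(np.sort(rel)[::-1] * discounts)          # ideal ordering
    return dcg / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 3, 0, 1, 2]))   # ~0.96 for this (near-ideal) ranking
```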
Combined Ozone Retrieval From METOP Sensors Using META-Training Of Deep Neural Networks
NASA Astrophysics Data System (ADS)
Felder, Martin; Sehnke, Frank; Kaifel, Anton
2013-12-01
The newest installment of our well-proven Neural Network Ozone Retrieval System (NNORSY) combines the METOP sensors GOME-2 and IASI with cloud information from AVHRR. Through the use of advanced meta-learning techniques like automatic feature selection and automatic architecture search applied to a set of deep neural networks, having at least two or three hidden layers, we have been able to avoid many technical issues normally encountered during the construction of such a joint retrieval system. This has been made possible by harnessing the processing power of modern consumer graphics cards with high-performance graphics processors (GPUs), which decreases training times by about two orders of magnitude. The system was trained on data from 2009 and 2010, including target ozone profiles from ozone sondes, ACE-FTS and MLS-AURA. To make maximum use of tropospheric information in the spectra, the data were partitioned into several sets of different cloud fraction ranges within the GOME-2 FOV, on which specialized retrieval networks are being trained. For the final ozone retrieval processing the different specialized networks are combined. The resulting retrieval system is very stable and does not show any systematic dependence on solar zenith angle, scan angle or sensor degradation. We present several sensitivity studies with regard to cloud fraction and target sensor type, as well as the performance in several latitude bands and with respect to independent validation stations. A visual cross-comparison against high-resolution ozone profiles from the KNMI EUMETSAT Ozone SAF product has also been performed and shows some distinctive features which we will briefly discuss. Overall, we demonstrate that a complex retrieval system can now be constructed with a minimum of machine learning knowledge, using automated algorithms for many design decisions previously requiring expert knowledge. Provided sufficient training data and GPU computing power are available, the method can be applied to almost any kind of retrieval or, more generally, regression problem.
NASA Astrophysics Data System (ADS)
Likova, Lora T.
2015-03-01
This study is based on the recent discovery of massive and well-structured cross-modal memory activation generated in the primary visual cortex (V1) of totally blind people as a result of novel training in drawing without any vision (Likova, 2012). This unexpected functional reorganization of primary visual cortex was obtained after only a week of training with the novel Cognitive-Kinesthetic Method, and was consistent across pilot groups of different categories of visual deprivation: congenitally blind, late-onset blind and blindfolded (Likova, 2014). These findings led us to implicate V1 as the implementation of the theoretical visuo-spatial 'sketchpad' for working memory in the human brain. Since neither the source nor the subsequent 'recipient' of this non-visual memory information in V1 is known, these results raise a number of important questions about the underlying functional organization of the respective encoding and retrieval networks in the brain. To address these questions, an individual totally blind from birth was given a week of Cognitive-Kinesthetic training, accompanied by functional magnetic resonance imaging (fMRI) both before and just after training, and again after a two-month consolidation period. The results revealed a remarkable temporal sequence of training-based response reorganization in both the hippocampal complex and the temporal-lobe object processing hierarchy over the prolonged consolidation period. In particular, a pattern of profound learning-based transformations in the hippocampus was strongly reflected in V1, with the retrieval function showing massive growth as a result of the Cognitive-Kinesthetic memory training and consolidation, while the initially strong hippocampal response during tactile exploration and encoding became non-existent. Furthermore, after training, an alternating patch structure in the form of a cascade of discrete ventral regions underwent radical transformations to reach complete functional specialization in terms of either encoding or retrieval as a function of the stage of learning. Moreover, several distinct patterns of learning evolution emerged within the patches as a function of their anatomical location, implying a complex reorganization of the object-processing sub-networks through the learning period. These first findings of complex patterns of training-based encoding/retrieval reorganization thus have broad implications for a newly emerging view of the perception/memory interactions and their reorganization through the learning process. Note that the temporal evolution of these forms of extended functional reorganization could not be uncovered with conventional assessment paradigms used in the traditional approaches to functional mapping, which may therefore have to be revisited. Moreover, as the present results are obtained in learning under life-long blindness, they imply modality-independent operations, transcending the usual tight association with visual processing. The present approach of memory drawing training in blindness has the dual advantage of being both non-visual and a causal intervention, which makes it a promising 'scalpel' to disentangle interactions among diverse cognitive functions.
Biasing spatial attention with semantic information: an event coding approach.
Amer, Tarek; Gozli, Davood G; Pratt, Jay
2017-04-21
We investigated the influence of conceptual processing on visual attention from the standpoint of Theory of Event Coding (TEC). The theory makes two predictions: first, an important factor in determining the influence of event 1 on processing event 2 is whether features of event 1 are bound into a unified representation (i.e., selection or retrieval of event 1). Second, whether processing the two events facilitates or interferes with each other should depend on the extent to which their constituent features overlap. In two experiments, participants performed a visual-attention cueing task, in which the visual target (event 2) was preceded by a relevant or irrelevant explicit (e.g., "UP") or implicit (e.g., "HAPPY") spatial-conceptual cue (event 1). Consistent with TEC, we found relevant explicit cues (which featurally overlap to a greater extent with the target) and implicit cues (which featurally overlap to a lesser extent), respectively, facilitated and interfered with target processing at compatible locations. Irrelevant explicit and implicit cues, on the other hand, both facilitated target processing, presumably because they were less likely selected or retrieved as an integrated and unified event file. We argue that such effects, often described as "attentional cueing", are better accounted for within the event coding framework.
Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.
Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong
2016-08-01
The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.
On the role of spatial phase and phase correlation in vision, illusion, and cognition
Gladilin, Evgeny; Eils, Roland
2015-01-01
Numerous findings indicate that spatial phase bears important cognitive information. Distortion of phase affects the topology of edge structures and makes images unrecognizable. In turn, appropriately phase-structured patterns give rise to various illusions of virtual image content and apparent motion. Despite a large body of phenomenological evidence, not much is yet known about the role of phase information in neural mechanisms of visual perception and cognition. Here, we are concerned with analysis of the role of spatial phase in computational and biological vision, emergence of visual illusions and pattern recognition. We hypothesize that the fundamental importance of phase information for invariant retrieval of structural image features and motion detection promoted the development of phase-based mechanisms of neural image processing in the course of evolution of biological vision. Using an extension of the Fourier phase correlation technique, we show that core functions of the visual system such as motion detection and pattern recognition can be facilitated by the same basic mechanism. Our analysis suggests that the emergence of visual illusions can be attributed to the presence of coherently phase-shifted repetitive patterns as well as the effects of acuity compensation by saccadic eye movements. We speculate that biological vision relies on perceptual mechanisms effectively similar to phase correlation, and predict neural features of visual pattern (dis)similarity that can be used for experimental validation of our hypothesis of “cognition by phase correlation.” PMID:25954190
Public health nurse perceptions of Omaha System data visualization.
Lee, Seonah; Kim, Era; Monsen, Karen A
2015-10-01
Electronic health records (EHRs) provide many benefits related to the storage, deployment, and retrieval of large amounts of patient data. However, EHRs have not fully met the need to reuse data for decision making on follow-up care plans. Visualization offers new ways to present health data, especially in EHRs. Well-designed data visualization allows clinicians to communicate information efficiently and effectively, contributing to improved interpretation of clinical data and better patient care monitoring and decision making. Public health nurse (PHN) perceptions of Omaha System data visualization prototypes for use in EHRs have not been evaluated. To visualize PHN-generated Omaha System data and assess PHN perceptions regarding the visual validity, helpfulness, usefulness, and importance of the visualizations, including interactive functionality. Time-oriented visualization for problems and outcomes and Matrix visualization for problems and interventions were developed using PHN-generated Omaha System data to help PHNs consume data and plan care at the point of care. Eleven PHNs evaluated prototype visualizations. Overall PHNs response to visualizations was positive, and feedback for improvement was provided. This study demonstrated the potential for using visualization techniques within EHRs to summarize Omaha System patient data for clinicians. Further research is needed to improve and refine these visualizations and assess the potential to incorporate visualizations within clinical EHRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
VidCat: an image and video analysis service for personal media management
NASA Astrophysics Data System (ADS)
Begeja, Lee; Zavesky, Eric; Liu, Zhu; Gibbon, David; Gopalan, Raghuraman; Shahraray, Behzad
2013-03-01
Cloud-based storage and consumption of personal photos and videos provides increased accessibility, functionality, and satisfaction for mobile users. One recently growing frontier of cloud services is personal media management. This work presents a system called VidCat that assists users in the tagging, organization, and retrieval of their personal media by faces and visual content similarity, time, and date information. Evaluations of the effectiveness of the copy detection and face recognition algorithms on standard datasets are also discussed. Finally, the system includes a set of application programming interfaces (APIs) allowing content to be uploaded, analyzed, and retrieved on any client with simple HTTP-based methods, as demonstrated with a prototype developed on the iOS and Android mobile platforms.
The aftermath of memory retrieval for recycling visual working memory representations.
Park, Hyung-Bum; Zhang, Weiwei; Hyun, Joo-Seok
2017-07-01
We examined the aftermath of accessing and retrieving a subset of information stored in visual working memory (VWM)-namely, whether detection of a mismatch between memory and perception can impair the original memory of an item while triggering recognition-induced forgetting for the remaining, untested items. For this purpose, we devised a consecutive-change detection task wherein two successive testing probes were displayed after a single set of memory items. Across two experiments utilizing different memory-testing methods (whole vs. single probe), we observed a reliable pattern of poor performance in change detection for the second test when the first test had exhibited a color change. The impairment after a color change was evident even when the same memory item was repeatedly probed; this suggests that an attention-driven, salient visual change made it difficult to reinstate the previously remembered item. The second change detection, for memory items untested during the first change detection, was also found to be inaccurate, indicating that recognition-induced forgetting had occurred for the unprobed items in VWM. In a third experiment, we conducted a task that involved change detection plus continuous recall, wherein a memory recall task was presented after the change detection task. The analyses of the distributions of recall errors with a probabilistic mixture model revealed that the memory impairments from both visual changes and recognition-induced forgetting are explained better by the stochastic loss of memory items than by their degraded resolution. These results indicate that attention-driven visual change and recognition-induced forgetting jointly influence the "recycling" of VWM representations.
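For illustration, the sketch below fits the kind of two-component mixture commonly applied to continuous-recall errors (a von Mises memory component plus a uniform guessing component). The data, starting values, and parameter bounds are invented; the exact model specification used in the study is not reproduced here.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import vonmises

    def fit_mixture(errors):
        """Fit p(error) = (1-g)*VonMises(0, kappa) + g*Uniform(-pi, pi)
        to recall errors (radians) by maximum likelihood."""
        def neg_log_lik(params):
            g, kappa = params
            pdf = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
            return -np.sum(np.log(pdf + 1e-12))
        res = minimize(neg_log_lik, x0=[0.2, 5.0],
                       bounds=[(1e-3, 1 - 1e-3), (1e-2, 100.0)])
        guess_rate, kappa = res.x
        return guess_rate, kappa

    # Toy data: 70% of trials remembered (concentrated errors), 30% random guesses
    rng = np.random.default_rng(1)
    remembered = vonmises.rvs(8.0, size=700, random_state=rng)
    guesses = rng.uniform(-np.pi, np.pi, size=300)
    print(fit_mixture(np.concatenate([remembered, guesses])))  # roughly (0.3, 8)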
Dickson, Danielle S; Federmeier, Kara D
2018-05-22
Arabic numerals have come to be used for many purposes beyond representing a particular quantity (e.g., as a label for an athlete on their jersey), but it remains to be determined how this type of meaningfulness is accessed and utilized by readers. Motivated by previous work showing that item-level ratings of personal familiarity can influence traditional indices of memory retrieval, we recorded ERPs while participants read double-digit Arabic numerals (e.g., "65"), presented in a list, and rated whether or not each was familiar/personally meaningful. All numbers repeated after a few intervening trials. The effect of number repetition on the N400 was not impacted by subjective judgments of familiarity, suggesting that all numbers (personally meaningful or not) make initial contact with semantics, facilitating semantic access on second exposure. However, consistent with findings from prior studies of memory for letter strings and visual patterns, there was a late positivity (LPC) on second presentation, selective to numbers rated as familiar. This is the first electrophysiological evidence that readers can use Arabic numerals to guide explicit retrieval of non-numerical information. Copyright © 2018 Elsevier Ltd. All rights reserved.
How can knowledge discovery methods uncover spatio-temporal patterns in environmental data?
NASA Astrophysics Data System (ADS)
Wachowicz, Monica
2000-04-01
This paper proposes the integration of KDD, GVis and STDB as a long-term strategy, which will allow users to apply knowledge discovery methods for uncovering spatio-temporal patterns in environmental data. The main goal is to combine innovative techniques and associated tools for exploring very large environmental data sets in order to arrive at valid, novel, potentially useful, and ultimately understandable spatio-temporal patterns. The GeoInsight approach is described using the principles and key developments in the research domains of KDD, GVis, and STDB. The GeoInsight approach aims at the integration of these research domains in order to provide tools for performing information retrieval, exploration, analysis, and visualization. The result is a knowledge-based design, which involves visual thinking (perceptual-cognitive process) and automated information processing (computer-analytical process).
Using open-source programs to create a web-based portal for hydrologic information
NASA Astrophysics Data System (ADS)
Kim, H.
2013-12-01
Some hydrologic data sets, such as basin climatology, precipitation, and terrestrial water storage, are not easily obtainable and distributable due to their size and complexity. We present a Hydrologic Information Portal (HIP) that has been implemented at the University of California for Hydrologic Modeling (UCCHM) and that has been organized around the large river basins of North America. This portal can be easily accessed through a modern web browser that enables easy access and visualization of such hydrologic data sets. Some of the main features of our HIP include a set of data visualization features so that users can search, retrieve, analyze, integrate, organize, and map data within large river basins. Recent information technologies such as Google Maps, Tornado (Python asynchronous web server), NumPy/SciPy (Scientific Library for Python) and d3.js (Visualization library for JavaScript) were incorporated into the HIP to create ease in navigating large data sets. With such open source libraries, HIP can give public users a way to combine and explore various data sets by generating multiple chart types (Line, Bar, Pie, Scatter plot) directly from the Google Maps viewport. Every rendered object such as a basin shape on the viewport is clickable, and this is the first step to access the visualization of data sets.
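As a rough sketch of the server side of such a portal, the example below uses Tornado (one of the open-source components named above) to serve a basin time series as JSON that a browser-side d3.js chart could consume; the route and the precipitation values are hypothetical.

    import tornado.ioloop
    import tornado.web

    # Hypothetical in-memory stand-in for a basin precipitation time series
    BASIN_DATA = {
        "columbia": [{"month": "2013-01", "precip_mm": 110.2},
                     {"month": "2013-02", "precip_mm": 95.7}],
    }

    class BasinHandler(tornado.web.RequestHandler):
        def get(self, basin_id):
            # Tornado serializes dicts to JSON and sets the content type
            self.write({"basin": basin_id, "series": BASIN_DATA.get(basin_id, [])})

    def make_app():
        return tornado.web.Application([(r"/basins/([a-z]+)", BasinHandler)])

    if __name__ == "__main__":
        make_app().listen(8888)
        tornado.ioloop.IOLoop.current().start()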
Retrieving the unretrievable in electronic imaging systems: emotions, themes, and stories
NASA Astrophysics Data System (ADS)
Joergensen, Corinne
1999-05-01
New paradigms such as 'affective computing' and user-based research are extending the realm of facets traditionally addressed in IR systems. This paper builds on previous research reported to the electronic imaging community concerning the need to provide access to more abstract attributes of images than those currently amenable to a variety of content-based and text-based indexing techniques. Empirical research suggests that, for visual materials, in addition to standard bibliographic data and broad subject, and in addition to visually perceptual attributes such as color, texture, shape, and position or focal point, additional access points such as themes, abstract concepts, emotions, stories, and 'people-related' information such as social status would be useful in image retrieval. More recent research demonstrates that similar results are also obtained with 'fine arts' images, which generally have no access provided for these types of attributes. Current efforts to match image attributes as revealed in empirical research with those addressed in current textual and content-based indexing systems are discussed, as well as the need for new representations of image attributes and for collaboration among diverse communities of researchers.
Visual Information-Processing in the Perception of Features and Objects
1989-01-05
or nodes in a semantic memory network, whereas recall and recognition depend on separate episodic memory traces. In our experiment, we used the same ... problem for the account in terms of the separation of episodic from semantic memory, since no pre-existing representations of our line patterns were ... semantic memory: amnesic patients were thought to have lost the ability to lay down (or retrieve) episodic traces of autobiographical events, but had
Campana, Florence; Rebollo, Ignacio; Urai, Anne; Wyart, Valentin; Tallon-Baudry, Catherine
2016-05-11
The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. Is information encoded at different levels of the visual system (local details in low-level areas vs global shapes in high-level areas) equally likely to become conscious? We designed new hierarchical stimuli and provide the first empirical evidence based on behavioral and MEG data that global information encoded at high levels of the visual hierarchy dominates perception. This result held both in the presence and in the absence of task demands. The preferential emergence of percepts at high levels can account for two properties of conscious vision, namely, the dominance of global percepts and the feeling of visual richness reported independently of the perception of local details. Copyright © 2016 the authors 0270-6474/16/365200-14$15.00/0.
NASA Astrophysics Data System (ADS)
Antani, Sameer K.; Natarajan, Mukil; Long, Jonathan L.; Long, L. Rodney; Thoma, George R.
2005-04-01
The article describes the status of our ongoing R&D at the U.S. National Library of Medicine (NLM) towards the development of an advanced multimedia database biomedical information system that supports content-based image retrieval (CBIR). NLM maintains a collection of 17,000 digitized spinal X-rays along with text survey data from the Second National Health and Nutritional Examination Survey (NHANES II). These data serve as a rich data source for epidemiologists and researchers of osteoarthritis and musculoskeletal diseases. It is currently possible to access these through text keyword queries using our Web-based Medical Information Retrieval System (WebMIRS). CBIR methods developed specifically for biomedical images could offer direct visual searching of these images by means of example image or user sketch. We are building a system which supports hybrid queries that have text and image-content components. R&D goals include developing algorithms for robust image segmentation for localizing and identifying relevant anatomy, labeling the segmented anatomy based on its pathology, developing suitable indexing and similarity matching methods for images and image features, and associating the survey text information for query and retrieval along with the image data. Some highlights of the system developed in MATLAB and Java are: use of a networked or local centralized database for text and image data; flexibility to incorporate new research work; provides a means to control access to system components under development; and use of XML for structured reporting. The article details the design, features, and algorithms in this third revision of this prototype system, CBIR3.
Fargier, Raphaël; Laganaro, Marina
2017-03-01
Picture naming tasks are widely used to elicit the production of specific words and sentences in psycholinguistic and neuroimaging research. However, the generation of lexical concepts from a visual input is clearly not the exclusive way speech production is triggered. In inferential speech encoding, the concept is not provided by a visual input, but is elaborated through semantic and/or episodic associations. It is therefore likely that the cognitive operations leading to lexical selection and word encoding are different in inferential and referential expressive language. In particular, in picture naming, lexical selection might ensue from a simple association between a perceptual visual representation and a word with minimal semantic processes, whereas richer semantic associations are involved in lexical retrieval in inferential situations. Here we address this hypothesis by analyzing ERP correlates during word production in a referential and an inferential task. The participants produced the same words elicited from pictures or from short written definitions. The two tasks displayed similar electrophysiological patterns only in the time-period preceding the verbal response. In the stimulus-locked ERPs, waveform amplitudes and periods of stable global electrophysiological patterns differed across tasks after the P100 component and until 400-500 ms, suggesting the involvement of different, task-specific neural networks. Based on the analysis of the time-windows affected by specific semantic and lexical variables in each task, we conclude that lexical selection is underpinned by a different set of conceptual and brain processes, with semantic processes clearly preceding word retrieval in naming from definition whereas the semantic information is enriched in parallel with word retrieval in picture naming.
Chiang, Hsueh-Sheng; Eroh, Justin; Spence, Jeffrey S; Motes, Michael A; Maguire, Mandy J; Krawczyk, Daniel C; Brier, Matthew R; Hart, John; Kraut, Michael A
2016-08-01
How the brain combines the neural representations of features that comprise an object in order to activate a coherent object memory is poorly understood, especially when the features are presented in different modalities (visual vs. auditory) and domains (verbal vs. nonverbal). We examined this question using three versions of a modified Semantic Object Retrieval Test, where object memory was probed by a feature presented as a written word, a spoken word, or a picture, followed by a second feature always presented as a visual word. Participants indicated whether each feature pair elicited retrieval of the memory of a particular object. Sixteen subjects completed one of the three versions (N=48 in total) while their EEG were recorded simultaneously. We analyzed EEG data in four separate frequency bands (delta: 1-4Hz, theta: 4-7Hz; alpha: 8-12Hz; beta: 13-19Hz) using a multivariate data-driven approach. We found that alpha power time-locked to response was modulated by both cross-modality (visual vs. auditory) and cross-domain (verbal vs. nonverbal) probing of semantic object memory. In addition, retrieval trials showed greater changes in all frequency bands compared to non-retrieval trials across all stimulus types in both response-locked and stimulus-locked analyses, suggesting dissociable neural subcomponents involved in binding object features to retrieve a memory. We conclude that these findings support both modality/domain-dependent and modality/domain-independent mechanisms during semantic object memory retrieval. Copyright © 2016 Elsevier B.V. All rights reserved.
Vergauwe, Evie; Cowan, Nelson
2015-01-01
We compared two contrasting hypotheses of how multi-featured objects are stored in visual working memory (vWM): as integrated objects or as independent features. A new procedure was devised to examine vWM representations of several concurrently-held objects and their features and our main measure was reaction time (RT), allowing an examination of the real-time search through features and/or objects in an array in vWM. Response speeds to probes with color, shape or both were studied as a function of the number of memorized colored shapes. Four testing groups were created by varying the instructions and the way in which probes with both color and shape were presented. The instructions explicitly either encouraged or discouraged the use of binding information and the task-relevance of binding information was further suggested by presenting probes with both color and shapes as either integrated objects or independent features. Our results show that the unit used for retrieval from vWM depends on the testing situation. Search was fully object-based only when all factors support that basis of search, in which case retrieving two features took no longer than retrieving a single feature. Otherwise, retrieving two features took longer than retrieving a single feature. Additional analyses of change detection latency suggested that, even though different testing situations can result in a stronger emphasis on either the feature dimension or the object dimension, neither one disappears from the representation and both concurrently affect change detection performance. PMID:25705873
A Unified Mathematical Definition of Classical Information Retrieval.
ERIC Educational Resources Information Center
Dominich, Sandor
2000-01-01
Presents a unified mathematical definition for the classical models of information retrieval and identifies a mathematical structure behind relevance feedback. Highlights include vector information retrieval; probabilistic information retrieval; and similarity information retrieval. (Contains 118 references.) (Author/LRW)
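To make the classical vector model concrete, the sketch below ranks a few toy documents against a query by TF-IDF cosine similarity with scikit-learn; this is one common instantiation of vector retrieval, not the unified formalism developed in the paper, and the documents and query are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "content based image retrieval in medical imaging",
        "probabilistic models of information retrieval",
        "relevance feedback in the vector space model",
    ]
    query = ["vector space retrieval with relevance feedback"]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform(query)

    # Rank documents by cosine similarity to the query (classical vector model)
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    ranking = scores.argsort()[::-1]
    print([(documents[i], round(float(scores[i]), 3)) for i in ranking])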
Parallel pathways for cross-modal memory retrieval in Drosophila.
Zhang, Xiaonan; Ren, Qingzhong; Guo, Aike
2013-05-15
Memory-retrieval processing of cross-modal sensory preconditioning is vital for understanding the plasticity underlying the interactions between modalities. As part of the sensory preconditioning paradigm, it has been hypothesized that the conditioned response to an unreinforced cue depends on the memory of the reinforced cue via a sensory link between the two cues. To test this hypothesis, we studied cross-modal memory-retrieval processing in a genetically tractable model organism, Drosophila melanogaster. By expressing the dominant temperature-sensitive shibire(ts1) (shi(ts1)) transgene, which blocks synaptic vesicle recycling of specific neural subsets with the Gal4/UAS system at the restrictive temperature, we specifically blocked visual and olfactory memory retrieval, either alone or in combination; memory acquisition remained intact for these modalities. Blocking the memory retrieval of the reinforced olfactory cues did not impair the conditioned response to the unreinforced visual cues or vice versa, in contrast to the canonical memory-retrieval processing of sensory preconditioning. In addition, these conditioned responses can be abolished by blocking the memory retrieval of the two modalities simultaneously. In sum, our results indicated that a conditioned response to an unreinforced cue in cross-modal sensory preconditioning can be recalled through parallel pathways.
Estimated capacity of object files in visual short-term memory is not improved by retrieval cueing.
Saiki, Jun; Miyatsuji, Hirofumi
2009-03-23
Visual short-term memory (VSTM) has been claimed to maintain three to five feature-bound object representations. Some results showing smaller capacity estimates for feature binding memory have been interpreted as the effects of interference in memory retrieval. However, change-detection tasks may not properly evaluate complex feature-bound representations such as triple conjunctions in VSTM. To understand the general type of feature-bound object representation, evaluation of triple conjunctions is critical. To test whether interference occurs in memory retrieval for complete object file representations in a VSTM task, we cued retrieval in novel paradigms that directly evaluate the memory for triple conjunctions, in comparison with a simple change-detection task. In our multiple object permanence tracking displays, observers monitored for a switch in feature combination between objects during an occlusion period, and we found that a retrieval cue provided no benefit with the triple conjunction tasks, but significant facilitation with the change-detection task, suggesting that low capacity estimates of object file memory in VSTM reflect a limit on maintenance, not retrieval.
Development and validation of satellite-based estimates of surface visibility
NASA Astrophysics Data System (ADS)
Brunner, J.; Pierce, R. B.; Lenzen, A.
2016-02-01
A satellite-based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5 % for classifying clear (V ≥ 30 km), moderate (10 km ≤ V < 30 km), low (2 km ≤ V < 10 km), and poor (V < 2 km) visibilities and shows the most skill during June through September, when Heidke skill scores are between 0.2 and 0.4. We demonstrate that the aerosol (clear-sky) component of the GOES-R ABI visibility retrieval can be used to augment measurements from the United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.
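The regression-then-classification pipeline described above can be sketched as follows; the visibility categories and the Heidke skill score follow the abstract, but the predictors and data are synthetic stand-ins rather than the actual MODIS/ASOS inputs.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def visibility_class(v_km):
        # Categories from the abstract: 0=poor (<2 km), 1=low (<10), 2=moderate (<30), 3=clear (>=30)
        return np.digitize(v_km, [2.0, 10.0, 30.0])

    def heidke_skill_score(obs_cls, pred_cls, n_classes=4):
        table = np.zeros((n_classes, n_classes))
        for o, p in zip(obs_cls, pred_cls):
            table[o, p] += 1
        n = table.sum()
        pc = np.trace(table) / n                                # proportion correct
        expected = (table.sum(0) * table.sum(1)).sum() / n**2   # chance agreement
        return (pc - expected) / (1 - expected)

    # Synthetic stand-ins for AOD, fog/low-cloud probability, and an NWP variable
    rng = np.random.default_rng(2)
    X = rng.random((500, 3))
    asos_visibility = np.clip(40 - 35 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 3, 500), 0.1, 60)

    model = LinearRegression().fit(X, asos_visibility)
    retrieved = np.clip(model.predict(X), 0.1, 60)
    print(heidke_skill_score(visibility_class(asos_visibility), visibility_class(retrieved)))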
Location-Driven Image Retrieval for Images Collected by a Mobile Robot
NASA Astrophysics Data System (ADS)
Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji
Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well images taken by the mobile robot are visualized to the user. To enhance the efficiency and flexibility of the visualization, an image retrieval system over such a robot’s image database would be very useful. The main difference between the robot’s image database and standard image databases is that various relevant images exist due to the variety of viewing conditions. The main contribution of this paper is to propose an efficient retrieval approach, named the location-driven approach, utilizing the correlation between visual features and the real-world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machines is extended for this aim.
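A much-simplified sketch of the feature-location idea: treat each image as a pair of visual features and capture location, and train an SVM to separate relevant from irrelevant pairs. The active-learning loop of the paper is omitted, and the features, poses, and labels below are synthetic.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)

    # Each image is described by a visual feature vector and the robot's (x, y)
    # pose when it was captured; relevance labels would come from user feedback.
    visual_features = rng.random((200, 16))
    locations = rng.random((200, 2)) * 10.0
    labels = (locations[:, 0] < 5.0).astype(int)  # toy ground truth tied to location

    # Feature-location pairs, as in the location-driven approach
    pairs = np.hstack([visual_features, locations])
    classifier = SVC(kernel="rbf", probability=True).fit(pairs, labels)

    # Score unseen images: probability of being relevant to the user's query
    query_pairs = np.hstack([rng.random((5, 16)), rng.random((5, 2)) * 10.0])
    print(classifier.predict_proba(query_pairs)[:, 1])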
Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval.
Wei, Xiu-Shen; Luo, Jian-Hao; Wu, Jianxin; Zhou, Zhi-Hua
2017-06-01
Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone the unsupervised retrieval task. We propose the selective convolutional descriptor aggregation (SCDA) method. The SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and the dimensionality is reduced into a short feature vector using the best practices we found. The SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained data sets confirm the effectiveness of the SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval data sets, the SCDA achieves retrieval results comparable with the state-of-the-art general image retrieval approaches.
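A simplified sketch of the selection-and-aggregation idea follows: keep convolutional descriptors whose channel-summed activation exceeds the mean (a rough object mask), then average- and max-pool the survivors into one retrieval vector. The real SCDA additionally uses connected-component selection and dimensionality reduction, both omitted here, and the feature map below is random rather than CNN output.

    import numpy as np

    def scda_like_descriptor(feature_map):
        """feature_map: (H, W, C) activations from the last conv layer of a CNN.

        Selects spatial positions whose aggregated activation exceeds the mean
        (a rough object/background mask), then pools the kept descriptors.
        """
        activation = feature_map.sum(axis=2)             # (H, W) aggregation map
        mask = activation > activation.mean()            # keep likely object region
        selected = feature_map[mask]                      # (N_kept, C) descriptors
        if selected.size == 0:                            # degenerate case: keep all
            selected = feature_map.reshape(-1, feature_map.shape[2])
        pooled = np.concatenate([selected.mean(axis=0), selected.max(axis=0)])
        return pooled / (np.linalg.norm(pooled) + 1e-12)  # L2-normalize for retrieval

    # Toy usage with a random "feature map"; real input would come from a CNN
    fmap = np.random.default_rng(4).random((14, 14, 512))
    print(scda_like_descriptor(fmap).shape)  # (1024,)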
GENESIS: GPS Environmental and Earth Science Information System
NASA Technical Reports Server (NTRS)
Hajj, George
1999-01-01
This presentation reviews the GPS ENvironmental and Earth Science Information System (GENESIS). The objectives of GENESIS are outlined: (1) data archiving, searching, and distribution for science data products derived from spaceborne TurboRogue Space Receivers for GPS science and from other ground-based GPS receivers; (2) data browsing using integrated visualization tools; (3) interactive web/Java-based data search and retrieval; (4) a data subscription service; (5) data migration from existing GPS data archives; (6) on-line help and documentation; and (7) participation in the WP-ESIP federation. The presentation also reviews the products and services of GENESIS and the technology behind the system.
Dummel, Sebastian; Rummel, Jan
2016-11-01
Take-the-best (TTB) is a decision strategy according to which attributes about choice options are sequentially processed in descending order of validity, and attribute processing is stopped once an attribute discriminates between options. Consequently, TTB-decisions rely on only one, the best discriminating, attribute, and lower-valid attributes need not be processed because they are TTB-irrelevant. Recent research suggests, however, that when attribute information is visually present during decision-making, TTB-irrelevant attributes are processed and integrated into decisions nonetheless. To examine whether TTB-irrelevant attributes are retrieved and integrated when decisions are made memory-based, we tested whether the consistency of a TTB-irrelevant attribute affects TTB-users' decision behaviour in a memory-based decision task. Participants first learned attribute configurations of several options. Afterwards, they made several decisions between two of the options, and we manipulated conflict between the second-best attribute and the TTB-decision. We assessed participants' decision confidence and the proportion of TTB-inconsistent choices. According to TTB, TTB-irrelevant attributes should not affect confidence and choices, because these attributes should not be retrieved. Results showed, however, that TTB-users were less confident and made more TTB-inconsistent choices when TTB-irrelevant information was in conflict with the TTB-decision than when it was not, suggesting that TTB-users retrieved and integrated TTB-irrelevant information.
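To make the notion of a TTB-irrelevant attribute concrete, here is a compact sketch of the TTB rule itself: attributes are checked in descending validity, and the first discriminating attribute alone decides. The attribute names, validities, and cue values are hypothetical.

    def take_the_best(option_a, option_b, attributes):
        """attributes: list of (name, validity) pairs.
        option_a / option_b: dicts mapping attribute name -> 0/1 cue value.
        Returns the chosen option and the single attribute that decided."""
        for name, validity in sorted(attributes, key=lambda a: a[1], reverse=True):
            if option_a[name] != option_b[name]:
                chosen = "A" if option_a[name] > option_b[name] else "B"
                return chosen, name     # all lower-validity attributes are ignored
        return "guess", None            # no attribute discriminates

    attributes = [("best_cue", 0.9), ("second_cue", 0.7), ("third_cue", 0.6)]
    option_a = {"best_cue": 1, "second_cue": 0, "third_cue": 0}
    option_b = {"best_cue": 0, "second_cue": 1, "third_cue": 1}
    # "A" is chosen on best_cue alone; the conflicting second_cue is TTB-irrelevant
    print(take_the_best(option_a, option_b, attributes))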
Discriminative Multi-View Interactive Image Re-Ranking.
Li, Jun; Xu, Chang; Yang, Wankou; Sun, Changyin; Tao, Dacheng
2017-07-01
Given unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose discriminative multi-view interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users' intentions with multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in the multi-view learning scheme to exploit their complementarities. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding by the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with the other state-of-the-art re-ranking strategies.
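A heavily reduced sketch of re-ranking from relevance feedback: concatenate per-view features, learn a large-margin weight vector from the user's relevant/irrelevant marks, and re-score the candidate list. The latent multi-view projection of DMINTIR is not reproduced, and the two "views" and feedback labels below are synthetic.

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(5)

    # Two hypothetical "views" (e.g., colour and texture features) per image
    view_colour = rng.random((300, 32))
    view_texture = rng.random((300, 64))
    features = np.hstack([view_colour, view_texture])    # simple multi-view fusion

    # User feedback on the first 20 results of an initial search
    feedback_idx = np.arange(20)
    feedback_labels = np.array([1] * 10 + [0] * 10)      # 1 = relevant, 0 = not

    svm = LinearSVC(C=1.0).fit(features[feedback_idx], feedback_labels)

    # Re-rank the whole candidate set by the learned discriminant score
    scores = svm.decision_function(features)
    reranked = np.argsort(scores)[::-1]
    print(reranked[:10])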
Ma, Bosen; Wang, Xiaoyun; Li, Degao
2015-01-01
To separate the contribution of phonological from that of visual-orthographic information in the recognition of a Chinese word that is composed of one or two Chinese characters, we conducted two experiments in a priming task of semantic categorization (PTSC), in which length (one- or two-character words), relation, prime (related or unrelated prime-target pairs), and SOA (47, 87, or 187 ms) were manipulated. The prime was similar to the target in meaning or in visual configuration in Experiment A and in meaning or in pronunciation in Experiment B. The results indicate that the two-character words were similar to the one-character words but were less demanding of cognitive resources than the one-character words in the processing of phonological, visual-orthographic, and semantic information. The phonological primes had a facilitating effect at the SOA of 47 ms but an inhibitory effect at the SOA of 187 ms on the participants' reaction times; the visual-orthographic primes only had an inhibitory influence on the participants' reaction times at the SOA of 187 ms. The visual configuration of a Chinese word of one or two Chinese characters has its own contribution in helping retrieve the word's meanings; similarly, the phonological configuration of a one- or two-character word plays its own role in triggering activations of the word's semantic representations.
Tilahun, Binyam; Kauppinen, Tomi; Keßler, Carsten; Fritz, Fleur
2014-10-25
Healthcare organizations around the world are challenged by pressures to reduce cost, improve coordination and outcome, and provide more with less. This requires effective planning and evidence-based practice by generating important information from available data. Thus, flexible and user-friendly ways to represent, query, and visualize health data becomes increasingly important. International organizations such as the World Health Organization (WHO) regularly publish vital data on priority health topics that can be utilized for public health policy and health service development. However, the data in most portals is displayed in either Excel or PDF formats, which makes information discovery and reuse difficult. Linked Open Data (LOD)-a new Semantic Web set of best practice of standards to publish and link heterogeneous data-can be applied to the representation and management of public level health data to alleviate such challenges. However, the technologies behind building LOD systems and their effectiveness for health data are yet to be assessed. The objective of this study is to evaluate whether Linked Data technologies are potential options for health information representation, visualization, and retrieval systems development and to identify the available tools and methodologies to build Linked Data-based health information systems. We used the Resource Description Framework (RDF) for data representation, Fuseki triple store for data storage, and Sgvizler for information visualization. Additionally, we integrated SPARQL query interface for interacting with the data. We primarily use the WHO health observatory dataset to test the system. All the data were represented using RDF and interlinked with other related datasets on the Web of Data using Silk-a link discovery framework for Web of Data. A preliminary usability assessment was conducted following the System Usability Scale (SUS) method. We developed an LOD-based health information representation, querying, and visualization system by using Linked Data tools. We imported more than 20,000 HIV-related data elements on mortality, prevalence, incidence, and related variables, which are freely available from the WHO global health observatory database. Additionally, we automatically linked 5312 data elements from DBpedia, Bio2RDF, and LinkedCT using the Silk framework. The system users can retrieve and visualize health information according to their interests. For users who are not familiar with SPARQL queries, we integrated a Linked Data search engine interface to search and browse the data. We used the system to represent and store the data, facilitating flexible queries and different kinds of visualizations. The preliminary user evaluation score by public health data managers and users was 82 on the SUS usability measurement scale. The need to write queries in the interface was the main reported difficulty of LOD-based systems to the end user. The system introduced in this article shows that current LOD technologies are a promising alternative to represent heterogeneous health data in a flexible and reusable manner so that they can serve intelligent queries, and ultimately support decision-making. However, the development of advanced text-based search engines is necessary to increase its usability especially for nontechnical users. Further research with large datasets is recommended in the future to unfold the potential of Linked Data and Semantic Web for future health information systems development.
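A self-contained sketch of the representation-and-query step using rdflib is given below; the predicates and values are invented for illustration and do not correspond to the WHO observatory vocabulary, and the production system described above used a Fuseki triple store and the Silk linking framework rather than an in-memory graph.

    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/health/")
    g = Graph()

    # Represent two hypothetical HIV-prevalence observations as RDF triples
    for country, year, value in [("Kenya", 2012, 5.6), ("Uganda", 2012, 7.2)]:
        obs = EX[f"obs/{country}/{year}"]
        g.add((obs, RDF.type, EX.Observation))
        g.add((obs, EX.country, Literal(country)))
        g.add((obs, EX.year, Literal(year)))
        g.add((obs, EX.prevalencePercent, Literal(value)))

    # The same kind of SPARQL query the portal would send to its triple store
    results = g.query("""
        PREFIX ex: <http://example.org/health/>
        SELECT ?country ?value WHERE {
            ?obs a ex:Observation ;
                 ex:year 2012 ;
                 ex:country ?country ;
                 ex:prevalencePercent ?value .
        } ORDER BY DESC(?value)
    """)
    for row in results:
        print(row.country, row.value)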
CDAPubMed: a browser extension to retrieve EHR-based biomedical literature.
Perez-Rey, David; Jimenez-Castellanos, Ana; Garcia-Remesal, Miguel; Crespo, Jose; Maojo, Victor
2012-04-05
Over the last few decades, the ever-increasing output of scientific publications has led to new challenges to keep up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. We have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture Standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations since queries are focused on characteristics identified within the EHR. For instance, compared with more than 200,000 citations retrieved by breast neoplasm, fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. This is an open source tool that can be freely used for non-profit purposes and integrated with other existing systems.
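A toy sketch of the query-building idea: pull candidate terms out of a minimal, made-up CDA-like XML fragment and compose a PubMed E-utilities search URL. The real CDAPubMed extension maps document content to MeSH headings and runs inside the browser, neither of which is reproduced here.

    import xml.etree.ElementTree as ET
    from urllib.parse import urlencode

    # Minimal made-up stand-in for an HL7-CDA problem list entry
    cda_fragment = """
    <ClinicalDocument>
      <problem displayName="Breast Neoplasms"/>
      <problem displayName="Diabetes Mellitus, Type 2"/>
    </ClinicalDocument>
    """

    root = ET.fromstring(cda_fragment)
    terms = [p.get("displayName") for p in root.iter("problem")]

    # Combine the extracted terms into a single PubMed query (AND of MeSH-style terms)
    query = " AND ".join(f'"{t}"[MeSH Terms]' for t in terms)
    esearch_url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
                   + urlencode({"db": "pubmed", "term": query, "retmax": 20}))
    print(esearch_url)  # the extension would open or fetch this URL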
Exploiting visual search theory to infer social interactions
NASA Astrophysics Data System (ADS)
Rota, Paolo; Dang-Nguyen, Duc-Tien; Conci, Nicola; Sebe, Nicu
2013-03-01
In this paper we propose a new method to infer human social interactions using techniques typically adopted in the literature for visual search and information retrieval. The main piece of information we use to discriminate among different types of interactions is provided by proxemic cues acquired by a tracker, and used to distinguish between intentional and casual interactions. The proxemic information is obtained through the analysis of two different metrics: on the one hand we observe the current distance between subjects, and on the other hand we measure the O-space synergy between subjects. The obtained values are taken at every time step over a temporal sliding window and processed in the Discrete Fourier Transform (DFT) domain. The features are eventually merged into a single array and clustered using the K-means algorithm. The cluster assignments are then aggregated over a second, larger temporal window into a bag-of-words representation, so as to build the feature vector that feeds the SVM classifier.
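A condensed sketch of that pipeline follows: per-window proxemic features (only inter-subject distance here; the O-space synergy term is omitted), DFT magnitudes, K-means codewords, bag-of-words histograms, and an SVM. All traces and labels are synthetic.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)

    def window_features(distances, win=16):
        """Split a distance signal into windows and take DFT magnitudes per window."""
        n = len(distances) // win
        windows = distances[: n * win].reshape(n, win)
        return np.abs(np.fft.rfft(windows, axis=1))

    # Synthetic distance traces: "intentional" pairs stay close, "casual" pairs drift apart
    traces = [rng.normal(loc, 0.3, size=256) for loc in ([1.0] * 30 + [3.5] * 30)]
    labels = np.array([1] * 30 + [0] * 30)

    # Codebook over all window-level DFT features, then one BoW histogram per trace
    all_windows = np.vstack([window_features(t) for t in traces])
    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(all_windows)
    bows = np.array([
        np.bincount(kmeans.predict(window_features(t)), minlength=8) for t in traces
    ])

    clf = SVC(kernel="linear").fit(bows, labels)
    print(clf.score(bows, labels))   # training accuracy on the toy data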
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.
Koch, S; Bosch, H; Giereth, M; Ertl, T
2011-05-01
Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. Already the amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for interactive analysis of patent information has been developed addressing scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.
GeoCrystal: graphic-interactive access to geodata archives
NASA Astrophysics Data System (ADS)
Goebel, Stefan; Haist, Joerg; Jasnoch, Uwe
2002-03-01
Recently, a lot of effort has been spent on establishing information systems and global infrastructures that enable both data suppliers and users to describe (-> eCommerce, metadata) as well as to find appropriate data. Examples of this are metadata information systems, online shops, and portals for geodata. The main disadvantages of existing approaches are insufficient methods and mechanisms for leading users to (e.g. spatial) data archives. This affects usability and personalization in general as well as visual feedback techniques in the different steps of the information retrieval process. Several approaches aim at improving graphical user interfaces by using intuitive metaphors, but only some of them offer 3D interfaces in the form of information landscapes or geographic result scenes in the context of information systems for geodata. This paper presents GeoCrystal, whose basic idea is to adopt Venn diagrams to compose complex queries and to visualize search results in a 3D information and navigation space for geodata. These concepts are enhanced with spatial metaphors and 3D information landscapes (a library for geodata) in which users can specify searches for appropriate geodata and can interact graphically with the search results (book metaphor).
Image and information management system
NASA Technical Reports Server (NTRS)
Robertson, Tina L. (Inventor); Raney, Michael C. (Inventor); Dougherty, Dennis M. (Inventor); Kent, Peter C. (Inventor); Brucker, Russell X. (Inventor); Lampert, Daryl A. (Inventor)
2009-01-01
A system and methods through which pictorial views of an object's configuration, arranged in a hierarchical fashion, are navigated by a person to establish a visual context within the configuration. The visual context is automatically translated by the system into a set of search parameters driving retrieval of structured data and content (images, documents, multimedia, etc.) associated with the specific context. The system places ''hot spots'', or actionable regions, on various portions of the pictorials representing the object. When a user interacts with an actionable region, a more detailed pictorial from the hierarchy is presented representing that portion of the object, along with real-time feedback in the form of a popup pane containing information about that region, and counts-by-type reflecting the number of items that are available within the system associated with the specific context and search filters established at that point in time.
Image and information management system
NASA Technical Reports Server (NTRS)
Robertson, Tina L. (Inventor); Kent, Peter C. (Inventor); Raney, Michael C. (Inventor); Dougherty, Dennis M. (Inventor); Brucker, Russell X. (Inventor); Lampert, Daryl A. (Inventor)
2007-01-01
A system and methods through which pictorial views of an object's configuration, arranged in a hierarchical fashion, are navigated by a person to establish a visual context within the configuration. The visual context is automatically translated by the system into a set of search parameters driving retrieval of structured data and content (images, documents, multimedia, etc.) associated with the specific context. The system places hot spots, or actionable regions, on various portions of the pictorials representing the object. When a user interacts with an actionable region, a more detailed pictorial from the hierarchy is presented representing that portion of the object, along with real-time feedback in the form of a popup pane containing information about that region, and counts-by-type reflecting the number of items that are available within the system associated with the specific context and search filters established at that point in time.
Overcoming default categorical bias in spatial memory.
Sampaio, Cristina; Wang, Ranxiao Frances
2010-12-01
In the present study, we investigated whether a strong default categorical bias can be overcome in spatial memory by using alternative membership information. In three experiments, we tested location memory in a circular space while providing participants with an alternative categorization. We found that visual presentation of the boundaries of the alternative categories (Experiment 1) did not induce the use of the alternative categories in estimation. In contrast, visual cuing of the alternative category membership of a target (Experiment 2) and unique target feature information associated with each alternative category (Experiment 3) successfully led to the use of the alternative categories in estimation. Taken together, the results indicate that default categorical bias in spatial memory can be overcome when appropriate cues are provided. We discuss how these findings expand the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in spatial memory by proposing a retrieval-based category adjustment (RCA) model.
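A small numerical illustration of the category-adjustment idea underlying the RCA proposal: the reported location is a weighted combination of the fine-grained memory and the prototype of whichever category (default or cued alternative) is used at retrieval. The locations, prototypes, and weight below are arbitrary.

    def category_adjusted_estimate(memory_angle, category_prototype, weight=0.3):
        """Weighted combination of fine-grained memory and category prototype
        (Huttenlocher et al.-style adjustment); weight = reliance on the category."""
        return (1 - weight) * memory_angle + weight * category_prototype

    true_location = 40.0                      # degrees within a circular space
    default_prototype = 45.0                  # centre of the default quadrant
    alternative_prototype = 10.0              # centre of a cued alternative category

    print(category_adjusted_estimate(true_location, default_prototype))      # biased toward 45
    print(category_adjusted_estimate(true_location, alternative_prototype))  # biased toward 10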
Neural correlates of differential retrieval orientation: Sustained and item-related components.
Woodruff, C Chad; Uncapher, Melina R; Rugg, Michael D
2006-01-01
Retrieval orientation refers to a cognitive state that biases processing of retrieval cues in service of a specific goal. The present study used a mixed fMRI design to investigate whether adoption of different retrieval orientations - as indexed by differences in the activity elicited by retrieval cues corresponding to unstudied items - is associated with differences in the state-related activity sustained across a block of test trials sharing a common retrieval goal. Subjects studied mixed lists comprising visually presented words and pictures. They then undertook a series of short test blocks in which all test items were visually presented words. The blocks varied according to whether the test items were used to cue retrieval of studied words or studied pictures. In several regions, neural activity elicited by correctly classified new items differed according to whether words or pictures were the targeted material. The loci of these effects suggest that one factor driving differential cue processing is modulation of the degree of overlap between cue and targeted memory representations. In addition to these item-related effects, neural activity sustained throughout the test blocks also differed according to the nature of the targeted material. These findings indicate that the adoption of different retrieval orientations is associated with distinct neural states. The loci of these sustained effects were distinct from those where new item activity varied, suggesting that the effects may play a role in biasing retrieval cue processing in favor of the current retrieval goal.
Gene Expression Omnibus (GEO): Microarray data storage, submission, retrieval, and analysis
Barrett, Tanya
2006-01-01
The Gene Expression Omnibus (GEO) repository at the National Center for Biotechnology Information (NCBI) archives and freely distributes high-throughput molecular abundance data, predominantly gene expression data generated by DNA microarray technology. The database has a flexible design that can handle diverse styles of both unprocessed and processed data in a MIAME- (Minimum Information About a Microarray Experiment) supportive infrastructure that promotes fully annotated submissions. GEO currently stores about a billion individual gene expression measurements, derived from over 100 organisms, submitted by over 1,500 laboratories, addressing a wide range of biological phenomena. To maximize the utility of these data, several user-friendly Web-based interfaces and applications have been implemented that enable effective exploration, query, and visualization of these data, at the level of individual genes or entire studies. This chapter describes how the data are stored, submission procedures, and mechanisms for data retrieval and query. GEO is publicly accessible at http://www.ncbi.nlm.nih.gov/projects/geo/. PMID:16939800
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-04-01
Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for the experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single video or a collection of videos.
A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever
NASA Technical Reports Server (NTRS)
Magee, Michael
1993-01-01
The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the quality of information obtained from the sensors, and complementary usage of the sensors and redundant capabilities.
Simulating Navigation with Virtual 3d Geovisualizations - a Focus on Memory Related Factors
NASA Astrophysics Data System (ADS)
Lokka, I.; Çöltekin, A.
2016-06-01
The use of virtual environments (VE) for navigation-related studies, such as spatial cognition and path retrieval, has been widely adopted in cognitive psychology and related fields. What motivates the use of VEs for such studies is that, as opposed to the real world, we can control for confounding variables in simulated VEs. When simulating a geographic environment as a virtual world with the intention to train navigational memory in humans, an effective and efficient visual design is important to facilitate the amount of recall. However, it is not yet clear what amount of information should be included in such visual designs intended to facilitate remembering: there can be too little or too much of it. Besides the amount of information or level of detail, the types of visual features ('elements' in a visual scene) that should be included in the representations to create memorable scenes and paths must be defined. We analyzed the literature in cognitive psychology, geovisualization and information visualization, and identified the key factors for studying and evaluating geovisualization designs for their function to support and strengthen human navigational memory. The key factors we identified are: i) the individual abilities and age of the users, ii) the level of realism (LOR) included in the representations and iii) the context in which the navigation is performed, that is, the specific tasks within a case scenario. Here we present a concise literature review and our conceptual development for follow-up experiments.
Characterize Aerosols from MODIS/MISR/OMI/MERRA-2: Dynamic Image Browse Perspective
NASA Astrophysics Data System (ADS)
Wei, J. C.; Yang, W.; Shen, S.; Zhao, P.; Albayrak, A.; Johnson, J. E.; Kempler, S. J.; Pham, L.
2016-12-01
Among the known atmospheric constituents, aerosols still represent the greatest uncertainty in climate research. Understanding this uncertainty requires bringing together observational (in-situ and remote sensing) and modeling datasets and inter-comparing them synergistically for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits can best be achieved if these earth science data (satellite and modeling) are well utilized and interpreted. Unfortunately, this is not always the case, despite the abundance and relative maturity of the numerous satellite-borne sensors that routinely measure aerosols. There is often disagreement between similar aerosol parameters retrieved from different sensors, leaving users confused as to which sensors to trust for answering important science questions about the distribution, properties, and impacts of aerosols. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has developed multiple MAPSS (Multi-sensor Aerosol Products Sampling System) applications as part of the Giovanni (Geospatial Interactive Online Visualization and Analysis Interface) data visualization and analysis tool since 2007. The MAPSS database provides spatio-temporal statistics for multiple spaceborne Level 2 aerosol products (MODIS Terra, MODIS Aqua, MISR, POLDER, OMI, CALIOP, SeaWiFS Deep Blue, and VIIRS) sampled over AERONET ground stations. In this presentation, I will demonstrate a new visualization service (NASA Level 2 Data Quality Visualization, DQViz) supporting various visualization and data access capabilities for satellite Level 2 products (MODIS/MISR/OMI) and long-term assimilated aerosols from the NASA Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), displayed at their native retrieved spatial resolution. Functionality will include selecting data sources (e.g., multiple parameters under the same measurement), defining area-of-interest and temporal extents, zooming, panning, overlaying, sliding, and data subsetting and reformatting.
Dew, Ilana T. Z.; Ritchey, Maureen; LaBar, Kevin S.; Cabeza, Roberto
2014-01-01
A fundamental idea in memory research is that items are more likely to be remembered if encoded with a semantic, rather than perceptual, processing strategy. Interestingly, this effect has been shown to reverse for emotionally arousing materials, such that perceptual processing enhances memory for emotional information or events. The current fMRI study investigated the neural mechanisms of this effect by testing how neural activations during emotional memory retrieval are influenced by the prior encoding strategy. Participants incidentally encoded emotional and neutral pictures under instructions to attend to either semantic or perceptual properties of each picture. Recognition memory was tested two days later. fMRI analyses yielded three main findings. First, right amygdalar activity associated with emotional memory strength was enhanced by prior perceptual processing. Second, prior perceptual processing of emotional pictures produced a stronger effect on recollection- than familiarity-related activations in the right amygdala and left hippocampus. Finally, prior perceptual processing enhanced amygdalar connectivity with regions strongly associated with retrieval success, including hippocampal/parahippocampal regions, visual cortex, and ventral parietal cortex. Taken together, the results specify how encoding orientations yield alterations in brain systems that retrieve emotional memories. PMID:24380867
Hybrid Histogram Descriptor: A Fusion Feature Representation for Image Retrieval.
Feng, Qinghe; Hao, Qiaohong; Chen, Yuqi; Yi, Yugen; Wei, Ying; Dai, Jiangyan
2018-06-15
Currently, visual sensors are becoming increasingly affordable and fashionable, accelerating the growth of image data. Image retrieval has attracted increasing interest owing to applications in space exploration, industry, and biomedicine. Nevertheless, designing an effective feature representation is acknowledged as a hard yet fundamental issue. This paper presents a fusion feature representation called the hybrid histogram descriptor (HHD) for image retrieval. The proposed descriptor combines two histograms: a perceptually uniform histogram, extracted by exploiting the color and edge orientation information in perceptually uniform regions; and a motif co-occurrence histogram, acquired by calculating the probability of a pair of motif patterns. To evaluate the performance, we benchmarked the proposed descriptor on the RSSCN7, AID, Outex-00013, Outex-00014 and ETHZ-53 datasets. Experimental results suggest that the proposed descriptor is more effective and robust than ten recent fusion-based descriptors under the content-based image retrieval framework. The computational complexity was also analyzed to give an in-depth evaluation. Furthermore, compared with state-of-the-art convolutional neural network (CNN)-based descriptors, the proposed descriptor also achieves comparable performance, but does not require any training process.
Application of Rough Sets to Information Retrieval.
ERIC Educational Resources Information Center
Miyamoto, Sadaaki
1998-01-01
Develops a method of rough retrieval, an application of the rough set theory to information retrieval. The aim is to: (1) show that rough sets are naturally applied to information retrieval in which categorized information structure is used; and (2) show that a fuzzy retrieval scheme is induced from the rough retrieval. (AEF)
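As a hedged illustration of the rough-set idea applied to a categorized document structure (the paper's actual construction is not given in the abstract; the partition and document sets below are invented for the example), a retrieved set can be bracketed by its lower and upper approximations:

# Toy equivalence classes induced by the categorized information structure.
categories = {
    "cardiology": {"d1", "d2"},
    "neurology":  {"d3", "d4", "d5"},
    "oncology":   {"d6"},
}
retrieved = {"d1", "d2", "d4"}      # documents directly matching a query

# Lower approximation: categories wholly contained in the retrieved set (certainly relevant).
lower = set().union(*(docs for docs in categories.values() if docs <= retrieved))
# Upper approximation: categories overlapping the retrieved set (possibly relevant).
upper = set().union(*(docs for docs in categories.values() if docs & retrieved))

print("lower:", lower)   # {'d1', 'd2'}
print("upper:", upper)   # {'d1', 'd2', 'd3', 'd4', 'd5'}

Grading documents between the two approximations is one natural way a fuzzy retrieval scheme could be induced from the rough one.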
A scheme for racquet sports video analysis with the combination of audio-visual information
NASA Astrophysics Data System (ADS)
Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua
2005-07-01
As a very important category of sports video, racquet sports video, e.g. table tennis, tennis and badminton, has received little attention in past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. First, a supervised classification method is employed to detect important audio symbols including impact (ball hit), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Then, by taking advantage of the temporal relationship between audio and visual signals, we can label the scene clusters with semantic labels including rally scenes and break scenes. Next, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.
Characterize Aerosols from MODIS MISR OMI MERRA-2: Dynamic Image Browse Perspective
NASA Technical Reports Server (NTRS)
Wei, Jennifer; Yang, Wenli; Albayrak, Arif; Zhao, Peisheng; Zeng, Jian; Shen, Suhung; Johnson, James; Kempler, Steve
2016-01-01
Among the known atmospheric constituents, aerosols still represent the greatest uncertainty in climate research. Understanding this uncertainty requires bringing together observational (in-situ and remote sensing) and modeling datasets and inter-comparing them synergistically for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits can best be achieved if these earth science data (satellite and modeling) are well utilized and interpreted. Unfortunately, this is not always the case, despite the abundance and relative maturity of the numerous satellite-borne sensors that routinely measure aerosols. There is often disagreement between similar aerosol parameters retrieved from different sensors, leaving users confused as to which sensors to trust for answering important science questions about the distribution, properties, and impacts of aerosols. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has developed a new visualization service (NASA Level 2 Data Quality Visualization, DQViz) supporting various visualization and data access capabilities for satellite Level 2 products (MODIS/MISR/OMI) and long-term assimilated aerosols from the NASA Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), displayed at their native retrieved spatial resolution. Functionality will include selecting data sources (e.g., multiple parameters under the same measurement), defining area-of-interest and temporal extents, zooming, panning, overlaying, sliding, and data subsetting and reformatting.
TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury
NASA Astrophysics Data System (ADS)
Li, Shimiao; Gong, Tianxia; Wang, Jie; Liu, Ruizhe; Tan, Chew Lim; Leong, Tze Yun; Pang, Boon Chuan; Lim, C. C. Tchoyoson; Lee, Cheng Kiang; Tian, Qi; Zhang, Zhuo
2010-03-01
Traumatic brain injury (TBI) is a major cause of death and disability. Computed tomography (CT) scans are widely used in the diagnosis of TBI. Nowadays, large amounts of TBI CT data are stored in hospital radiology departments. Such data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to search for cases relevant to the case under study. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system that works on TBI CT images. In this web-based system, users can query by uploading CT image slices from one study; the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. Specifically, TBI CT images often present diffuse or focal lesions. In the TBIdoc system, these pathological image features are represented as bin-based binary feature vectors. We use the Jaccard-Needham measure as the similarity measurement. Based on these, we propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is used to evaluate the system performance, which shows that the system produces satisfactory retrieval results. The system is expected to improve current hospital data management in TBI and to give better support to the clinical decision-making process. It may also contribute to computer-aided education in TBI.
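A minimal sketch of the slice-level similarity described above, using the Jaccard-Needham measure on bin-based binary feature vectors. The abstract does not spell out how slice scores are aggregated into a 3D score, so the best-match averaging used here is an assumption:

import numpy as np

def jaccard_needham(a, b):
    """Jaccard similarity between two bin-based binary feature vectors."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return 1.0 if union == 0 else np.logical_and(a, b).sum() / union

def series_similarity(query_slices, case_slices):
    """Assumed 3D aggregation: average, over the query slices, of the best
    Jaccard match found among the candidate case's slices."""
    best = [max(jaccard_needham(q, c) for c in case_slices) for q in query_slices]
    return float(np.mean(best))

# Toy vectors: 1 = lesion evidence present in the corresponding spatial bin.
query = [np.array([1, 0, 1, 1, 0]), np.array([0, 1, 1, 0, 0])]
case  = [np.array([1, 0, 1, 0, 0]), np.array([0, 1, 0, 0, 1])]
print(series_similarity(query, case))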
NLM microcomputer-based tutorials (for microcomputers). Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkins, M.
1990-04-01
The package consists of TOXLEARN--a microcomputer-based training package for TOXLINE (Toxicology Information Online), CHEMLEARN--a microcomputer-based training package for CHEMLINE (Chemical Information Online), MEDTUTOR--a microcomputer-based training package for MEDLINE (Medical Information Online), and ELHILL LEARN--a microcomputer-based training package for the ELHILL search and retrieval software that supports the above-mentioned databases... Software Description: The programs were developed under PILOTplus using the NLM LEARN Programmer. They run on IBM-PC, XT, AT, PS/2, and fully compatible computers. The programs require 512K of RAM, one disk drive, and DOS 2.0 or higher. The software supports most monochrome, color graphics, enhanced color graphics, or visual graphics displays.
Orme, Elizabeth; Brown, Louise A.; Riby, Leigh M.
2017-01-01
In this study, we examined electrophysiological indices of episodic remembering whilst participants recalled novel shapes, with and without semantic content, within a visual working memory paradigm. The components of interest were the parietal episodic (PE; 400–800 ms) and late posterior negativity (LPN; 500–900 ms), as these have previously been identified as reliable markers of recollection and post-retrieval monitoring, respectively. Fifteen young adults completed a visual matrix patterns task, assessing memory for low and high semantic visual representations. Matrices with either low semantic or high semantic content (containing familiar visual forms) were briefly presented to participants for study (1500 ms), followed by a retention interval (6000 ms) and finally a same/different recognition phase. The event-related potentials of interest were tracked from the onset of the recognition test stimuli. Analyses revealed equivalent amplitude for the earlier PE effect for the processing of both low and high semantic stimulus types. However, the LPN was more negative-going for the processing of the low semantic stimuli. These data are discussed in terms of relatively ‘pure’ and complete retrieval of high semantic items, where support can readily be recruited from semantic memory. However, for the low semantic items additional executive resources, as indexed by the LPN, are recruited when memory monitoring and uncertainty exist in order to recall previously studied items more effectively. PMID:28725203
MetaSEEk: a content-based metasearch engine for images
NASA Astrophysics Data System (ADS)
Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu
1997-12-01
Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time consuming. The integration of such search tools enables users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine used for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated into the ranking refinement. We compare MetaSEEk with a baseline version of the meta-search engine, which does not use the past performance of the different search engines when recommending target search engines for future queries.
Manfredi, Mirella; Cohn, Neil; Kutas, Marta
2017-06-01
Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoetic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms.
Refractive Errors Affect the Vividness of Visual Mental Images
Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia
2013-01-01
The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects not preventing the visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition where the refracting system of the eye fails to focus objects sharply on the retina. As a consequence, refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes, this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. Present data are consistent with the hypothesis of equivalence between imagery and perception. PMID:23755186
Indoor Navigation by People with Visual Impairment Using a Digital Sign System
Legge, Gordon E.; Beckmann, Paul J.; Tjan, Bosco S.; Havey, Gary; Kramer, Kevin; Rolkosky, David; Gage, Rachel; Chen, Muzi; Puchakayala, Sravan; Rangarajan, Aravindhan
2013-01-01
There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects—blind, low vision, blindfolded sighted, and normally sighted controls—were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment. PMID:24116156
Shah, Mehul A; Agrawal, Rupesh; Teoh, Ryan; Shah, Shreya M; Patel, Kashyap; Gupta, Satyam; Gosai, Siddharth
2017-05-01
To introduce and validate the pediatric ocular trauma score (POTS), a mathematical model to predict visual outcome in children with traumatic cataract. In this retrospective cohort study, medical records of consecutive children with traumatic cataracts aged 18 and below were retrieved and analysed. Data collected included age, gender, visual acuity, anterior segment and posterior segment findings, nature of surgery, treatment for amblyopia, follow-up, and final outcome, recorded on a precoded data information sheet. POTS was derived from the ocular trauma score (OTS), adjusting for the age of the patient and the location of the injury. Visual outcome was predicted using the OTS and the POTS and analysed using receiver operating characteristic (ROC) curves. POTS-predicted outcomes were more accurate than those of the OTS (p = 0.014). POTS is a more sensitive and specific score with more accurate predicted outcomes compared to the OTS, and is a viable tool to predict visual outcomes of pediatric ocular trauma with traumatic cataract.
Practical life log video indexing based on content and context
NASA Astrophysics Data System (ADS)
Tancharoen, Datchakorn; Yamasaki, Toshihiko; Aizawa, Kiyoharu
2006-01-01
Today, multimedia information has gained an important role in daily life, and people can use imaging devices to capture their visual experiences. In this paper, we present our personal Life Log system to record personal experiences in the form of wearable video and environmental data; in addition, an efficient retrieval system is demonstrated to recall the desired media. We summarize practical video indexing techniques based on Life Log content and context to detect talking scenes using audio/visual cues and to extract semantic key frames from GPS data. Voice annotation is also demonstrated as a practical indexing method. Moreover, we apply body media sensors to record continuous lifestyle data and use the body media data to index the semantic key frames. In the experiments, we demonstrate various video indexing results with their semantic content and show Life Log visualizations for examining personal life effectively.
Enhanced Information Retrieval Using AJAX
NASA Astrophysics Data System (ADS)
Kachhwaha, Rajendra; Rajvanshi, Nitin
2010-11-01
Information Retrieval deals with the representation, storage, organization of, and access to information items. The representation and organization of information items should provide the user with easy access to the information. With the rapid development of the Internet, large amounts of digitally stored information are readily available on the World Wide Web. This information is so vast that it becomes increasingly difficult and time consuming for users to find the information relevant to their needs. The explosive growth of information on the Internet has greatly increased the need for information retrieval systems. However, most search engines still use conventional information retrieval systems. An information system needs to implement sophisticated pattern matching tools to determine contents at a faster rate. AJAX has recently emerged as a tool with which the information retrieval process can be made faster, so that information reaches the user at a faster pace compared to conventional retrieval systems.
Table Extraction from Web Pages Using Conditional Random Fields to Extract Toponym Related Data
NASA Astrophysics Data System (ADS)
Luthfi Hanifah, Hayyu'; Akbar, Saiful
2017-01-01
Tables are one of the ways to visualize information on web pages. The abundance of web pages that compose the World Wide Web has motivated information extraction and information retrieval research, including research on table extraction. Besides, there is a need for a system designed specifically to handle location-related information. Based on this background, this research is conducted to provide a way to extract location-related data from web tables so that it can be used in the development of a Geographic Information Retrieval (GIR) system. The location-related data will be identified by the toponym (location name). In this research, a rule-based approach with a gazetteer is used to recognize toponyms in web tables. Meanwhile, to extract data from a table, a combination of a rule-based approach and a statistical approach is used. In the statistical approach, a Conditional Random Fields (CRF) model is used to understand the schema of the table. The result of table extraction is presented in JSON format. If a web table contains toponyms, a field is added to the JSON document to store the toponym values. This field can be used to index the table data according to the toponym, which can then be used in the development of the GIR system.
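The abstract does not give the exact JSON schema, so the field names below are hypothetical; a sketch of what one extracted, toponym-bearing table might look like:

import json

# Hypothetical extraction result for one web table containing toponyms.
extracted_table = {
    "url": "http://example.org/population-table",       # source page (assumed field)
    "columns": ["City", "Province", "Population"],       # schema inferred by the CRF model
    "rows": [
        {"City": "Bandung", "Province": "West Java", "Population": 2452000},
        {"City": "Surabaya", "Province": "East Java", "Population": 2874000},
    ],
    # Extra field added only when toponyms are recognized, used for GIR indexing.
    "toponyms": ["Bandung", "West Java", "Surabaya", "East Java"],
}
print(json.dumps(extracted_table, indent=2))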
3D visualization of molecular structures in the MOGADOC database
NASA Astrophysics Data System (ADS)
Vogt, Natalja; Popov, Evgeny; Rudert, Rainer; Kramer, Rüdiger; Vogt, Jürgen
2010-08-01
The MOGADOC database (Molecular Gas-Phase Documentation) is a powerful tool to retrieve information about compounds which have been studied in the gas-phase by electron diffraction, microwave spectroscopy and molecular radio astronomy. Presently the database contains over 34,500 bibliographic references (from the beginning of each method) for about 10,000 inorganic, organic and organometallic compounds and structural data (bond lengths, bond angles, dihedral angles, etc.) for about 7800 compounds. Most of the implemented molecular structures are given in a three-dimensional (3D) presentation. To create or edit and visualize the 3D images of molecules, new tools (special editor and Java-based 3D applet) were developed. Molecular structures in internal coordinates were converted to those in Cartesian coordinates.
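The conversion from internal to Cartesian coordinates mentioned above follows standard molecular geometry; a minimal sketch (not the MOGADOC code, and the water-like example values are assumptions) for placing one atom from a bond length, bond angle and dihedral angle:

import numpy as np

def place_atom(a, b, c, r, theta, phi):
    """Place a new atom from internal coordinates: distance r to atom c,
    bond angle theta (new-c-b) and dihedral phi (new-c-b-a), in degrees.
    The reference atoms a, b, c must not be collinear."""
    theta, phi = np.radians(theta), np.radians(phi)
    bc = c - b
    bc /= np.linalg.norm(bc)
    n = np.cross(b - a, bc)           # normal of the a-b-c plane
    n /= np.linalg.norm(n)
    m = np.cross(n, bc)
    # displacement expressed in the local frame (bc, m, n)
    d = np.array([-r * np.cos(theta),
                   r * np.sin(theta) * np.cos(phi),
                   r * np.sin(theta) * np.sin(phi)])
    return c + d[0] * bc + d[1] * m + d[2] * n

# Water-like toy example: O at the origin, first H along x, second H from internal coordinates.
o   = np.array([0.0, 0.0, 0.0])
h1  = np.array([0.96, 0.0, 0.0])
aux = np.array([0.0, 1.0, 0.0])       # dummy reference atom for the dihedral
h2  = place_atom(aux, h1, o, r=0.96, theta=104.5, phi=0.0)
print(h2)                             # ~[-0.24, 0.93, 0.0], 0.96 A from O at 104.5 degrees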
Global-Context Based Salient Region Detection in Nature Images
NASA Astrophysics Data System (ADS)
Bao, Hong; Xu, De; Tang, Yingjun
Visual saliency detection provides an alternative methodology for image description in many applications such as adaptive content delivery and image retrieval. One of the main aims of visual attention in computer vision is to detect and segment the salient regions in an image. In this paper, we employ matrix decomposition to detect salient objects in nature images. To efficiently eliminate high-contrast noise regions in the background, we integrate global context information into saliency detection. Therefore, the most salient region can easily be selected as the one that is globally most isolated. The proposed approach intrinsically provides an alternative methodology for modeling attention with low implementation complexity. Experiments show that our approach achieves much better performance than existing state-of-the-art methods.
Automatic medical image annotation and keyword-based image retrieval using relevance feedback.
Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal
2012-08-01
This paper presents a novel multiple-keyword annotation method for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center symmetric-local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses the confidence score that is assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body relation graph. To overcome the limitations of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.
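The abstract does not specify how the random-forest probabilities and the body relation graph are combined, so the weighted-neighbour smoothing below is purely illustrative (all values, names and the combination rule are assumptions):

# Toy random-forest probabilities for a few anatomical keywords.
rf_prob = {"skull": 0.55, "brain": 0.30, "chest": 0.10, "lung": 0.05}

# Assumed body-relation graph: strength of anatomical relatedness between keywords.
relation = {
    ("skull", "brain"): 0.9, ("brain", "skull"): 0.9,
    ("chest", "lung"): 0.9,  ("lung", "chest"): 0.9,
}

def confidence(keyword, alpha=0.7):
    """Illustrative confidence: RF probability smoothed by the strongest
    related keyword's probability, weighted by the relation graph."""
    neighbours = [relation[(keyword, k)] * p for k, p in rf_prob.items()
                  if (keyword, k) in relation]
    support = max(neighbours) if neighbours else 0.0
    return alpha * rf_prob[keyword] + (1 - alpha) * support

for kw in rf_prob:
    print(kw, round(confidence(kw), 3))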
Altered brain response for semantic knowledge in Alzheimer's disease.
Wierenga, Christina E; Stricker, Nikki H; McCauley, Ashley; Simmons, Alan; Jak, Amy J; Chang, Yu-Ling; Nation, Daniel A; Bangen, Katherine J; Salmon, David P; Bondi, Mark W
2011-02-01
Word retrieval deficits are common in Alzheimer's disease (AD) and are thought to reflect a degradation of semantic memory. Yet, the nature of semantic deterioration in AD and the underlying neural correlates of these semantic memory changes remain largely unknown. We examined the semantic memory impairment in AD by investigating the neural correlates of category knowledge (e.g., living vs. nonliving) and featural processing (global vs. local visual information). During event-related fMRI, 10 adults diagnosed with mild AD and 22 cognitively normal (CN) older adults named aloud items from three categories for which processing of specific visual features has previously been dissociated from categorical features. Results showed widespread group differences in the categorical representation of semantic knowledge in several language-related brain areas. For example, the right inferior frontal gyrus showed selective brain response for nonliving items in the CN group but living items in the AD group. Additionally, the AD group showed increased brain response for word retrieval irrespective of category in Broca's homologue in the right hemisphere and rostral cingulate cortex bilaterally, which suggests greater recruitment of frontally mediated neural compensatory mechanisms in the face of semantic alteration.
Exploring the influence of encoding format on subsequent memory.
Turney, Indira C; Dennis, Nancy A; Maillet, David; Rajah, M Natasha
2017-05-01
Distinctive encoding is greatly influenced by gist-based processes and has been shown to suffer when highly similar items are presented in close succession. Thus, elucidating the mechanisms underlying how presentation format affects gist processing is essential for determining the factors that influence these encoding processes. The current study utilised multivariate partial least squares (PLS) analysis to identify encoding networks directly associated with retrieval performance in blocked and intermixed presentation conditions. Subsequent memory analysis for successfully encoded items indicated no significant differences in reaction time or retrieval performance across presentation formats. Despite the absence of significant behavioural differences, behaviour PLS revealed differences in brain-behaviour correlations and mean condition activity in brain regions associated with gist-based vs. distinctive encoding. Specifically, the intermixed format encouraged more distinctive encoding, showing increased activation of regions associated with strategy use and visual processing (e.g., frontal and visual cortices, respectively). Alternatively, the blocked format exhibited increased gist-based processing, accompanied by increased activity in the right inferior frontal gyrus. Together, the results suggest that the sequence in which information is presented during encoding affects the degree to which distinctive encoding is engaged. These findings extend our understanding of Fuzzy Trace Theory and the role of presentation format in encoding processes.
Vannucci, Manila; Pelagatti, Claudia; Chiorri, Carlo; Mazzoni, Giuliana
2016-01-01
In the present study we examined whether higher levels of object imagery, a stable characteristic that reflects the ability and preference in generating pictorial mental images of objects, facilitate involuntary and voluntary retrieval of autobiographical memories (ABMs). Individuals with high (High-OI) and low (Low-OI) levels of object imagery were asked to perform an involuntary and a voluntary ABM task in the laboratory. Results showed that High-OI participants generated more involuntary and voluntary ABMs than Low-OI, with faster retrieval times. High-OI also reported more detailed memories compared to Low-OI and retrieved memories as visual images. Theoretical implications of these findings for research on voluntary and involuntary ABMs are discussed.
ERP evidence for hemispheric asymmetries in abstract but not exemplar-specific repetition priming.
Küper, Kristina; Liesefeld, Anna M; Zimmer, Hubert D
2015-12-01
Implicit memory retrieval is thought to be exemplar-specific in the right hemisphere (RH) but abstract in the left hemisphere (LH). Yet, conflicting behavioral priming results illustrate that the level at which asymmetries take effect is difficult to pinpoint. In the present divided visual field experiment, we tried to address this issue by analyzing ERPs in addition to behavioral measures. Participants made a natural/artificial decision on lateralized visual objects that were either new, identical repetitions, or different exemplars of studied items. Hemispheric asymmetries did not emerge in either behavioral or late positive complex (LPC) priming effects, but did affect the process of implicit memory retrieval proper as indexed by an early frontal negativity (N350/(F)N400). Whereas exemplar-specific N350/(F)N400 priming effects emerged irrespective of presentation side, abstract implicit memory retrieval of different exemplars was contingent on right visual field presentation and the ensuing initial stimulus processing by the LH.
Kalisman, M; Kalisman, A
1986-07-01
The entire face of modern medical and surgical practice is being significantly affected by the application of technologic developments to the practice of surgery--developments that will tie together such areas as information management and processing, robotics, communication networks, and computerized surgical equipment. The achievements in these areas will create a sophisticated, fully automatic system that will assist the plastic surgeon in many aspects of work, such as regular office activities, doctor-patient interaction, professional updating, communication, and even assistance during the operational process itself. It will be as simple as dialing a telephone today. When it is necessary to consult with other colleagues, a combined vocal and visual consulting network in other medical centers as well as consulting computerized expert systems will be available all day and night as part of the communication services. The plastic surgical expert systems will store valuable information, based on the knowledge of the best human experts, on any important subtopics and will be accessed in a very friendly way. This will be an invaluable tool for the residents in training, for emergency room work, and for just getting a second opinion, even for the more experienced practitioner. All the electronic mail, professional magazines, and any other required professional information will flow between central and personal retrieval systems. The doctor, at a desired time in the privacy and comfort of his or her own home or office, can read the mail, make required changes to suit his or her needs, and store, send back, or distribute information, all in a speedy and efficient manner. The simulation of a planned surgery will give the surgeon the ability to prepare and will prevent difficulties during complicated procedures through the luxury of a dry run, without any sequelae if certain expected outcomes fail to materialize. The preprogrammed control of sophisticated surgical equipment and the use of robotics would generate new operational possibilities for more complicated surgeries, which are now prevented owing to the surgeon's physical limitations. Information urgently required during the operation as a result of an unexpected situation will be available immediately from storage and retrieval systems, and real-time vocal and visual consulting with expert colleagues, often in remote locations, will bring the operations process itself to a new era.(ABSTRACT TRUNCATED AT 400 WORDS)
Douyère, Magaly; Soualmia, Lina F; Névéol, Aurélie; Rogozan, Alexandrina; Dahamna, Badisse; Leroy, Jean-Philippe; Thirion, Benoît; Darmoni, Stefan J
2004-12-01
The amount of health information available on the Internet is considerable. In this context, several health gateways have been developed. Among them, CISMeF (Catalogue and Index of Health Resources in French) was designed to catalogue and index health resources in French. The goal of this article is to describe the various enhancements to the MeSH thesaurus developed by the CISMeF team to adapt this terminology to the broader field of health Internet resources instead of scientific articles for the medline bibliographic database. CISMeF uses two standard tools for organizing information: the MeSH thesaurus and several metadata element sets, in particular the Dublin Core metadata format. The heterogeneity of Internet health resources led the CISMeF team to enhance the MeSH thesaurus with the introduction of two new concepts, respectively, resource types and metaterms. CISMeF resource types are a generalization of the publication types of medline. A resource type describes the nature of the resource and MeSH keyword/qualifier pairs describe the subject of the resource. A metaterm is generally a medical specialty or a biological science, which has semantic links with one or more MeSH keywords, qualifiers and resource types. The CISMeF terminology is exploited for several tasks: resource indexing performed manually, resource categorization performed automatically, visualization and navigation through the concept hierarchies and information retrieval using the Doc'CISMeF search engine. The CISMeF health gateway uses several MeSH thesaurus enhancements to optimize information retrieval, hierarchy navigation and automatic indexing.
NASA Astrophysics Data System (ADS)
Yamazaki, Towako
In order to stabilize and improve the quality of its information retrieval service, the information retrieval team of Daicel Corporation has devoted effort to standard operating procedures, an interview sheet for information retrieval requests, a structured format for search reports, and search expressions for some of Daicel's technological fields. These activities and efforts will also lead to skill sharing and skill transfer between searchers. In addition, skill improvement is needed not only for each searcher individually but also for the information retrieval team as a whole when taking on searchers' new roles.
Image Retrieval by Color Semantics with Incomplete Knowledge.
ERIC Educational Resources Information Center
Corridoni, Jacopo M.; Del Bimbo, Alberto; Vicario, Enrico
1998-01-01
Presents a system which supports image retrieval by high-level chromatic contents, the sensations that color accordances generate on the observer. Surveys Itten's theory of color semantics and discusses image description and query specification. Presents examples of visual querying. (AEF)
Shahzad, Aamir; Landry, René; Lee, Malrey; Xiong, Naixue; Lee, Jongho; Lee, Changhoon
2016-01-01
Substantial changes have occurred in the Information Technology (IT) sectors and with these changes, the demand for remote access to field sensor information has increased. This allows visualization, monitoring, and control through various electronic devices, such as laptops, tablets, i-Pads, PCs, and cellular phones. The smart phone is considered as a more reliable, faster and efficient device to access and monitor industrial systems and their corresponding information interfaces anywhere and anytime. This study describes the deployment of a protocol whereby industrial system information can be securely accessed by cellular phones via a Supervisory Control And Data Acquisition (SCADA) server. To achieve the study goals, proprietary protocol interconnectivity with non-proprietary protocols and the usage of interconnectivity services are considered in detail. They support the visualization of the SCADA system information, and the related operations through smart phones. The intelligent sensors are configured and designated to process real information via cellular phones by employing information exchange services between the proprietary protocol and non-proprietary protocols. SCADA cellular access raises the issue of security flaws. For these challenges, a cryptography-based security method is considered and deployed, and it could be considered as a part of a proprietary protocol. Subsequently, transmission flows from the smart phones through a cellular network. PMID:27314351
Most people do not ignore salient invalid cues in memory-based decisions.
Platzer, Christine; Bröder, Arndt
2012-08-01
Former experimental studies have shown that decisions from memory tend to rely only on a few cues, following simple noncompensatory heuristics like "take the best." However, it has also repeatedly been demonstrated that a pictorial, as opposed to a verbal, representation of cue information fosters the inclusion of more cues in compensatory strategies, suggesting a facilitated retrieval of cue patterns. These studies did not properly control for visual salience of cues, however. In the experiment reported here, the cue salience hierarchy established in a pilot study was either congruent or incongruent with the validity order of the cues. Only the latter condition increased compensatory decision making, suggesting that the apparent representational format effect is, rather, a salience effect: Participants automatically retrieve and incorporate salient cues irrespective of their validity. Results are discussed with respect to reaction time data.
Kamel Boulos, Maged N; Roudsari, Abdul V; Carson, Ewart R
2002-12-01
HealthCyberMap (HCM-http://healthcybermap.semanticweb.org) is a web-based service for healthcare professionals and librarians, patients and the public in general that aims at mapping parts of the health information resources in cyberspace in novel ways to improve their retrieval and navigation. HCM adopts a clinical metadata framework built upon a clinical coding ontology for the semantic indexing, classification and browsing of Internet health information resources. A resource metadata base holds information about selected resources. HCM then uses GIS (Geographic Information Systems) spatialization methods to generate interactive navigational cybermaps from the metadata base. These visual cybermaps are based on familiar medical metaphors. HCM cybermaps can be considered as semantically spatialized, ontology-based browsing views of the underlying resource metadata base. Using a clinical coding scheme as a metric for spatialization ('semantic distance') is unique to HCM and is very much suited for the semantic categorization and navigation of Internet health information resources. Clinical codes ensure reliable and unambiguous topical indexing of these resources. HCM also introduces a useful form of cyberspatial analysis for the detection of topical coverage gaps in the resource metadata base using choropleth (shaded) maps of human body systems.
Roldan, Stephanie M
2017-01-01
One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation.
Learning to rank using user clicks and visual features for image retrieval.
Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong
2015-04-01
The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in justifying the relevance between a query and clicked images, are adopted in the image ranking model. However, existing ranking models cannot integrate visual features, which are effective in refining click-based search results. In this paper, we propose a novel ranking model based on the learning-to-rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large-margin structured output learning, and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning-to-rank model based on visual features and user clicks outperforms state-of-the-art algorithms.
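As a loose illustration of pairwise learning to rank over combined click and visual features (this is not the paper's large-margin structured-output model with a hypergraph regularizer and alternating linearization), consider the following sketch with simulated data.

```python
# Much-simplified pairwise ranking sketch on hypothetical data: preference
# pairs come from click counts, features concatenate visual and click-based
# descriptors, and a linear scoring function is trained with a hinge loss
# plus L2 regularization.
import numpy as np

rng = np.random.default_rng(0)
n_images, d_visual, d_click = 200, 32, 8
X = rng.normal(size=(n_images, d_visual + d_click))   # assumed feature matrix
clicks = rng.poisson(3, size=n_images)                # assumed click counts

# Build preference pairs (i preferred over j) from click counts.
pairs = [(i, j) for i in range(n_images) for j in range(n_images)
         if clicks[i] > clicks[j] + 2][:2000]

w = np.zeros(X.shape[1])
lr, lam = 0.01, 1e-3
for epoch in range(20):
    for i, j in pairs:
        margin = X[i] @ w - X[j] @ w
        grad = lam * w - (X[i] - X[j]) * (margin < 1.0)  # hinge subgradient
        w -= lr * grad

scores = X @ w
ranking = np.argsort(-scores)   # images ordered by learned relevance score
print(ranking[:10])
```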
Connectionist Interaction Information Retrieval.
ERIC Educational Resources Information Center
Dominich, Sandor
2003-01-01
Discussion of connectionist views for adaptive clustering in information retrieval focuses on a connectionist clustering technique and activation spreading-based information retrieval model using the interaction information retrieval method. Presents theoretical as well as simulation results as regards computational complexity and includes…
Task modulates functional connectivity networks in free viewing behavior.
Seidkhani, Hossein; Nikolaev, Andrey R; Meghanathan, Radha Nila; Pezeshk, Hamid; Masoudi-Nejad, Ali; van Leeuwen, Cees
2017-10-01
In free visual exploration, eye-movement is immediately followed by dynamic reconfiguration of brain functional connectivity. We studied the task-dependency of this process in a combined visual search-change detection experiment. Participants viewed two nearly identical displays in succession. The first time, they had to find and remember multiple targets among distractors, so the ongoing task involved memory encoding. The second time, they had to determine whether a target had changed in orientation, so the ongoing task involved memory retrieval. From multichannel EEG recorded during 200 ms intervals time-locked to fixation onsets, we estimated the functional connectivity using a weighted phase lag index at the frequencies of theta, alpha, and beta bands, and derived global and local measures of the functional connectivity graphs. We found differences between both memory task conditions for several network measures, such as mean path length, radius, diameter, closeness and eccentricity, mainly in the alpha band. Both the local and the global measures indicated that encoding involved a more segregated mode of operation than retrieval. These differences arose immediately after fixation onset and persisted for the entire duration of the lambda complex, an evoked potential commonly associated with early visual perception. We concluded that encoding and retrieval differentially shape network configurations involved in early visual perception, affecting the way the visual input is processed at each fixation. These findings demonstrate that task requirements dynamically control the functional connectivity networks involved in early visual perception. Copyright © 2017 Elsevier Inc. All rights reserved.
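The sketch below illustrates the general flavor of such an analysis: estimating alpha-band weighted phase lag index (wPLI) connectivity between channels and computing the graph measures named above with networkx. The signals, band limits, and threshold are placeholder assumptions, not the study's pipeline.

```python
# Hedged sketch: alpha-band wPLI connectivity on a placeholder EEG segment,
# summarized with basic graph measures (not the authors' preprocessing).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
import networkx as nx

rng = np.random.default_rng(1)
fs, n_chan, n_samp = 250, 16, 250 * 4
eeg = rng.normal(size=(n_chan, n_samp))        # placeholder EEG segment

# Band-pass to the alpha band (8-12 Hz) and take the analytic signal.
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
analytic = hilbert(filtfilt(b, a, eeg, axis=1), axis=1)

def wpli(zx, zy):
    imag = np.imag(zx * np.conj(zy))           # imaginary part of the cross-spectrum
    denom = np.mean(np.abs(imag))
    return 0.0 if denom == 0 else np.abs(np.mean(imag)) / denom

conn = np.zeros((n_chan, n_chan))
for i in range(n_chan):
    for j in range(i + 1, n_chan):
        conn[i, j] = conn[j, i] = wpli(analytic[i], analytic[j])

# Threshold into a binary graph and compute the measures named in the abstract.
adj = (conn > np.median(conn[conn > 0])).astype(int)
G = nx.from_numpy_array(adj)
if nx.is_connected(G):
    print("mean path length:", nx.average_shortest_path_length(G))
    print("radius:", nx.radius(G), "diameter:", nx.diameter(G))
    print("eccentricity:", nx.eccentricity(G))
print("closeness:", nx.closeness_centrality(G))
```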
Overview of EVE - the event visualization environment of ROOT
NASA Astrophysics Data System (ADS)
Tadel, Matevž
2010-04-01
EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit satisfying most HEP requirements: visualization of geometry, simulated and reconstructed data such as hits, clusters, tracks and calorimeter information. Special classes are available for visualization of raw data. The object-interaction layer allows for easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a database of template objects that can be applied to specific instances of visualization objects to ensure consistent object presentation. The database can be retrieved from a file, edited during framework operation and stored to file. The EVE prototype was developed within the ALICE collaboration and was included in ROOT in December 2007. Since then all EVE components have reached maturity. EVE is used as the base of the AliEve visualization framework in ALICE, the Firework physics-oriented event display in CMS, and as the visualization engine of FairRoot in FAIR.
Drijvers, Linda; Özyürek, Asli; Jensen, Ole
2018-06-19
Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech-gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + "mixing") or mismatching (drinking gesture + "walking") gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated to disambiguate the degraded signal. Our results thus provide novel insights into how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
Results from Evaluations of Gridded CrIS/ATMS Visualization for Operational Forecasting
NASA Astrophysics Data System (ADS)
Stevens, E.; Zavodsky, B.; Dostalek, J.; Berndt, E.; Hoese, D.; White, K.; Bowlan, M.; Gambacorta, A.; Wheeler, A.; Haisley, C.; Smith, N.
2017-12-01
For forecast challenges which require diagnosis of the three-dimensional atmosphere, current observations, such as radiosondes, may not offer enough information. Satellite data can help fill the spatial and temporal gaps between soundings. In particular, temperature and moisture retrievals from the NOAA-Unique Combined Atmospheric Processing System (NUCAPS), which combines infrared soundings from the Cross-track Infrared Sounder (CrIS) with the Advanced Technology Microwave Sounder (ATMS), can provide profiles of temperature and moisture that help fill these gaps. NUCAPS retrievals are available in a wide swath with approximately 45-km spatial resolution at nadir and a local Equator crossing time of 1:30 A.M./P.M., enabling three-dimensional observations at asynoptic times. This abstract focuses on evaluation of a new visualization for NUCAPS within the operational National Weather Service Advanced Weather Interactive Processing System (AWIPS) decision support system that allows these data to be viewed in gridded horizontal maps or vertical cross sections. Two testbed evaluations have occurred in 2017: a Cold Air Aloft (CAA) evaluation at the Alaska Center Weather Service Unit and a Convective Potential evaluation at the NOAA Hazardous Weather Testbed. For CAA, at high latitudes during the winter months, the air at altitudes used by passenger and cargo aircraft can reach temperatures cold enough (-65°C) to begin to freeze jet fuel, and the Gridded NUCAPS visualization was shown to help fill in the spatial and temporal gaps in data-sparse areas across the Alaskan airspace by identifying the 3D spatial extent of cold air features. For convective potential, understanding the vertical distribution of temperature and moisture is also very important for forecasting the potential for convection related to severe weather such as lightning, large hail, and tornadoes. The Gridded NUCAPS visualization was shown to aid forecasters in understanding temperature and moisture characteristics at critical levels for determining cap strength and instability. In both cases, the products were most valuable when used in conjunction with numerical model output, either to reinforce confidence in model products or to provide an alternative observation when forecasters are not sure the model is properly representing the atmosphere.
Multiple-object tracking as a tool for parametrically modulating memory reactivation
Poppenk, J.; Norman, K.A.
2017-01-01
Converging evidence supports the “non-monotonic plasticity” hypothesis that although complete retrieval may strengthen memories, partial retrieval weakens them. Yet, the classic experimental paradigms used to study effects of partial retrieval are not ideally suited to doing so, because they lack the parametric control needed to ensure that the memory is activated to the appropriate degree (i.e., that there is some retrieval, but not enough to cause memory strengthening). Here we present a novel procedure designed to accommodate this need. After participants learned a list of word-scene associates, they completed a cued mental visualization task that was combined with a multiple-object tracking (MOT) procedure, which we selected for its ability to interfere with mental visualization in a parametrically adjustable way (by varying the number of MOT targets). We also used fMRI data to successfully train an “associative recall” classifier for use in this task: this classifier revealed greater memory reactivation during trials in which associative memories were cued while participants tracked one, rather than five MOT targets. However, the classifier was insensitive to task difficulty when recall was not taking place, suggesting it had indeed tracked memory reactivation rather than task difficulty per se. Consistent with the classifier findings, participants’ introspective ratings of visualization vividness were modulated by MOT task difficulty. In addition, we observed reduced classifier output and slowing of responses in a post-reactivation memory test, consistent with the hypothesis that partial reactivation, induced by MOT, weakened memory. These results serve as a “proof of concept” that MOT can be used to parametrically modulate memory retrieval – a property that may prove useful in future investigation of partial retrieval effects, e.g., in closed-loop experiments. PMID:28387587
Fawcett, Jonathan M; Lawrence, Michael A; Taylor, Tracy L
2016-01-01
We investigated whether intentional forgetting impacts only the likelihood of later retrieval from long-term memory or whether it also impacts the fidelity of those representations that are successfully retrieved. We accomplished this by combining an item-method directed forgetting task with a testing procedure and modeling approach inspired by the delayed-estimation paradigm used in the study of visual short-term memory (STM). Abstract or concrete colored images were each followed by a remember (R) or forget (F) instruction and sometimes by a visual probe requiring a speeded detection response (E1-E3). Memory was tested using an old-new (E1-E2) or remember-know-no (E3) recognition task followed by a continuous color judgment task (E2-E3); a final experiment included only the color judgment task (E4). Replicating the existing literature, more "old" or "remember" responses were made to R than F items and RTs to postinstruction visual probes were longer following F than R instructions. Color judgments were more accurate for successfully recognized or recollected R than F items (E2-E3); a mixture model confirmed a decrease to both the probability of retrieving the F items as well as the fidelity of the representation of those F items that were retrieved (E4). We conclude that intentional forgetting is an effortful process that not only reduces the likelihood of successfully encoding an item for later retrieval, but also produces an impoverished memory trace even when those items are retrieved; these findings draw a parallel between the control of memory representations within working and long-term memory. (c) 2015 APA, all rights reserved).
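The mixture-model logic referenced above can be illustrated with a small sketch in the spirit of delayed-estimation analyses: a von Mises "memory" component plus a uniform "guessing" component fit to continuous response errors. The data below are simulated and this is not the authors' analysis code.

```python
# Hedged sketch of a guessing-plus-memory mixture model for continuous color
# reports, fit by maximum likelihood to simulated response errors.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

rng = np.random.default_rng(2)
# Simulated errors (radians): 70% remembered (concentrated near 0),
# 30% random guesses drawn uniformly on the circle.
errors = np.concatenate([vonmises.rvs(8.0, size=70, random_state=3),
                         rng.uniform(-np.pi, np.pi, size=30)])

def neg_log_lik(params):
    guess_rate, log_kappa = params
    guess_rate = np.clip(guess_rate, 1e-6, 1 - 1e-6)
    kappa = np.exp(log_kappa)
    density = (1 - guess_rate) * vonmises.pdf(errors, kappa) + guess_rate / (2 * np.pi)
    return -np.sum(np.log(density))

fit = minimize(neg_log_lik, x0=[0.5, np.log(5.0)], method="Nelder-Mead")
guess_rate, kappa = fit.x[0], np.exp(fit.x[1])
print(f"guess rate (1 - P(retrieval)) ~ {guess_rate:.2f}, fidelity kappa ~ {kappa:.1f}")
```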
Saddiki, Najat; Hennion, Sophie; Viard, Romain; Ramdane, Nassima; Lopes, Renaud; Baroncini, Marc; Szurhaj, William; Reyns, Nicolas; Pruvo, Jean Pierre; Delmaire, Christine
2018-05-01
Medial temporal lobe structures, and more specifically the hippocampus, play a decisive role in episodic memory. Most memory functional magnetic resonance imaging (fMRI) studies evaluate the encoding phase, with the retrieval phase being performed outside the MRI scanner. We aimed to determine the ability to reveal greater hippocampal fMRI activations during the retrieval phase. Thirty-five epileptic patients underwent a two-step memory fMRI. During the encoding phase, subjects were requested to identify the feminine or masculine gender of the faces and words presented, in order to encourage stimulus encoding. One hour later, during the retrieval phase, subjects had to recognize the words and faces. We used an event-related design to identify hippocampal activations. There was no significant difference between patients with left temporal lobe epilepsy, patients with right temporal lobe epilepsy and patients with extratemporal lobe epilepsy on the verbal and visual learning tasks. For words, patients demonstrated significantly more bilateral hippocampal activation for the retrieval task than the encoding task, and when the two tasks were combined than during encoding alone. A significant difference was also seen between face encoding alone and face retrieval alone. This study demonstrates the essential contribution of the retrieval task during an fMRI memory task; the number of patients with hippocampal activations was greatest when the two tasks were taken into account. Copyright © 2018. Published by Elsevier Masson SAS.
Content-based management service for medical videos.
Mendi, Engin; Bayrak, Coskun; Cecen, Songul; Ermisoglu, Emre
2013-01-01
Development of health information technology has had a dramatic impact on improving the efficiency and quality of medical care. Developing interoperable health information systems for healthcare providers has the potential to improve the quality and equitability of patient-centered healthcare. In this article, we describe an automated content-based medical video analysis and management service that provides convenience and ease in accessing the relevant medical video content without sequential scanning. The system facilitates effective temporal video segmentation and content-based visual information retrieval that enable a more reliable understanding of medical video content. The system is implemented as a Web- and mobile-based service and has the potential to offer a knowledge-sharing platform for the purpose of efficient medical video content access.
Mizuno, Kei; Tanaka, Masaaki; Fukuda, Sanae; Yamano, Emi; Shigihara, Yoshihito; Imai-Matsumura, Kyoko; Watanabe, Yasuyoshi
2011-06-14
Fatigue is a common complaint among elementary and junior high school students, and is known to be associated with reduced academic performance. Recently, we demonstrated that fatigue was correlated with decreased cognitive function in these students. However, no studies have identified cognitive predictors of fatigue. Therefore, we attempted to determine independent cognitive predictors of fatigue in these students. We performed a prospective cohort study. One hundred and forty-two elementary and junior high school students without fatigue participated. They completed a variety of paper-and-pencil tests, including list learning and list recall tests, kana pick-out test, semantic fluency test, figure copying test, digit span forward test, and symbol digit modalities test. The participants also completed computerized cognitive tests (tasks A to E on the modified advanced trail making test). These cognitive tests were used to evaluate motor- and information-processing speed, immediate and delayed memory function, auditory and visual attention, divided and switching attention, retrieval of learned material, and spatial construction. One year after the tests, a questionnaire about fatigue (Japanese version of the Chalder Fatigue Scale) was administered to all the participants. After the follow-up period, we confirmed 40 cases of fatigue among 118 students. In multivariate logistic regression analyses adjusted for grades and gender, poorer performance on visual information-processing speed and attention tasks was associated with increased risk of fatigue. Reduced visual information-processing speed and poor attention are independent predictors of fatigue in elementary and junior high school students. © 2011 Mizuno et al; licensee BioMed Central Ltd.
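The kind of adjusted multivariate logistic regression described above could be sketched as follows; the data are simulated and the variable names are assumptions, with statsmodels used purely for illustration.

```python
# Illustrative sketch only (simulated data): logistic regression of fatigue
# status on cognitive scores, adjusted for grade and gender.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 118
df = pd.DataFrame({
    "processing_speed": rng.normal(size=n),   # assumed standardized scores
    "attention": rng.normal(size=n),
    "grade": rng.integers(4, 10, size=n),
    "gender": rng.integers(0, 2, size=n),
})
# In this simulation, lower scores carry a higher risk of fatigue.
logit_p = -0.5 - 0.8 * df["processing_speed"] - 0.6 * df["attention"]
df["fatigue"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("fatigue ~ processing_speed + attention + C(grade) + C(gender)",
                  data=df).fit(disp=False)
print(np.exp(model.params))   # odds ratios per predictor
```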
Topological Aspects of Information Retrieval.
ERIC Educational Resources Information Center
Egghe, Leo; Rousseau, Ronald
1998-01-01
Discusses topological aspects of theoretical information retrieval, including retrieval topology; similarity topology; pseudo-metric topology; document spaces as topological spaces; Boolean information retrieval as a subsystem of any topological system; and proofs of theorems. (LRW)
Measurement of tag confidence in user generated contents retrieval
NASA Astrophysics Data System (ADS)
Lee, Sihyoung; Min, Hyun-Seok; Lee, Young Bok; Ro, Yong Man
2009-01-01
As online image sharing services are becoming popular, the importance of correctly annotated tags is being emphasized for precise search and retrieval. Tags created by user along with user-generated contents (UGC) are often ambiguous due to the fact that some tags are highly subjective and visually unrelated to the image. They cause unwanted results to users when image search engines rely on tags. In this paper, we propose a method of measuring tag confidence so that one can differentiate confidence tags from noisy tags. The proposed tag confidence is measured from visual semantics of the image. To verify the usefulness of the proposed method, experiments were performed with UGC database from social network sites. Experimental results showed that the image retrieval performance with confidence tags was increased.
Information Retrieval in Biomedical Research: From Articles to Datasets
ERIC Educational Resources Information Center
Wei, Wei
2017-01-01
Information retrieval techniques have been applied to biomedical research for a variety of purposes, such as textual document retrieval and molecular data retrieval. As biomedical research evolves over time, information retrieval is also constantly facing new challenges, including the growing number of available data, the emerging new data types,…
NASA Astrophysics Data System (ADS)
Bell, A.; Tang, G.; Yang, P.; Wu, D.
2017-12-01
Due to their high spatial and temporal coverage, cirrus clouds have a profound role in regulating the Earth's energy budget. Variability of their radiative, geometric, and microphysical properties can pose significant uncertainties in global climate model simulations if not adequately constrained. Thus, the development of retrieval methodologies able to accurately retrieve ice cloud properties and present associated uncertainties is essential. The effectiveness of cirrus cloud retrievals relies on accurate a priori understanding of ice radiative properties, as well as the current state of the atmosphere. Recent studies have implemented information content theory analyses prior to retrievals to quantify the amount of information that should be expected on parameters to be retrieved, as well as the relative contribution of information provided by certain measurement channels. Through this analysis, retrieval algorithms can be designed in a way that maximizes the information in the measurements, and therefore ensures enough information is present to retrieve ice cloud properties. In this study, we present such an information content analysis to quantify the amount of information to be expected in retrievals of cirrus ice water path and particle effective diameter using sub-millimeter and thermal infrared radiometry. Preliminary results show these bands to be sensitive to changes in ice water path and effective diameter, and thus lend confidence in their ability to simultaneously retrieve these parameters. Further quantification of the sensitivity and the information provided by these bands can then be used to design an optimal retrieval scheme. While this information content analysis is employed on a theoretical retrieval using simulated radiance measurements, the methodology could in general be applied to any instrument or retrieval approach.
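For concreteness, a minimal optimal-estimation-style information content calculation (degrees of freedom for signal and Shannon information for a two-parameter state) might look like the sketch below; the Jacobian and covariances are placeholder values, not numbers from the study.

```python
# Hedged sketch of a standard information content calculation for a
# two-parameter retrieval (e.g., ice water path and effective diameter).
import numpy as np

K = np.array([[0.8, 0.2],     # assumed Jacobian: d(radiance)/d(state)
              [0.3, 0.9],     # rows: channels, columns: [IWP, D_eff]
              [0.5, 0.4]])
S_a = np.diag([1.0, 1.0])            # prior covariance of the state
S_e = np.diag([0.05, 0.05, 0.05])    # measurement noise covariance

S_e_inv = np.linalg.inv(S_e)
S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))  # posterior covariance
A = S_hat @ K.T @ S_e_inv @ K                                   # averaging kernel

dfs = np.trace(A)                                                # degrees of freedom for signal
shannon = 0.5 * np.log(np.linalg.det(S_a) / np.linalg.det(S_hat))  # in nats
print(f"DFS = {dfs:.2f}, Shannon information = {shannon:.2f} nats")
```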
Integrating In Silico Resources to Map a Signaling Network
Liu, Hanqing; Beck, Tim N.; Golemis, Erica A.; Serebriiskii, Ilya G.
2013-01-01
The abundance of publicly available life science databases offers a wealth of information that can support interpretation of experimentally derived data and greatly enhance hypothesis generation. Protein interaction and functional networks are not simply new renditions of existing data: they provide the opportunity to gain insights into the specific physical and functional role a protein plays as part of the biological system. In this chapter, we describe different in silico tools that can quickly and conveniently retrieve data from existing data repositories and discuss how the available tools are best utilized for different purposes. While emphasizing protein-protein interaction databases (e.g., BioGrid and IntAct), we also introduce metasearch platforms such as STRING and GeneMANIA, pathway databases (e.g., BioCarta and Pathway Commons), text mining approaches (e.g., PubMed and Chilibot), and resources for drug-protein interactions, genetic information for model organisms, and gene expression information based on microarray data mining. Furthermore, we provide a simple step-by-step protocol for building customized protein-protein interaction networks in Cytoscape, a powerful network assembly and visualization program, integrating data retrieved from these various databases. As we illustrate, generation of composite interaction networks enables investigators to extract significantly more information about a given biological system than utilization of a single database or sole reliance on primary literature. PMID:24233784
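As a small hedged illustration of the composite-network idea (not the chapter's protocol), the sketch below merges interaction records from two hypothetical database exports into a single networkx graph and writes it out in a format Cytoscape can import; the file names and column layout are assumptions.

```python
# Illustrative sketch: merge interaction tables into one graph and export it.
import csv
import networkx as nx

G = nx.Graph()
# Each export is assumed to be a tab-separated file: protein_a, protein_b, source.
for path in ["biogrid_export.tsv", "intact_export.tsv"]:
    try:
        with open(path, newline="") as handle:
            for row in csv.reader(handle, delimiter="\t"):
                a, b, source = row[0], row[1], row[2]
                if G.has_edge(a, b):
                    G[a][b]["sources"] += ";" + source   # merge evidence from both databases
                else:
                    G.add_edge(a, b, sources=source)
    except FileNotFoundError:
        continue   # skip exports that are not present

print(G.number_of_nodes(), "proteins,", G.number_of_edges(), "interactions")
# GraphML keeps node/edge attributes and can be opened directly in Cytoscape.
nx.write_graphml(G, "composite_network.graphml")
```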
Intelligent distributed medical image management
NASA Astrophysics Data System (ADS)
Garcia, Hong-Mei C.; Yun, David Y.
1995-05-01
The rapid advancements in high performance global communication have accelerated cooperative image-based medical services to a new frontier. Traditional image-based medical services such as radiology and diagnostic consultation can now fully utilize multimedia technologies in order to provide novel services, including remote cooperative medical triage, distributed virtual simulation of operations, as well as cross-country collaborative medical research and training. Fast (efficient) and easy (flexible) retrieval of relevant images remains a critical requirement for the provision of remote medical services. This paper describes the database system requirements, identifies technological building blocks for meeting the requirements, and presents a system architecture for our target image database system, MISSION-DBS, which has been designed to fulfill the goals of Project MISSION (medical imaging support via satellite integrated optical network) -- an experimental high performance gigabit satellite communication network with access to remote supercomputing power, medical image databases, and 3D visualization capabilities in addition to medical expertise anywhere and anytime around the country. The MISSION-DBS design employs a synergistic fusion of techniques in distributed databases (DDB) and artificial intelligence (AI) for storing, migrating, accessing, and exploring images. The efficient storage and retrieval of voluminous image information is achieved by integrating DDB modeling and AI techniques for image processing while the flexible retrieval mechanisms are accomplished by combining attribute-based and content-based retrievals.
Lloyd-Jones, Toby J; Nakabayashi, Kazuyo
2014-01-01
Using a novel paradigm to engage the long-term mappings between object names and the prototypical colors for objects, we investigated the retrieval of object-color knowledge as indexed by long-term priming (the benefit in performance from a prior encounter with the same or a similar stimulus), a process about which little is known. We examined priming from object naming on a lexical-semantic matching task. In the matching task participants encountered a visually presented object name (Experiment 1) or object shape (Experiment 2) paired with either a color patch or color name. The pairings could either match, whereby both were consistent with a familiar object (e.g., strawberry and red), or mismatch (strawberry and blue). We used the matching task to probe knowledge about familiar objects and their colors pre-activated during object naming. In particular, we examined whether the retrieval of object-color information was modality-specific and whether this influenced priming. Priming varied with the nature of the retrieval process: object-color priming arose for object names but not object shapes, and beneficial effects of priming were observed for color patches, whereas inhibitory priming arose with color names. These findings have implications for understanding how object knowledge is retrieved from memory and modified by learning.
15 CFR 950.9 - Computerized Environmental Data and Information Retrieval Service.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Information Retrieval Service. 950.9 Section 950.9 Commerce and Foreign Trade Regulations Relating to Commerce... Computerized Environmental Data and Information Retrieval Service. The Environmental Data Index (ENDEX... computerized, information retrieval service provides a parallel subject-author-abstract referral service. A...
15 CFR 950.9 - Computerized Environmental Data and Information Retrieval Service.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Information Retrieval Service. 950.9 Section 950.9 Commerce and Foreign Trade Regulations Relating to Commerce... Computerized Environmental Data and Information Retrieval Service. The Environmental Data Index (ENDEX... computerized, information retrieval service provides a parallel subject-author-abstract referral service. A...
15 CFR 950.9 - Computerized Environmental Data and Information Retrieval Service.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Information Retrieval Service. 950.9 Section 950.9 Commerce and Foreign Trade Regulations Relating to Commerce... Computerized Environmental Data and Information Retrieval Service. The Environmental Data Index (ENDEX... computerized, information retrieval service provides a parallel subject-author-abstract referral service. A...
15 CFR 950.9 - Computerized Environmental Data and Information Retrieval Service.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Information Retrieval Service. 950.9 Section 950.9 Commerce and Foreign Trade Regulations Relating to Commerce... Computerized Environmental Data and Information Retrieval Service. The Environmental Data Index (ENDEX... computerized, information retrieval service provides a parallel subject-author-abstract referral service. A...
15 CFR 950.9 - Computerized Environmental Data and Information Retrieval Service.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Information Retrieval Service. 950.9 Section 950.9 Commerce and Foreign Trade Regulations Relating to Commerce... Computerized Environmental Data and Information Retrieval Service. The Environmental Data Index (ENDEX... computerized, information retrieval service provides a parallel subject-author-abstract referral service. A...
Acquisition of a visual discrimination and reversal learning task by Labrador retrievers.
Lazarowski, Lucia; Foster, Melanie L; Gruen, Margaret E; Sherman, Barbara L; Case, Beth C; Fish, Richard E; Milgram, Norton W; Dorman, David C
2014-05-01
Optimal cognitive ability is likely important for military working dogs (MWD) trained to detect explosives. An assessment of a dog's ability to rapidly learn discriminations might be useful in the MWD selection process. In this study, visual discrimination and reversal tasks were used to assess cognitive performance in Labrador retrievers selected for an explosives detection program using a modified version of the Toronto General Testing Apparatus (TGTA), a system developed for assessing performance in a battery of neuropsychological tests in canines. The results of the current study revealed that, as previously found with beagles tested using the TGTA, Labrador retrievers (N = 16) readily acquired both tasks and learned the discrimination task significantly faster than the reversal task. The present study confirmed that the modified TGTA system is suitable for cognitive evaluations in Labrador retriever MWDs and can be used to further explore effects of sex, phenotype, age, and other factors in relation to canine cognition and learning, and may provide an additional screening tool for MWD selection.
Retrieval and phenomenology of autobiographical memories in blind individuals.
Tekcan, Ali Í; Yılmaz, Engin; Kızılöz, Burcu Kaya; Karadöller, Dilay Z; Mutafoğlu, Merve; Erciyes, Aslı Aktan
2015-01-01
Although visual imagery is argued to be an essential component of autobiographical memory, there have been surprisingly few studies on autobiographical memory processes in blind individuals, who have had no or limited visual input. The purpose of the present study was to investigate how blindness affects retrieval and phenomenology of autobiographical memories. We asked 48 congenital/early blind and 48 sighted participants to recall autobiographical memories in response to six cue words, and to fill out the Autobiographical Memory Questionnaire measuring a number of variables including imagery, belief and recollective experience associated with each memory. Blind participants retrieved fewer memories and reported higher auditory imagery at retrieval than sighted participants. Moreover, within the blind group, participants with total blindness reported higher auditory imagery than those with some light perception. Blind participants also assigned higher importance, belief and recollection ratings to their memories than sighted participants. Importantly, these group differences remained the same for recent as well as childhood memories.
Retrieval analysis of ceramic-coated metal-on-polyethylene total hip replacements.
Khatkar, Harman; Hothi, Harry; de Villiers, Danielle; Lausmann, Christian; Kendoff, Daniel; Gehrke, Thorsten; Skinner, John; Hart, Alister
2017-06-01
Ceramic coatings have been used in metal-on-polyethylene (MOP) hips to reduce the risk of wear and also infection; the clinical efficacy of this remains unclear. This retrieval study sought to better understand the performance of coated bearing surfaces. Forty-three coated MOP components were analysed post-retrieval for evidence of coating loss and gross polyethylene wear. Coating loss was graded using a visual semi-quantitative protocol. Evidence of gross polyethylene wear was determined by radiographic analysis and visual inspection of the retrieved implants. All components with gross polyethylene wear (n = 10) were revised due to a malfunctioning acetabular component; 35 % (n = 15) of implants exhibited visible coating loss and the incidence of polyethylene wear in samples with coating loss was 54 %, significantly (p = 0.02) elevated compared to samples with intact coatings (14 %). In this study we found evidence of coating loss on metal femoral heads which was associated with increased wear of the corresponding polyethylene acetabular cups.
Interactive radiographic image retrieval system.
Kundu, Malay Kumar; Chowdhury, Manish; Das, Sudeb
2017-02-01
Content based medical image retrieval (CBMIR) systems enable fast diagnosis through quantitative assessment of the visual information and have been an active research topic over the past few decades. Most state-of-the-art CBMIR systems suffer from various problems: they are computationally expensive owing to the use of high-dimensional feature vectors and complex classifier/clustering schemes, and they are unable to properly handle the "semantic gap" and the high intra-class versus inter-class variability of medical image databases (such as radiographic image databases). This creates a pressing demand for a highly effective and computationally efficient retrieval system. We propose a novel interactive two-stage CBMIR system for a diverse collection of medical radiographic images. Initially, Pulse Coupled Neural Network based shape features are used to find the most probable (similar) image classes using a novel "similarity positional score" mechanism. This is followed by retrieval using Non-subsampled Contourlet Transform based texture features, considering only the images of the pre-identified classes. Maximal information compression index is used for unsupervised feature selection to achieve better results. To reduce the semantic gap problem, the proposed system uses a novel fuzzy index based relevance feedback mechanism that incorporates the subjectivity of human perception in an analytic manner. Extensive experiments were carried out to evaluate the effectiveness of the proposed CBMIR system on a subset of the Image Retrieval in Medical Applications (IRMA)-2009 database consisting of 10,902 labeled radiographic images of 57 different modalities. We obtained an overall average precision of around 98% after only 2-3 iterations of the relevance feedback mechanism. We assessed the results by comparisons with some of the state-of-the-art CBMIR systems for radiographic images. Unlike most existing CBMIR systems, in the proposed two-stage hierarchical framework, the main emphasis is on constructing an efficient and compact feature vector representation, reducing the search space and handling the "semantic gap" problem effectively, without compromising retrieval performance. Experimental results and comparisons show that the proposed system performs efficiently in the radiographic medical image retrieval field. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
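To make the two-stage idea concrete, here is a deliberately simplified sketch (random placeholder features and plain Euclidean distances, not the paper's PCNN shape features, contourlet texture features, or fuzzy relevance feedback): stage one narrows the search to the most probable classes, and stage two ranks only the images from those classes.

```python
# Simplified two-stage retrieval sketch: class pre-filtering on coarse
# features, then within-class ranking on fine features (placeholder data).
import numpy as np

rng = np.random.default_rng(5)
n_images, n_classes = 1000, 20
labels = rng.integers(0, n_classes, size=n_images)
shape_feats = rng.normal(size=(n_images, 16))     # stage-1 (coarse) features
texture_feats = rng.normal(size=(n_images, 64))   # stage-2 (fine) features

query_shape, query_texture = rng.normal(size=16), rng.normal(size=64)

# Stage 1: score classes by distance between the query and each class centroid,
# keeping the few most probable classes.
centroids = np.array([shape_feats[labels == c].mean(axis=0) for c in range(n_classes)])
top_classes = np.argsort(np.linalg.norm(centroids - query_shape, axis=1))[:3]

# Stage 2: rank only images of the pre-selected classes by texture distance.
candidates = np.flatnonzero(np.isin(labels, top_classes))
dists = np.linalg.norm(texture_feats[candidates] - query_texture, axis=1)
ranked = candidates[np.argsort(dists)]
print("top 10 retrieved image ids:", ranked[:10])
```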
A Conceptual Model of the Cognitive Processing of Environmental Distance Information
NASA Astrophysics Data System (ADS)
Montello, Daniel R.
I review theories and research on the cognitive processing of environmental distance information by humans, particularly that acquired via direct experience in the environment. The cognitive processes I consider for acquiring and thinking about environmental distance information include working-memory, nonmediated, hybrid, and simple-retrieval processes. Based on my review of the research literature, and additional considerations about the sources of distance information and the situations in which it is used, I propose an integrative conceptual model to explain the cognitive processing of distance information that takes account of the plurality of possible processes and information sources, and describes conditions under which particular processes and sources are likely to operate. The mechanism of summing vista distances is identified as widely important in situations with good visual access to the environment. Heuristics based on time, effort, or other information are likely to play their most important role when sensory access is restricted.
A new metaphor for projection-based visual analysis and data exploration
NASA Astrophysics Data System (ADS)
Schreck, Tobias; Panse, Christian
2007-01-01
In many important application domains such as Business and Finance, Process Monitoring, and Security, huge and quickly increasing volumes of complex data are collected. Strong efforts are underway to develop automatic and interactive analysis tools for mining useful information from these data repositories. Many data analysis algorithms require an appropriate definition of similarity (or distance) between data instances to allow meaningful clustering, classification, and retrieval, among other analysis tasks. Projection-based data visualization is highly interesting (a) for visual discrimination analysis of a data set within a given similarity definition, and (b) for comparative analysis of similarity characteristics of a given data set represented by different similarity definitions. We introduce an intuitive and effective novel approach for projection-based similarity visualization for interactive discrimination analysis, data exploration, and visual evaluation of metric space effectiveness. The approach is based on the convex hull metaphor for visually aggregating sets of points in projected space, and it can be used with a variety of different projection techniques. The effectiveness of the approach is demonstrated by application on two well-known data sets. Statistical evidence supporting the validity of the hull metaphor is presented. We advocate the hull-based approach over the standard symbol-based approach to projection visualization, as it allows a more effective perception of similarity relationships and class distribution characteristics.
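A minimal sketch of the hull metaphor, using synthetic labeled data and a PCA projection standing in for any projection technique, might look like this:

```python
# Minimal sketch (assumed data): project a labeled data set to 2-D and draw
# one convex hull per class instead of individual point symbols.
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
# Three synthetic classes in a 10-D feature space.
X = np.vstack([rng.normal(loc=c, size=(40, 10)) for c in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 40)

proj = PCA(n_components=2).fit_transform(X)   # any projection technique works here

for c, color in zip((0, 1, 2), ("tab:blue", "tab:orange", "tab:green")):
    pts = proj[y == c]
    hull = ConvexHull(pts)
    polygon = pts[hull.vertices]              # hull outline in drawing order
    plt.fill(polygon[:, 0], polygon[:, 1], color=color, alpha=0.3)
    plt.plot(polygon[:, 0], polygon[:, 1], color=color)
plt.title("Class hulls in projected space")
plt.savefig("hull_projection.png")
```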
Functional anatomic studies of memory retrieval for auditory words and visual pictures.
Buckner, R L; Raichle, M E; Miezin, F M; Petersen, S E
1996-10-01
Functional neuroimaging with positron emission tomography was used to study brain areas activated during memory retrieval. Subjects (n = 15) recalled items from a recent study episode (episodic memory) during two paired-associate recall tasks. The tasks differed in that PICTURE RECALL required pictorial retrieval, whereas AUDITORY WORD RECALL required word retrieval. Word REPETITION and REST served as two reference tasks. Comparing recall with repetition revealed the following observations. (1) Right anterior prefrontal activation (similar to that seen in several previous experiments), in addition to bilateral frontal-opercular and anterior cingulate activations. (2) An anterior subdivision of medial frontal cortex [pre-supplementary motor area (SMA)] was activated, which could be dissociated from a more posterior area (SMA proper). (3) Parietal areas were activated, including a posterior medial area near precuneus, that could be dissociated from an anterior parietal area that was deactivated. (4) Multiple medial and lateral cerebellar areas were activated. Comparing recall with rest revealed similar activations, except right prefrontal activation was minimal and activations related to motor and auditory demands became apparent (e.g., bilateral motor and temporal cortex). Directly comparing picture recall with auditory word recall revealed few notable activations. Taken together, these findings suggest a pathway that is commonly used during the episodic retrieval of picture and word stimuli under these conditions. Many areas in this pathway overlap with areas previously activated by a different set of retrieval tasks using stem-cued recall, demonstrating their generality. Examination of activations within individual subjects in relation to structural magnetic resonance images provided anatomic information about the location of these activations. Such data, when combined with the dissociations between functional areas, provide an increasingly detailed picture of the brain pathways involved in episodic retrieval tasks.
45 CFR 205.35 - Mechanized claims processing and information retrieval systems; definitions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... claims processing and information retrieval systems; definitions. Section 205.35 through 205.38 contain...: (a) A mechanized claims processing and information retrieval system, hereafter referred to as an automated application processing and information retrieval system (APIRS), or the system, means a system of...
45 CFR 205.35 - Mechanized claims processing and information retrieval systems; definitions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... claims processing and information retrieval systems; definitions. Section 205.35 through 205.38 contain...: (a) A mechanized claims processing and information retrieval system, hereafter referred to as an automated application processing and information retrieval system (APIRS), or the system, means a system of...
Graph-Based Interactive Bibliographic Information Retrieval Systems
ERIC Educational Resources Information Center
Zhu, Yongjun
2017-01-01
In the big data era, we have witnessed the explosion of scholarly literature. This explosion has imposed challenges to the retrieval of bibliographic information. Retrieval of intended bibliographic information has become challenging due to the overwhelming search results returned by bibliographic information retrieval systems for given input…
45 CFR 205.35 - Mechanized claims processing and information retrieval systems; definitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... claims processing and information retrieval systems; definitions. Section 205.35 through 205.38 contain...: (a) A mechanized claims processing and information retrieval system, hereafter referred to as an automated application processing and information retrieval system (APIRS), or the system, means a system of...
Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.
Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A
2015-11-01
The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (an 8:30 minute delay) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning.
A computer system for the storage and retrieval of gravity data, Kingdom of Saudi Arabia
Godson, Richard H.; Andreasen, Gordon H.
1974-01-01
A computer system has been developed for the systematic storage and retrieval of gravity data. All pertinent facts relating to gravity station measurements and computed Bouguer values may be retrieved either by project name or by geographical coordinates. Features of the system include visual display in the form of printer listings of gravity data and printer plots of station locations. The retrieved data format interfaces with the format of GEOPAC, a system of computer programs designed for the analysis of geophysical data.
Clustering document fragments using background color and texture information
NASA Astrophysics Data System (ADS)
Chanda, Sukalpa; Franke, Katrin; Pal, Umapada
2012-01-01
Forensic analysis of questioned documents can sometimes be extremely data intensive. A forensic expert might need to analyze a heap of document fragments, and in such cases, to ensure reliability, he or she should focus only on the relevant evidence hidden in those fragments. Retrieving relevant documents requires finding similar document fragments. One way to obtain such similar documents is to use a document fragment's physical characteristics, such as color and texture. In this article we propose an automatic scheme to retrieve similar document fragments based on the visual appearance of the document paper and its texture. Multispectral color characteristics are captured using biologically inspired color differentiation techniques, by projecting document color characteristics into the Lab color space. Gabor filter-based texture analysis is used to identify document texture. Document fragments from the same source are expected to have similar color and texture. For clustering similar document fragments in our test dataset we use a 5×5 Self-Organizing Map (SOM), where the document color and texture information are used as features. We obtained an encouraging accuracy of 97.17% on 1063 test images.
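A hedged sketch of that pipeline, using random placeholder patches, scikit-image for the Lab conversion and Gabor responses, and the third-party minisom package for a 5×5 SOM (all parameter values are assumptions), is shown below.

```python
# Hedged sketch: per-fragment Lab color and Gabor texture features clustered
# with a 5x5 self-organizing map (placeholder images, assumed parameters).
import numpy as np
from skimage.color import rgb2lab
from skimage.filters import gabor
from minisom import MiniSom

rng = np.random.default_rng(7)
fragments = [rng.random((64, 64, 3)) for _ in range(50)]   # placeholder RGB patches

def fragment_features(img):
    lab = rgb2lab(img)
    color = lab.reshape(-1, 3).mean(axis=0)                # mean L, a, b of the paper
    gray = img.mean(axis=2)
    texture = []
    for freq in (0.1, 0.2, 0.4):                           # Gabor responses at 3 frequencies
        real, imag = gabor(gray, frequency=freq)
        texture.append(np.sqrt(real ** 2 + imag ** 2).mean())
    return np.concatenate([color, texture])

features = np.array([fragment_features(f) for f in fragments])
features = (features - features.mean(axis=0)) / features.std(axis=0)

som = MiniSom(5, 5, features.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(features, 500)
clusters = [som.winner(f) for f in features]               # (row, col) of winning node
print(clusters[:10])
```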
Knowing what, where, and when: event comprehension in language processing.
Kukona, Anuenue; Altmann, Gerry T M; Kamide, Yuki
2014-10-01
We investigated the retrieval of location information, and the deployment of attention to these locations, following (described) event-related location changes. In two visual world experiments, listeners viewed arrays with containers like a bowl, jar, pan, and jug, while hearing sentences like "The boy will pour the sweetcorn from the bowl into the jar, and he will pour the gravy from the pan into the jug. And then, he will taste the sweetcorn". At the discourse-final "sweetcorn", listeners fixated context-relevant "Target" containers most (jar). Crucially, we also observed two forms of competition: listeners fixated containers that were not directly referred to but associated with "sweetcorn" (bowl), and containers that played the same role as Targets (goals of moving events; jug), more than distractors (pan). These results suggest that event-related location changes are encoded across representations that compete for comprehenders' attention, such that listeners retrieve, and fixate, locations that are not referred to in the unfolding language, but related to them via object or role information. Copyright © 2014 Elsevier B.V. All rights reserved.
Enhancing biomedical text summarization using semantic relation extraction.
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
2011-01-01
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate the text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization.
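Stage 3 (sentence selection) could be sketched roughly as follows, assuming the relations have already been extracted (SemRep itself is an external tool); the relations and sentences below are toy placeholders, and TF-IDF cosine similarity stands in for the paper's retrieval method.

```python
# Hedged sketch of relation-guided sentence selection for a summary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

relations = ["H1N1 causes respiratory infection",
             "oseltamivir treats H1N1"]
sentences = ["H1N1 influenza frequently causes severe respiratory infection in children.",
             "The stock market fell sharply on Tuesday.",
             "Oseltamivir remains a first-line treatment for H1N1 infection."]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(relations + sentences)
rel_vecs, sent_vecs = matrix[:len(relations)], matrix[len(relations):]

# Score each sentence by its best match against any relevant relation.
scores = cosine_similarity(sent_vecs, rel_vecs).max(axis=1)
summary = [s for _, s in sorted(zip(scores, sentences), reverse=True)[:2]]
print(summary)
```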
Learning and Recognition of a Non-conscious Sequence of Events in Human Primary Visual Cortex.
Rosenthal, Clive R; Andrews, Samantha K; Antoniades, Chrystalina A; Kennard, Christopher; Soto, David
2016-03-21
Human primary visual cortex (V1) has long been associated with learning simple low-level visual discriminations [1] and is classically considered outside of neural systems that support high-level cognitive behavior in contexts that differ from the original conditions of learning, such as recognition memory [2, 3]. Here, we used a novel fMRI-based dichoptic masking protocol-designed to induce activity in V1, without modulation from visual awareness-to test whether human V1 is implicated in human observers rapidly learning and then later (15-20 min) recognizing a non-conscious and complex (second-order) visuospatial sequence. Learning was associated with a change in V1 activity, as part of a temporo-occipital and basal ganglia network, which is at variance with the cortico-cerebellar network identified in prior studies of "implicit" sequence learning that involved motor responses and visible stimuli (e.g., [4]). Recognition memory was associated with V1 activity, as part of a temporo-occipital network involving the hippocampus, under conditions that were not imputable to mechanisms associated with conscious retrieval. Notably, the V1 responses during learning and recognition separately predicted non-conscious recognition memory, and functional coupling between V1 and the hippocampus was enhanced for old retrieval cues. The results provide a basis for novel hypotheses about the signals that can drive recognition memory, because these data (1) identify human V1 with a memory network that can code complex associative serial visuospatial information and support later non-conscious recognition memory-guided behavior (cf. [5]) and (2) align with mouse models of experience-dependent V1 plasticity in learning and memory [6]. Copyright © 2016 Elsevier Ltd. All rights reserved.
Semantic congruence affects hippocampal response to repetition of visual associations.
McAndrews, Mary Pat; Girard, Todd A; Wilkins, Leanne K; McCormick, Cornelia
2016-09-01
Recent research has shown complementary engagement of the hippocampus and medial prefrontal cortex (mPFC) in encoding and retrieving associations based on pre-existing or experimentally-induced schemas, such that the latter supports schema-congruent information whereas the former is more engaged for incongruent or novel associations. Here, we attempted to explore some of the boundary conditions on the relative involvement of those structures in short-term memory for visual associations. The current literature is based primarily on intentional evaluation of schema-target congruence and on study-test paradigms with relatively long delays between learning and retrieval. We used a continuous recognition paradigm to investigate hippocampal and mPFC activation to first and second presentations of scene-object pairs as a function of semantic congruence between the elements (e.g., beach-seashell versus schoolyard-lamp). All items were identical at first and second presentation, and the context scene, which was presented 500 ms prior to the appearance of the target object, was incidental to the task, which required a recognition response to the central target only. Very short lags (2-8 intervening stimuli) occurred between presentations. Encoding the targets with congruent contexts was associated with increased activation in visual cortical regions at initial presentation and faster response times at repetition, but we did not find enhanced activation in mPFC relative to incongruent stimuli at either presentation. We did observe enhanced activation in the right anterior hippocampus, as well as in visual and lateral temporal and frontal cortical regions, for the repetition of incongruent scene-object pairs. This pattern demonstrates rapid and incidental effects of schema processing on hippocampal, but not mPFC, engagement during continuous recognition. Copyright © 2016 Elsevier Ltd. All rights reserved.
Term Relevance Weights in On-Line Information Retrieval
ERIC Educational Resources Information Center
Salton, G.; Waldstein, R. K.
1978-01-01
Term relevance weighting systems in interactive information retrieval are reviewed. An experiment in which information retrieval users ranked query terms in decreasing order of presumed importance prior to actual search and retrieval is described. (Author/KP)
Viewpoint Dependent Imaging: An Interactive Stereoscopic Display
NASA Astrophysics Data System (ADS)
Fisher, Scott
1983-04-01
Design and implementation of a viewpoint-dependent imaging system is described. The resultant display is an interactive, lifesize, stereoscopic image that becomes a window into a three-dimensional visual environment. As the user physically changes his viewpoint of the represented data in relation to the display surface, the image is continuously updated. The changing viewpoints are retrieved from a comprehensive, stereoscopic image array stored on computer-controlled optical videodisc and fluidly presented in coordination with the viewer's movements as detected by a body-tracking device. This imaging system is an attempt to more closely represent an observer's interactive perceptual experience of the visual world by presenting sensory information cues not offered by traditional media technologies: binocular parallax, motion parallax, and motion perspective. Unlike holographic imaging, this display requires relatively low bandwidth.
Zhang, Qiong; van Vugt, Marieke; Borst, Jelmer P; Anderson, John R
2018-07-01
In this study, we investigated the time course and neural correlates of the retrieval process underlying visual working memory. We made use of a rare dataset in which the same task was recorded with both scalp electroencephalography (EEG) and electrocorticography (ECoG). This allowed us to examine with great spatial and temporal detail how the retrieval process works, and in particular how the medial temporal lobe (MTL) is involved. In each trial, participants judged whether a probe face had been among a set of recently studied faces. With a method that combines hidden semi-Markov models and multivariate pattern analysis, the neural signal was decomposed into a sequence of latent cognitive stages with information about their durations on a trial-by-trial basis. Analyzed separately, EEG and ECoG data yielded converging results on the discovered stages and their interpretation, which reflected 1) a brief pre-attention stage, 2) encoding the stimulus, 3) retrieving the studied set, and 4) making a decision. Combining these stages with the high spatial resolution of ECoG suggested that activity in the temporal cortex reflected item familiarity in the retrieval stage, and that once retrieval is complete, there is active maintenance of the studied face set in the MTL during the decision stage. During this same period, the frontal cortex guides the decision by means of theta coupling with the MTL. These observations generalize previous findings on the role of MTL theta from long-term memory tasks to short-term memory tasks. Copyright © 2018 Elsevier Inc. All rights reserved.
Wing, Erik A.; Ritchey, Maureen; Cabeza, Roberto
2015-01-01
Neurobiological memory models assume memory traces are stored in neocortex, with pointers in the hippocampus, and are then reactivated during retrieval, yielding the experience of remembering. Whereas most prior neuroimaging studies on reactivation have focused on the reactivation of sets or categories of items, the current study sought to identify cortical patterns pertaining to memory for individual scenes. During encoding, participants viewed pictures of scenes paired with matching labels (e.g., “barn,” “tunnel”), and, during retrieval, they recalled the scenes in response to the labels and rated the quality of their visual memories. Using representational similarity analyses, we interrogated the similarity between activation patterns during encoding and retrieval both at the item level (individual scenes) and the set level (all scenes). The study yielded four main findings. First, in occipitotemporal cortex, memory success increased with encoding-retrieval similarity (ERS) at the item level but not at the set level, indicating the reactivation of individual scenes. Second, in ventrolateral pFC, memory increased with ERS for both item and set levels, indicating the recapitulation of memory processes that benefit encoding and retrieval of all scenes. Third, in retrosplenial/posterior cingulate cortex, ERS was sensitive to individual scene information irrespective of memory success, suggesting automatic activation of scene contexts. Finally, consistent with neurobiological models, hippocampal activity during encoding predicted the subsequent reactivation of individual items. These findings show the promise of studying memory with greater specificity by isolating individual mnemonic representations and determining their relationship to factors like the detail with which past events are remembered. PMID:25313659
Neural Signatures of Stimulus Features in Visual Working Memory—A Spatiotemporal Approach
Jackson, Margaret C.; Klein, Christoph; Mohr, Harald; Shapiro, Kimron L.; Linden, David E. J.
2010-01-01
We examined the neural signatures of stimulus features in visual working memory (WM) by integrating functional magnetic resonance imaging (fMRI) and event-related potential data recorded during mental manipulation of colors, rotation angles, and color–angle conjunctions. The N200, negative slow wave, and P3b were modulated by the information content of WM, and an fMRI-constrained source model revealed a progression in neural activity from posterior visual areas to higher order areas in the ventral and dorsal processing streams. Color processing was associated with activity in inferior frontal gyrus during encoding and retrieval, whereas angle processing involved right parietal regions during the delay interval. WM for color–angle conjunctions did not involve any additional neural processes. The finding that different patterns of brain activity underlie WM for color and spatial information is consistent with ideas that the ventral/dorsal “what/where” segregation of perceptual processing influences WM organization. The absence of characteristic signatures of conjunction-related brain activity, which was generally intermediate between the 2 single conditions, suggests that conjunction judgments are based on the coordinated activity of these 2 streams. PMID:19429863
pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.
Giannakopoulos, Theodoros
2015-01-01
Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available on GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.
pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis
Giannakopoulos, Theodoros
2015-01-01
Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available on GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library. PMID:26656189
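As a rough illustration of the short-term feature extraction described above, the sketch below computes frame-level energy and zero-crossing rate with plain numpy. It deliberately avoids calling pyAudioAnalysis itself (whose module layout has changed across releases), so the function name, window sizes, and features are our own assumptions, not the library's API.

```python
# Sketch only: frame-level short-term features of the kind pyAudioAnalysis
# extracts, computed here with plain numpy rather than the library's API.
import numpy as np

def short_term_features(signal, fs, win_s=0.050, step_s=0.025):
    """Return per-frame (energy, zero-crossing rate) for a mono signal."""
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        frame = signal[start:start + win].astype(float)
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zcr))
    return np.array(feats)  # shape: (num_frames, 2)

fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s of a 440 Hz tone
print(short_term_features(tone, fs).shape)
```

Real deployments would of course add spectral features (MFCCs, spectral centroid, and so on), which is where a dedicated library earns its keep.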
Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration.
Behrisch, Michael; Bach, Benjamin; Hund, Michael; Delz, Michael; Von Ruden, Laura; Fekete, Jean-Daniel; Schreck, Tobias
2017-01-01
In this work we address the problem of retrieving potentially interesting matrix views to support the exploration of networks. We introduce Matrix Diagnostics (or Magnostics), following in spirit related approaches for rating and ranking other visualization techniques, such as Scagnostics for scatter plots. Our approach ranks matrix views according to the appearance of specific visual patterns, such as blocks and lines, indicating the existence of topological motifs in the data, such as clusters, bi-graphs, or central nodes. Magnostics can be used to analyze, query, or search for visually similar matrices in large collections, or to assess the quality of matrix reordering algorithms. While many feature descriptors for image analysis exist, there is no evidence of how they perform for detecting patterns in matrices. In order to make an informed choice of feature descriptors for matrix diagnostics, we evaluate 30 feature descriptors (27 existing ones and three new descriptors that we designed specifically for Magnostics) with respect to four criteria: pattern response, pattern variability, pattern sensibility, and pattern discrimination. We conclude with an informed set of six descriptors as most appropriate for Magnostics and demonstrate their application in two scenarios: exploring a large collection of matrices and analyzing temporal networks.
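The abstract describes ranking matrix views by how strongly image-style feature descriptors respond to visual patterns such as blocks. The toy descriptor below, which is not one of the 30 descriptors evaluated in the paper, only illustrates the general idea: a block-structured matrix view produces a much higher tile-density variance than a uniformly random one, so it would rank higher on a "block response" score.

```python
# Illustrative "block response" feature for a binary matrix view; not one of
# the descriptors studied in the Magnostics paper.
import numpy as np

def block_response(matrix, tile=8):
    """Variance of tile densities: high for block-structured matrices,
    low for uniformly random ones."""
    m = np.asarray(matrix, dtype=float)
    n = (m.shape[0] // tile) * tile
    m = m[:n, :n]
    tiles = m.reshape(n // tile, tile, n // tile, tile).mean(axis=(1, 3))
    return float(tiles.var())

rng = np.random.default_rng(0)
random_view = (rng.random((64, 64)) < 0.2).astype(int)
blocky_view = np.kron(np.eye(4, dtype=int), np.ones((16, 16), dtype=int))
print(block_response(random_view), block_response(blocky_view))
```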
Code of Federal Regulations, 2011 CFR
2011-10-01
... and information retrieval systems. 433.116 Section 433.116 Public Health CENTERS FOR MEDICARE... FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.116 FFP for operation of mechanized claims processing and information retrieval systems. (a) Subject to paragraph (j) of...
7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...
Code of Federal Regulations, 2013 CFR
2013-10-01
... and information retrieval systems. 433.116 Section 433.116 Public Health CENTERS FOR MEDICARE... FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.116 FFP for operation of mechanized claims processing and information retrieval systems. (a) Subject to paragraph (j) of...
7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...
Code of Federal Regulations, 2014 CFR
2014-10-01
... and information retrieval systems. 433.116 Section 433.116 Public Health CENTERS FOR MEDICARE... FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.116 FFP for operation of mechanized claims processing and information retrieval systems. (a) Subject to paragraph (j) of...
7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...
Code of Federal Regulations, 2012 CFR
2012-10-01
... and information retrieval systems. 433.116 Section 433.116 Public Health CENTERS FOR MEDICARE... FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.116 FFP for operation of mechanized claims processing and information retrieval systems. (a) Subject to paragraph (j) of...
7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...
Context generalization in Drosophila visual learning requires the mushroom bodies
NASA Astrophysics Data System (ADS)
Liu, Li; Wolf, Reinhard; Ernst, Roman; Heisenberg, Martin
1999-08-01
The world is permanently changing. Laboratory experiments on learning and memory normally minimize this feature of reality, keeping all conditions except the conditioned and unconditioned stimuli as constant as possible. In the real world, however, animals need to extract from the universe of sensory signals the actual predictors of salient events by separating them from non-predictive stimuli (context). In principle, this can be achieved if only those sensory inputs that resemble the reinforcer in their temporal structure are taken as predictors. Here we study visual learning in the fly Drosophila melanogaster, using a flight simulator, and show that memory retrieval is, indeed, partially context-independent. Moreover, we show that the mushroom bodies, which are required for olfactory but not visual or tactile learning, effectively support context generalization. In visual learning in Drosophila, it appears that a facilitating effect of context cues for memory retrieval is the default state, whereas making recall context-independent requires additional processing.
Kinesthetic alexia due to left parietal lobe lesions.
Ihori, Nami; Kawamura, Mitsuru; Araki, Shigeo; Kawachi, Juro
2002-01-01
To investigate the neuropsychological mechanisms of kinesthetic alexia, we asked 7 patients who showed kinesthetic alexia with preserved visual reading after damage to the left parietal region to perform tasks consisting of kinesthetic written reproduction (writing down the same letter as the kinesthetic stimulus), kinesthetic reading aloud, visual written reproduction (copying letters), and visual reading aloud of hiragana (Japanese phonograms). We compared the performance in these tasks and the lesion sites in each patient. The results suggested that deficits in any one of the following functions might cause kinesthetic alexia: (1) the retrieval of kinesthetic images (motor engrams) of characters from kinesthetic stimuli, (2) kinesthetic images themselves, (3) access to cross-modal association from kinesthetic images, and (4) cross-modal association itself (retrieval of auditory and visual images from kinesthetic images of characters). Each of these factors seemed to be related to different lesion sites in the left parietal lobe. Copyright 2002 S. Karger AG, Basel
Behind Mathematical Learning Disabilities: What about Visual Perception and Motor Skills?
ERIC Educational Resources Information Center
Pieters, Stefanie; Desoete, Annemie; Roeyers, Herbert; Vanderswalmen, Ruth; Van Waelvelde, Hilde
2012-01-01
In a sample of 39 children with mathematical learning disabilities (MLD) and 106 typically developing controls belonging to three control groups of three different ages, we found that visual perception, motor skills and visual-motor integration explained a substantial proportion of the variance in either number fact retrieval or procedural…
Unsupervised Deep Hashing With Pseudo Labels for Scalable Image Retrieval.
Zhang, Haofeng; Liu, Li; Long, Yang; Shao, Ling
2018-04-01
In order to achieve efficient similarity searching, hash functions are designed to encode images into low-dimensional binary codes with the constraint that similar features will have a short distance in the projected Hamming space. Recently, deep learning-based methods have become more popular, and outperform traditional non-deep methods. However, without label information, most state-of-the-art deep hashing (DH) algorithms suffer from severe performance degradation in unsupervised scenarios. One of the main reasons is that the ad-hoc encoding process cannot properly capture the visual feature distribution. In this paper, we propose a novel unsupervised framework that has two main contributions: 1) we convert the unsupervised DH model into a supervised one by discovering pseudo labels; 2) the framework unifies likelihood maximization, mutual information maximization, and quantization error minimization so that the pseudo labels can maximally preserve the distribution of visual features. Extensive experiments on three popular data sets demonstrate the advantages of the proposed method, which leads to significant performance improvement over the state-of-the-art unsupervised hashing algorithms.
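To make the two contributions more concrete, the sketch below combines them in a heavily simplified form: k-means pseudo labels on stand-in features, and binary codes scored by their quantization error. The features are random placeholders for deep descriptors, and this is not the authors' actual optimization.

```python
# Toy illustration (not the paper's model): pseudo labels from k-means on
# stand-in "deep" features, plus binary codes evaluated by quantization error.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))          # placeholder for CNN features

# (1) Discover pseudo labels so the problem can be treated as supervised.
pseudo_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features)

# (2) Project and binarize; the quantization error is one of the terms the
#     framework is said to minimize.
n_bits = 16
projected = PCA(n_components=n_bits).fit_transform(features)
codes = np.where(projected >= 0, 1.0, -1.0)     # codes in {-1, +1}
quant_error = float(np.mean((projected - codes) ** 2))
print("pseudo-label counts:", np.bincount(pseudo_labels))
print("quantization error:", quant_error)
```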
Fourier domain image fusion for differential X-ray phase-contrast breast imaging.
Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne
2017-04-01
X-Ray Phase-Contrast (XPC) imaging is a novel technology with great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase shift and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method makes it possible to present complementary information from the three acquired signals in one single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that the combination of all the relevant diagnostic features, contained in the XPC images, was present in the fused image as well. Copyright © 2017 Elsevier B.V. All rights reserved.
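The published fusion algorithm is not reproduced here, but a generic Fourier-domain fusion of the same flavor can be sketched as follows: keep the low spatial frequencies of one signal (e.g., attenuation) and the high frequencies of another (e.g., scattering), then invert the transform. The cutoff and the choice of which signal feeds which band are illustrative assumptions, not the paper's parameters.

```python
# Generic Fourier-domain fusion sketch: low frequencies from one image, high
# frequencies from another. Cutoff and band assignment are arbitrary here.
import numpy as np

def fourier_fuse(low_freq_img, high_freq_img, cutoff=0.1):
    f_low = np.fft.fftshift(np.fft.fft2(low_freq_img))
    f_high = np.fft.fftshift(np.fft.fft2(high_freq_img))
    ny, nx = low_freq_img.shape
    yy, xx = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    radius = np.sqrt((yy / (ny / 2)) ** 2 + (xx / (nx / 2)) ** 2)
    mask = radius <= cutoff                      # low-pass region
    fused = np.where(mask, f_low, f_high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(fused)))

attenuation = np.random.rand(128, 128)           # stand-in signals
dark_field = np.random.rand(128, 128)
print(fourier_fuse(attenuation, dark_field).shape)
```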
Meyer, Sascha R A; De Jonghe, Jos F M; Schmand, Ben; Ponds, Rudolf W H M
2018-05-16
Episodic memory tests need to determine the degree to which patients with moderate to severe memory deficits can still benefit from retrieval support. Especially in the case of Alzheimer's disease (AD), this may help align health care more closely with patients' memory capacities. We investigated whether the different measures of episodic memory of the Visual Association Test-Extended (VAT-E) can provide a more detailed and informative assessment of memory disturbances across a broad range of cognitive decline, from normal to severe impairment as seen in AD, by examining differences in floor effects. The VAT-E consists of 24 pairs of black-and-white line drawings. In a within-group design, we compared score distributions of VAT-E subtests in healthy elderly controls, mild cognitive impairment (MCI), and AD (n = 144), as well as in relation to global cognitive impairment. Paired associate recall showed a floor effect in 41% of MCI patients and 62% of AD patients. Free recall showed a floor effect in 73% of MCI patients and 84% of AD patients. Multiple-choice cued recognition did not show a floor effect in either of the patient groups. We conclude that the VAT-E covers a broad range of episodic memory decline in patients. As expected, paired associate recall was of intermediate difficulty, free recall was most difficult, and multiple-choice cued recognition was least difficult for patients. These varying levels of difficulty enable a more accurate determination of the level of retrieval support that can still benefit patients across a broad range of cognitive decline.
Tracking down the path of memory: eye scanpaths facilitate retrieval of visuospatial information.
Bochynska, Agata; Laeng, Bruno
2015-09-01
Recent research points to a crucial role of eye fixations on the same spatial locations where an item appeared when learned for the successful retrieval of stored information (e.g., Laeng et al. in Cognition 131:263-283, 2014. doi: 10.1016/j.cognition.2014.01.003 ). However, evidence about whether the specific temporal sequence (i.e., scanpath) of these eye fixations is also relevant for the accuracy of memory remains unclear. In the current study, eye fixations were recorded while participants looked at a checkerboard-like pattern. In a recognition session (48 h later), animations were shown in which each square that formed the pattern was presented one by one, either according to the same, idiosyncratic, temporal sequence in which they were originally viewed by each participant or in a shuffled sequence, although the squares were, in both conditions, always in their correct positions. Afterward, participants judged whether they had seen the same pattern before or not. Showing the elements serially according to the original scanpath's sequence yielded significantly better recognition performance than the shuffled condition. In a forced fixation condition, where the gaze was maintained on the center of the screen, the advantage in memory accuracy for same versus shuffled scanpaths disappeared. In conclusion, gaze scanpaths (i.e., the order of fixations and not simply their positions) are functional to visual memory, and physically re-enacting the original, embodied perception can facilitate retrieval.
Application of MPEG-7 descriptors for content-based indexing of sports videos
NASA Astrophysics Data System (ADS)
Hoeynck, Michael; Auweiler, Thorsten; Ohm, Jens-Rainer
2003-06-01
The amount of multimedia data available worldwide is increasing every day. There is a vital need to annotate multimedia data in order to allow universal content access and to provide content-based search-and-retrieval functionalities. Since supervised video annotation can be time consuming, an automatic solution is appreciated. We review recent approaches to content-based indexing and annotation of videos for different kinds of sports, and present our application for the automatic annotation of equestrian sports videos. In particular, we concentrate on MPEG-7-based feature extraction and content description. We apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information and taking specific domain knowledge into account. Having determined single shot positions as well as the visual highlights, the information is jointly stored together with additional textual information in an MPEG-7 description scheme. Using this information, we generate content summaries which can be utilized in a user front-end in order to provide content-based access to the video stream, as well as further content-based queries and navigation on a video-on-demand streaming server.
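As a concrete, if generic, stand-in for the descriptor-based cut detection mentioned above, the sketch below declares a shot cut whenever the color-histogram difference between consecutive frames exceeds a threshold. The bin count and threshold are arbitrary, and the MPEG-7 descriptors themselves are not used.

```python
# Generic histogram-difference cut detection; a stand-in for descriptor-based
# cut detection, not the MPEG-7 pipeline described in the paper.
import numpy as np

def detect_cuts(frames, bins=16, threshold=0.5):
    """frames: iterable of HxWx3 uint8 arrays; returns indices i where a cut
    is declared between frame i-1 and frame i."""
    cuts, prev_hist = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()
        if prev_hist is not None and 0.5 * np.abs(hist - prev_hist).sum() > threshold:
            cuts.append(i)
        prev_hist = hist
    return cuts

shot_a = [np.full((32, 32, 3), 40, dtype=np.uint8)] * 5    # dark shot
shot_b = [np.full((32, 32, 3), 200, dtype=np.uint8)] * 5   # bright shot
print(detect_cuts(shot_a + shot_b))  # expect a cut at index 5
```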
The role of retrieval practice in memory and analogical problem-solving.
Hostetter, Autumn B; Penix, Elizabeth A; Norman, Mackenzie Z; Batsell, W Robert; Carr, Thomas H
2018-05-01
Retrieval practice (e.g., testing) has been shown to facilitate long-term retention of information. In two experiments, we examine whether retrieval practice also facilitates use of the practised information when it is needed to solve analogous problems. When retrieval practice was not limited to the information most relevant to the problems (Experiment 1), it improved memory for the information a week later compared with copying or rereading the information, although we found no evidence that it improved participants' ability to apply the information to the problems. In contrast, when retrieval practice was limited to only the information most relevant to the problems (Experiment 2), we found that retrieval practice enhanced memory for the critical information, the ability to identify the schematic similarities between the two sources of information, and the ability to apply that information to solve an analogous problem after a hint was given to do so. These results suggest that retrieval practice, through its effect on memory, can facilitate application of information to solve novel problems but has minimal effects on spontaneous realisation that the information is relevant.
A framework to explore the knowledge structure of multidisciplinary research fields.
Uddin, Shahadat; Khan, Arif; Baur, Louise A
2015-01-01
Understanding emerging areas of a multidisciplinary research field is crucial for researchers, policymakers and other stakeholders. For them a knowledge structure based on longitudinal bibliographic data can be an effective instrument. But with the vast amount of available online information, it is often hard to derive such a knowledge structure from the data. In this paper, we present a novel approach for retrieving online bibliographic data and propose a framework for exploring knowledge structure. We also present several longitudinal analyses to interpret and visualize the last 20 years of published obesity research data.
A unified framework of image latent feature learning on Sina microblog
NASA Astrophysics Data System (ADS)
Wei, Jinjin; Jin, Zhigang; Zhou, Yuan; Zhang, Rui
2015-10-01
Large-scale user-contributed images with texts are rapidly increasing on the social media websites, such as Sina microblog. However, the noise and incomplete correspondence between the images and the texts give rise to the difficulty in precise image retrieval and ranking. In this paper, a hypergraph-based learning framework is proposed for image ranking, which simultaneously utilizes visual feature, textual content and social link information to estimate the relevance between images. Representing each image as a vertex in the hypergraph, complex relationship between images can be reflected exactly. Then updating the weight of hyperedges throughout the hypergraph learning process, the effect of different edges can be adaptively modulated in the constructed hypergraph. Furthermore, the popularity degree of the image is employed to re-rank the retrieval results. Comparative experiments on a large-scale Sina microblog data-set demonstrate the effectiveness of the proposed approach.
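A minimal sketch of hypergraph-based ranking in this spirit (not the paper's implementation) is given below: images are vertices, hyperedges group images that share a visual cluster, tag, or social link, and relevance scores are propagated from a query image by a random walk with restart. The incidence matrix, hyperedge weights, and restart parameter are all illustrative assumptions.

```python
# Toy hypergraph ranking: score propagation from a query image over a small,
# hand-written hypergraph; not the paper's learning framework.
import numpy as np

H = np.array([[1, 0, 1],      # incidence matrix: 5 images x 3 hyperedges
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [1, 0, 0]], dtype=float)
w = np.array([1.0, 0.5, 2.0])                 # hyperedge weights (could be learned)

Dv = H @ w                                    # weighted vertex degrees
De = H.sum(axis=0)                            # hyperedge degrees
# Vertex-to-vertex transition matrix of the hypergraph random walk.
P = np.diag(1.0 / Dv) @ H @ np.diag(w / De) @ H.T

query = np.array([1.0, 0, 0, 0, 0])           # image 0 is the query
scores, alpha = query.copy(), 0.85
for _ in range(50):                           # power iteration with restart
    scores = alpha * P.T @ scores + (1 - alpha) * query
print(np.argsort(-scores))                    # ranked image indices
```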
NASA Technical Reports Server (NTRS)
Khovanskiy, Y. D.; Kremneva, N. I.
1975-01-01
Problems and methods are discussed of automating information retrieval operations in a data bank used for long term storage and retrieval of data from scientific experiments. Existing information retrieval languages are analyzed along with those being developed. The results of studies discussing the application of the descriptive 'Kristall' language used in the 'ASIOR' automated information retrieval system are presented. The development and use of a specialized language of the classification-descriptive type, using universal decimal classification indices as the main descriptors, is described.
AIRS Version 6 Products and Data Services at NASA GES DISC
NASA Astrophysics Data System (ADS)
Ding, F.; Savtchenko, A. K.; Hearty, T. J.; Theobald, M. L.; Vollmer, B.; Esfandiari, E.
2013-12-01
The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for data from the Atmospheric Infrared Sounder (AIRS) mission. The AIRS mission is entering its 11th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing longwave radiation, cloud properties, and trace gases. The GES DISC, in collaboration with the AIRS Project, released data from the Version 6 algorithm in early 2013. The new algorithm represents a significant improvement over previous versions in terms of greater stability, yield, and quality of products. Among the most substantial advances are: improved soundings of Tropospheric and Sea Surface Temperatures; larger improvements with increasing cloud cover; improved retrievals of surface spectral emissivity; near-complete removal of spurious temperature bias trends seen in earlier versions; substantially improved retrieval yield (i.e., number of soundings accepted for output) for climate studies; AIRS-Only retrievals with comparable accuracy to AIRS+AMSU (Advanced Microwave Sounding Unit) retrievals; and more realistic hemispheric seasonal variability and global distribution of carbon monoxide. The GES DISC is working to bring the distribution services up-to-date with these new developments. Our focus is on popular services, like variable subsetting and quality screening, which are impacted by the new elements in Version 6. Other developments in visualization services, such as Giovanni, Near-Real Time imagery, and a granule-map viewer, are progressing along with the introduction of the new data; each service presents its own challenge. This presentation will demonstrate the most significant improvements in Version 6 AIRS products, such as newly added variables (higher resolution outgoing longwave radiation, new cloud property products, etc.), the new quality control schema, and improved retrieval yields. We will also demonstrate the various distribution and visualization services for AIRS data products. The cloud properties, model physics, and water and energy cycles research communities are invited to take advantage of the improvements in Version 6 AIRS products and the various services at GES DISC which provide them.
A Neuro-Oncology Workstation for Structuring, Modeling, and Visualizing Patient Records
Hsu, William; Arnold, Corey W.; Taira, Ricky K.
2016-01-01
The patient medical record contains a wealth of information consisting of prior observations, interpretations, and interventions that need to be interpreted and applied towards decisions regarding current patient care. Given the time constraints and the large—often extraneous—amount of data available, clinicians are tasked with the challenge of performing a comprehensive review of how a disease progresses in individual patients. To facilitate this process, we demonstrate a neuro-oncology workstation that assists in structuring and visualizing medical data to promote an evidence-based approach for understanding a patient’s record. The workstation consists of three components: 1) a structuring tool that incorporates natural language processing to assist with the extraction of problems, findings, and attributes for structuring observations, events, and inferences stated within medical reports; 2) a data modeling tool that provides a comprehensive and consistent representation of concepts for the disease-specific domain; and 3) a visual workbench for visualizing, navigating, and querying the structured data to enable retrieval of relevant portions of the patient record. We discuss this workstation in the context of reviewing cases of glioblastoma multiforme patients. PMID:27583308
A Neuro-Oncology Workstation for Structuring, Modeling, and Visualizing Patient Records.
Hsu, William; Arnold, Corey W; Taira, Ricky K
2010-11-01
The patient medical record contains a wealth of information consisting of prior observations, interpretations, and interventions that need to be interpreted and applied towards decisions regarding current patient care. Given the time constraints and the large (often extraneous) amount of data available, clinicians are tasked with the challenge of performing a comprehensive review of how a disease progresses in individual patients. To facilitate this process, we demonstrate a neuro-oncology workstation that assists in structuring and visualizing medical data to promote an evidence-based approach for understanding a patient's record. The workstation consists of three components: 1) a structuring tool that incorporates natural language processing to assist with the extraction of problems, findings, and attributes for structuring observations, events, and inferences stated within medical reports; 2) a data modeling tool that provides a comprehensive and consistent representation of concepts for the disease-specific domain; and 3) a visual workbench for visualizing, navigating, and querying the structured data to enable retrieval of relevant portions of the patient record. We discuss this workstation in the context of reviewing cases of glioblastoma multiforme patients.
Eye vergence responses during a visual memory task.
Solé Puig, Maria; Romeo, August; Cañete Crespillo, Jose; Supèr, Hans
2017-02-08
In a previous report it was shown that covertly attending to visual stimuli produces a small convergence of the eyes, and that visual stimuli can give rise to different modulations of the angle of eye vergence, depending on their power to capture attention. Working memory is highly dependent on attention. Therefore, in this study we assessed vergence responses in a memory task. Participants scanned a set of 8 or 12 images for 10 s, and thereafter were presented with a series of single images. One half were repeat images, that is, they belonged to the initial set, and the other half were novel images. Participants were asked to indicate whether or not the images were included in the initial image set. We observed that the eyes converge during scanning of the set of images and during the presentation of the single images. The convergence was stronger for remembered images compared with the vergence for nonremembered images. Modulation in pupil size did not correspond to behavioural responses. The correspondence between vergence and the coding/retrieval processes of memory strengthens the idea of a role for vergence in attention processing of visual information.
NASA Technical Reports Server (NTRS)
Roberts, Aaron
2005-01-01
New tools for data access and visualization promise to make the analysis of space plasma data both more efficient and more powerful, especially for answering questions about the global structure and dynamics of the Sun-Earth system. We will show how existing tools (particularly the Virtual Space Physics Observatory, VSPO, and the Visual System for Browsing, Analysis and Retrieval of Data, ViSBARD; look for the acronyms in Google) already provide rapid access to such information as spacecraft orbits, browse plots, and detailed data, as well as visualizations that can quickly unite our view of multispacecraft observations. We will show movies illustrating multispacecraft observations of the solar wind and magnetosphere during a magnetic storm, and of simulations of 30-spacecraft observations derived from MHD simulations of the magnetosphere sampled along likely trajectories of the spacecraft for the MagCon mission. An important issue remaining to be solved is how best to integrate simulation data and services into the Virtual Observatory environment, and this talk will hopefully stimulate further discussion along these lines.
Mathematics and Information Retrieval.
ERIC Educational Resources Information Center
Salton, Gerald
1979-01-01
Examines the main mathematical approaches to information retrieval, including both algebraic and probabilistic models, and describes difficulties which impede formalization of information retrieval processes. A number of developments are covered where new theoretical understandings have directly led to improved retrieval techniques and operations.…
Code of Federal Regulations, 2014 CFR
2014-10-01
... claims processing and information retrieval systems. 433.127 Section 433.127 Public Health CENTERS FOR... PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.127 Termination of FFP for failure to provide access to claims processing and information retrieval...
Code of Federal Regulations, 2010 CFR
2010-10-01
... and information retrieval systems. 433.116 Section 433.116 Public Health CENTERS FOR MEDICARE... FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.116 FFP for operation of mechanized claims processing and information retrieval systems. (a) Subject to 42 CFR 433.113(c...
Code of Federal Regulations, 2011 CFR
2011-10-01
... claims processing and information retrieval systems. 433.127 Section 433.127 Public Health CENTERS FOR... PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.127 Termination of FFP for failure to provide access to claims processing and information retrieval...
Code of Federal Regulations, 2010 CFR
2010-10-01
... claims processing and information retrieval systems. 433.127 Section 433.127 Public Health CENTERS FOR... PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.127 Termination of FFP for failure to provide access to claims processing and information retrieval...
Code of Federal Regulations, 2013 CFR
2013-10-01
... claims processing and information retrieval systems. 433.127 Section 433.127 Public Health CENTERS FOR... PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.127 Termination of FFP for failure to provide access to claims processing and information retrieval...
Code of Federal Regulations, 2012 CFR
2012-10-01
... claims processing and information retrieval systems. 433.127 Section 433.127 Public Health CENTERS FOR... PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.127 Termination of FFP for failure to provide access to claims processing and information retrieval...
ERIC Educational Resources Information Center
Stirling, Keith
2000-01-01
Describes a session on information retrieval systems that planned to discuss relevance measures with Web-based information retrieval; retrieval system performance and evaluation; probabilistic independence of index terms; vector-based models; metalanguages and digital objects; how users assess the reliability, timeliness and bias of information;…
Transparent Information Systems through Gateways, Front Ends, Intermediaries, and Interfaces.
ERIC Educational Resources Information Center
Williams, Martha E.
1986-01-01
Provides overview of design requirements for transparent information retrieval (implies that user sees through complexity of retrieval activities sequence). Highlights include need for transparent systems; history of transparent retrieval research; information retrieval functions (automated converters, routers, selectors, evaluators/analyzers);…
Visual cues for the retrieval of landmark memories by navigating wood ants.
Harris, Robert A; Graham, Paul; Collett, Thomas S
2007-01-23
Even on short routes, ants can be guided by multiple visual memories. We investigate here the cues controlling memory retrieval as wood ants approach a one- or two-edged landmark to collect sucrose at a point along its base. In such tasks, ants store the desired retinal position of landmark edges at several points along their route. They guide subsequent trips by retrieving the appropriate memory and moving to bring the edges in the scene toward the stored positions. The apparent width of the landmark turns out to be a powerful cue for retrieving the desired retinal position of a landmark edge. Two other potential cues, the landmark's apparent height and the distance that the ant walks, have little effect on memory retrieval. A simple model encapsulates these conclusions and reproduces the ants' routes in several conditions. According to this model, the ant stores a look-up table. Each entry contains the apparent width of the landmark and the desired retinal position of vertical edges. The currently perceived width provides an index for retrieving the associated stored edge positions. The model accounts for the population behavior of ants and the idiosyncratic training routes of individual ants. Our results imply binding between the edge of a shape and its width and, further, imply that assessing the width of a shape does not depend on the presence of any particular local feature, such as a landmark edge. This property makes the ant's retrieval and guidance system relatively robust to edge occlusions.
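The look-up-table model described above lends itself to a very small computational sketch. The table entries, widths, and retinal angles below are invented for illustration; the point is only the indexing scheme, in which the perceived landmark width selects a stored entry and guidance then reduces the difference between perceived and stored edge positions.

```python
# Minimal sketch of the look-up-table guidance model; values are illustrative.
import numpy as np

# (apparent_width_deg, desired_left_edge_deg, desired_right_edge_deg)
lookup_table = np.array([[10.0, -12.0,  -2.0],
                         [20.0, -18.0,   2.0],
                         [40.0, -25.0,  15.0]])

def retrieve_entry(perceived_width):
    """Pick the stored entry whose width is closest to the perceived width."""
    idx = np.argmin(np.abs(lookup_table[:, 0] - perceived_width))
    return lookup_table[idx, 1:]                 # desired edge positions

def steering_error(perceived_edges, perceived_width):
    """Difference between perceived and desired edge positions; movement
    would drive this toward zero."""
    return np.asarray(perceived_edges) - retrieve_entry(perceived_width)

print(steering_error(perceived_edges=[-15.0, 3.0], perceived_width=18.0))
```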
Improve Biomedical Information Retrieval using Modified Learning to Rank Methods.
Xu, Bo; Lin, Hongfei; Lin, Yuan; Ma, Yunlong; Yang, Liang; Wang, Jian; Yang, Zhihao
2016-06-14
In recent years, the number of biomedical articles has increased exponentially, making it difficult for biologists to capture all the needed information manually. Information retrieval technologies, as the core of search engines, can deal with the problem automatically, providing users with the needed information. However, it is a great challenge to apply these technologies directly to biomedical retrieval, because of the abundance of domain-specific terminologies. To enhance biomedical retrieval, we propose a novel framework based on learning to rank. Learning to rank is a family of state-of-the-art information retrieval techniques that has proved effective in many information retrieval tasks. In the proposed framework, we attempt to tackle the problem of the abundance of terminologies by constructing ranking models, which focus not only on retrieving the most relevant documents, but also on diversifying the search results to increase the completeness of the resulting list for a given query. In the model training, we propose two novel document labeling strategies, and combine several traditional retrieval models as learning features. In addition, we investigate the usefulness of different learning to rank approaches in our framework. Experimental results on TREC Genomics datasets demonstrate the effectiveness of our framework for biomedical information retrieval.
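A common way to realize such a framework is pairwise learning to rank over features drawn from several traditional retrieval models. The sketch below shows that pattern on synthetic data (random stand-ins for BM25, TF-IDF, and language-model scores); it is not the authors' model, labeling strategies, or TREC Genomics data.

```python
# Synthetic sketch of pairwise learning to rank with retrieval-model scores as
# features; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_docs, n_features = 200, 3          # e.g., BM25, TF-IDF, language-model scores
X = rng.normal(size=(n_docs, n_features))
relevance = (X @ np.array([0.8, 0.3, 0.5]) + rng.normal(scale=0.5, size=n_docs) > 0).astype(int)

# Pairwise examples: difference vectors between relevant and non-relevant docs.
rel, nonrel = X[relevance == 1], X[relevance == 0]
diffs = np.array([r - n for r in rel for n in nonrel])
pairs = np.vstack([diffs, -diffs])
labels = np.concatenate([np.ones(len(diffs), dtype=int), np.zeros(len(diffs), dtype=int)])

ranker = LogisticRegression(max_iter=1000).fit(pairs, labels)
doc_scores = X @ ranker.coef_.ravel()  # linear scoring function used to rank
print("top documents:", np.argsort(-doc_scores)[:5])
```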
Competitive retrieval is not a prerequisite for forgetting in the retrieval practice paradigm.
Camp, Gino; Dalm, Sander
2016-09-01
Retrieving information from memory can lead to forgetting of other, related information. The inhibition account of this retrieval-induced forgetting effect predicts that this form of forgetting occurs when competition arises between the practiced information and the related information, leading to inhibition of the related information. In the standard retrieval practice paradigm, a retrieval practice task is used in which participants retrieve the items based on a category-plus-stem cue (e.g., FRUIT-or___). In the current experiment, participants instead generated the target based on a cue in which the first 2 letters of the target were transposed (e.g., FRUIT-roange). This noncompetitive task also induced forgetting of unpracticed items from practiced categories. This finding is inconsistent with the inhibition account, which asserts that the forgetting effect depends on competitive retrieval. We argue that interference-based accounts of forgetting and the context-based account of retrieval-induced forgetting can account for this result. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Kurtz, Camille; Depeursinge, Adrien; Napel, Sandy; Beaulieu, Christopher F.; Rubin, Daniel L.
2014-01-01
Computer-assisted image retrieval applications can assist radiologists by identifying similar images in archives as a means of providing decision support. In the classical case, images are described using low-level features extracted from their contents, and an appropriate distance is used to find the best matches in the feature space. However, using low-level image features to fully capture the visual appearance of diseases is challenging, and the semantic gap between these features and the high-level visual concepts in radiology may impair the system performance. To deal with this issue, the use of semantic terms to provide high-level descriptions of radiological image contents has recently been advocated. Nevertheless, most of the existing semantic image retrieval strategies are limited by two factors: they require manual annotation of the images using semantic terms and they ignore the intrinsic visual and semantic relationships between these annotations during the comparison of the images. Based on these considerations, we propose an image retrieval framework based on semantic features that relies on two main strategies: (1) automatic “soft” prediction of ontological terms that describe the image contents from multi-scale Riesz wavelets and (2) retrieval of similar images by evaluating the similarity between their annotations using a new term dissimilarity measure, which takes into account both image-based and ontological term relations. The combination of these strategies provides a means of accurately retrieving similar images in databases based on image annotations and can be considered as a potential solution to the semantic gap problem. We validated this approach in the context of the retrieval of liver lesions from computed tomographic (CT) images annotated with semantic terms of the RadLex ontology. The relevance of the retrieval results was assessed using two protocols: evaluation relative to a dissimilarity reference standard defined for pairs of images on a 25-image dataset, and evaluation relative to the diagnoses of the retrieved images on a 72-image dataset. A normalized discounted cumulative gain (NDCG) score of more than 0.92 was obtained with the first protocol, while AUC scores of more than 0.77 were obtained with the second protocol. This automated approach could provide real-time decision support to radiologists by showing them similar images with associated diagnoses and, where available, responses to therapies. PMID:25036769
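One simple way to compare images through weighted term annotations while respecting term-to-term relations is a "soft" inner product weighted by a relatedness matrix, as sketched below. The terms, weights, and relatedness values are invented for illustration and are not the paper's dissimilarity measure or values derived from RadLex.

```python
# Illustrative semantic-term comparison between images: a relatedness-weighted
# inner product on soft term annotations; not the paper's measure.
import numpy as np

terms = ["hypodense", "hyperdense", "cyst"]
# Pairwise term relatedness in [0, 1]; 1 on the diagonal, smaller off-diagonal.
relatedness = np.array([[1.0, 0.1, 0.4],
                        [0.1, 1.0, 0.2],
                        [0.4, 0.2, 1.0]])

def semantic_similarity(weights_a, weights_b):
    """Soft dot product: a term contributes even when it only matches a
    related term in the other image's annotation."""
    a, b = np.asarray(weights_a), np.asarray(weights_b)
    return float(a @ relatedness @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

image_1 = [0.9, 0.0, 0.3]   # soft term predictions for a query image
image_2 = [0.7, 0.1, 0.0]
image_3 = [0.0, 0.8, 0.1]
print(semantic_similarity(image_1, image_2), semantic_similarity(image_1, image_3))
```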
Individual Differences in Working Memory Capacity Predict Retrieval-Induced Forgetting
ERIC Educational Resources Information Center
Aslan, Alp; Bauml, Karl-Heinz T.
2011-01-01
Selectively retrieving a subset of previously studied information enhances memory for the retrieved information but causes forgetting of related, nonretrieved information. Such retrieval-induced forgetting (RIF) has often been attributed to inhibitory executive-control processes that supposedly suppress the nonretrieved items' memory…
ERIC Educational Resources Information Center
Lynch, Michael F.; Willett, Peter
1987-01-01
Discusses research into chemical information and document retrieval systems at the University of Sheffield. Highlights include the use of cluster analysis methods for document retrieval and drug design, representation and searching of files of generic chemical structures, and the application of parallel computer hardware to information retrieval.…
Coupled binary embedding for large-scale image retrieval.
Zheng, Liang; Wang, Shengjin; Tian, Qi
2014-08-01
Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated in our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when the global color feature is integrated, our method yields performance competitive with the state of the art.
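The sketch below illustrates the baseline mechanism this work builds on: an inverted file whose entries carry binary signatures, a Hamming-distance check at query time, and IDF-weighted scoring. All quantities are synthetic, and the paper's coupled multi-IDF scheme is not implemented; this is only the flavor of binary-feature verification.

```python
# Toy inverted file with per-entry binary signatures and a Hamming check;
# synthetic data, not the paper's coupled binary embedding.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
n_images, n_words, sig_bits, ham_thresh = 50, 100, 32, 8

# Inverted index: visual word -> list of (image_id, binary signature).
index = defaultdict(list)
for img in range(n_images):
    for word in rng.choice(n_words, size=20, replace=False):
        index[word].append((img, rng.integers(0, 2, sig_bits)))

idf = {w: np.log(n_images / len({i for i, _ in index[w]})) for w in index}

def query(q_features):
    """q_features: list of (visual_word, signature) pairs for query keypoints."""
    scores = np.zeros(n_images)
    for word, sig in q_features:
        for img, cand_sig in index.get(word, []):
            if np.count_nonzero(sig != cand_sig) <= ham_thresh:  # Hamming check
                scores[img] += idf.get(word, 0.0)
    return np.argsort(-scores)[:5]

q = [(w, rng.integers(0, 2, sig_bits)) for w in rng.choice(n_words, size=20, replace=False)]
print("top matches:", query(q))
```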
Separate Capacities for Storing Different Features in Visual Working Memory
ERIC Educational Resources Information Center
Wang, Benchi; Cao, Xiaohua; Theeuwes, Jan; Olivers, Christian N. L.; Wang, Zhiguo
2017-01-01
Recent empirical and theoretical work suggests that visual features such as color and orientation can be stored or retrieved independently in visual working memory (VWM), even in cases when they belong to the same object. Yet it remains unclear whether different feature dimensions have their own capacity limits, or whether they compete for shared…
Strategic search from long-term memory: an examination of semantic and autobiographical recall.
Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J
2014-01-01
Searching long-term memory is theoretically driven by both directed (search strategies) and random components. In the current study we conducted four experiments evaluating strategic search in semantic and autobiographical memory. Participants were required to generate either exemplars from the category of animals or the names of their friends for several minutes. Self-reported strategies suggested that participants typically relied on visualization strategies for both tasks and were less likely to rely on ordered strategies (e.g., alphabetic search). When participants were instructed to use particular strategies, the visualization strategy resulted in the highest levels of performance and the most efficient search, whereas ordered strategies resulted in the lowest levels of performance and fairly inefficient search. These results are consistent with the notion that retrieval from long-term memory is driven, in part, by search strategies employed by the individual, and that one particularly efficient strategy is to visualize various situational contexts that one has experienced in the past in order to constrain the search and generate the desired information.
VisualUrText: A Text Analytics Tool for Unstructured Textual Data
NASA Astrophysics Data System (ADS)
Zainol, Zuraini; Jaymes, Mohd T. H.; Nohuddin, Puteri N. E.
2018-05-01
The growing amount of unstructured text on the Internet is tremendous. Text repositories come from Web 2.0, business intelligence and social networking applications. It is also believed that 80-90% of future data growth will be in the form of unstructured text databases that may potentially contain interesting patterns and trends. Text Mining is a well-known technique for discovering interesting patterns and trends, which are non-trivial knowledge, from massive unstructured text data. Text Mining covers multidisciplinary fields involving information retrieval (IR), text analysis, natural language processing (NLP), data mining, machine learning, statistics and computational linguistics. This paper discusses the development of a text analytics tool that is proficient in extracting, processing and analyzing unstructured text data and visualizing the cleaned text data in multiple forms such as a Document Term Matrix (DTM), Frequency Graph, Network Analysis Graph, Word Cloud and Dendrogram. This tool, VisualUrText, is developed to assist students and researchers in extracting interesting patterns and trends from document analyses.
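Two of the outputs listed above, the Document Term Matrix and a term-frequency ranking, are easy to approximate with scikit-learn, as sketched below on toy documents. VisualUrText itself is not used here; the corpus and the stop-word choice are illustrative assumptions, and get_feature_names_out assumes a recent scikit-learn release.

```python
# Toy Document Term Matrix and term-frequency ranking; illustrative only.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["text mining finds patterns in unstructured text",
        "social media produces large unstructured text streams",
        "frequency graphs and word clouds summarize term usage"]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)              # Document Term Matrix (sparse)
terms = vectorizer.get_feature_names_out()

freq = dtm.sum(axis=0).A1                         # total frequency per term
top = sorted(zip(terms, freq), key=lambda t: -t[1])[:5]
print("DTM shape:", dtm.shape)
print("top terms:", top)
```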
Memory for product sounds: the effect of sound and label type.
Ozcan, Elif; van Egmond, René
2007-11-01
The (mnemonic) interactions between auditory, visual, and the semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative for memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinders the memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that the memory performances for product sounds are task-dependent.
Contextual Information Drives the Reconsolidation-Dependent Updating of Retrieved Fear Memories
Jarome, Timothy J; Ferrara, Nicole C; Kwapis, Janine L; Helmstetter, Fred J
2015-01-01
Stored memories enter a temporary state of vulnerability following retrieval known as ‘reconsolidation', a process that can allow memories to be modified to incorporate new information. Although reconsolidation has become an attractive target for treatment of memories related to traumatic past experiences, we still do not know what new information triggers the updating of retrieved memories. Here, we used biochemical markers of synaptic plasticity in combination with a novel behavioral procedure to determine what was learned during memory reconsolidation under normal retrieval conditions. We eliminated new information during retrieval by manipulating animals' training experience and measured changes in proteasome activity and GluR2 expression in the amygdala, two established markers of fear memory lability and reconsolidation. We found that eliminating new contextual information during the retrieval of memories for predictable and unpredictable fear associations prevented changes in proteasome activity and glutamate receptor expression in the amygdala, indicating that this new information drives the reconsolidation of both predictable and unpredictable fear associations on retrieval. Consistent with this, eliminating new contextual information prior to retrieval prevented the memory-impairing effects of protein synthesis inhibitors following retrieval. These results indicate that under normal conditions, reconsolidation updates memories by incorporating new contextual information into the memory trace. Collectively, these results suggest that controlling contextual information present during retrieval may be a useful strategy for improving reconsolidation-based treatments of traumatic memories associated with anxiety disorders such as post-traumatic stress disorder. PMID:26062788
ERIC Educational Resources Information Center
Lehman, Melissa; Smith, Megan A.; Karpicke, Jeffrey D.
2014-01-01
We tested the predictions of 2 explanations for retrieval-based learning; while the elaborative retrieval hypothesis assumes that the retrieval of studied information promotes the generation of semantically related information, which aids in later retrieval (Carpenter, 2009), the episodic context account proposed by Karpicke, Lehman, and Aue (in…
Using Induction to Refine Information Retrieval Strategies
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Pell, Barney; Kedar, Smadar
1994-01-01
Conceptual information retrieval systems use structured document indices, domain knowledge, and a set of heuristic retrieval strategies to match user queries with a set of indices describing a document's content. Such retrieval strategies increase the set of relevant documents retrieved (increasing recall), but at the expense of returning additional irrelevant documents (decreasing precision). In conceptual information retrieval systems, this tradeoff is usually managed by hand, and with difficulty. This paper discusses ways of managing the tradeoff by applying standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation, using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
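The refinement loop described in this abstract maps naturally onto off-the-shelf induction tools. The sketch below is only an illustration of that loop, not the authors' system: the feature names (keyword_overlap, same_design_domain, index_depth_matched) are hypothetical descriptors of how a heuristic strategy matched a query, the data are invented, and a scikit-learn decision tree stands in for the induction algorithm.

    # Illustrative sketch: induce rules from query/retrieval feedback examples.
    from sklearn.tree import DecisionTreeClassifier, export_text

    feature_names = ["keyword_overlap", "same_design_domain", "index_depth_matched"]
    # Each row describes one query/retrieval pair produced by a heuristic strategy.
    X = [
        [0.9, 1, 1],
        [0.8, 1, 0],
        [0.2, 0, 1],
        [0.1, 0, 0],
        [0.7, 0, 1],
        [0.3, 1, 0],
    ]
    # User feedback: 1 = retrieved document judged relevant, 0 = irrelevant.
    y = [1, 1, 0, 0, 1, 0]

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The induced rules can be inspected and folded back into the strategy,
    # e.g. "only fire this strategy when keyword_overlap exceeds a threshold".
    print(export_text(tree, feature_names=feature_names))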
Hypothesis-confirming information search strategies and computerized information-retrieval systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs, S.M.
A recent trend in information-retrieval systems technology is the development of on-line information retrieval systems. One objective of these systems has been to enhance decision effectiveness by allowing users to seek information selectively, thereby facilitating the reduction or elimination of information overload. These systems do not necessarily lead to more effective decision making, however. Recent research on information-search strategy suggests that when users seek information after forming initial beliefs, they may preferentially seek information that confirms those beliefs. Effective computer-based decision support therefore appears to require an information retrieval system capable of (a) retrieving a subset of all available information, in order to reduce information overload, and (b) supporting an information-search strategy that considers all relevant information, rather than merely hypothesis-confirming information. An information retrieval system with an expert component (i.e., a knowledge-based DSS) should be able to provide these capabilities. The results of this study are inconclusive; there was neither strong confirmatory evidence nor strong disconfirmatory evidence regarding the effectiveness of the KBDSS.
46 CFR 520.6 - Retrieval of information.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 9 2012-10-01 2012-10-01 false Retrieval of information. 520.6 Section 520.6 Shipping FEDERAL MARITIME COMMISSION REGULATIONS AFFECTING OCEAN SHIPPING IN FOREIGN COMMERCE CARRIER AUTOMATED TARIFFS § 520.6 Retrieval of information. (a) General. Tariffs systems shall present retrievers with the...
46 CFR 520.6 - Retrieval of information.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 9 2010-10-01 2010-10-01 false Retrieval of information. 520.6 Section 520.6 Shipping FEDERAL MARITIME COMMISSION REGULATIONS AFFECTING OCEAN SHIPPING IN FOREIGN COMMERCE CARRIER AUTOMATED TARIFFS § 520.6 Retrieval of information. (a) General. Tariffs systems shall present retrievers with the...
46 CFR 520.6 - Retrieval of information.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 9 2014-10-01 2014-10-01 false Retrieval of information. 520.6 Section 520.6 Shipping FEDERAL MARITIME COMMISSION REGULATIONS AFFECTING OCEAN SHIPPING IN FOREIGN COMMERCE CARRIER AUTOMATED TARIFFS § 520.6 Retrieval of information. (a) General. Tariffs systems shall present retrievers with the...
46 CFR 520.6 - Retrieval of information.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 9 2013-10-01 2013-10-01 false Retrieval of information. 520.6 Section 520.6 Shipping FEDERAL MARITIME COMMISSION REGULATIONS AFFECTING OCEAN SHIPPING IN FOREIGN COMMERCE CARRIER AUTOMATED TARIFFS § 520.6 Retrieval of information. (a) General. Tariffs systems shall present retrievers with the...
ERIC Educational Resources Information Center
Crestani, Fabio; Dominich, Sandor; Lalmas, Mounia; van Rijsbergen, Cornelis Joost
2003-01-01
Discusses the importance of research on the use of mathematical, logical, and formal methods in information retrieval to help enhance retrieval effectiveness and clarify underlying concepts of information retrieval. Highlights include logic; probability; spaces; and future research needs. (Author/LRW)
46 CFR 520.6 - Retrieval of information.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 9 2011-10-01 2011-10-01 false Retrieval of information. 520.6 Section 520.6 Shipping FEDERAL MARITIME COMMISSION REGULATIONS AFFECTING OCEAN SHIPPING IN FOREIGN COMMERCE CARRIER AUTOMATED TARIFFS § 520.6 Retrieval of information. (a) General. Tariffs systems shall present retrievers with the...
Information Retrieval: A Sequential Learning Process.
ERIC Educational Resources Information Center
Bookstein, Abraham
1983-01-01
Presents decision-theoretic models which intrinsically include retrieval of multiple documents whereby system responds to request by presenting documents to patron in sequence, gathering feedback, and using information to modify future retrievals. Document independence model, set retrieval model, sequential retrieval model, learning model,…
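The present/feedback/update loop that these models formalise can be shown in miniature. The sketch below is a toy term-weighting loop with invented documents and a stand-in relevance judgement; it illustrates the general sequential idea only and is not a reconstruction of the decision-theoretic models themselves.

    # Toy sketch of sequential retrieval: show the best document, gather feedback,
    # and use it to modify future retrievals. Documents and the relevance set are
    # invented; the update rule is a simple term-weight boost, not a formal model.
    from collections import Counter

    docs = {
        "d1": "retrieval model for document ranking",
        "d2": "sequential learning with user feedback",
        "d3": "set retrieval and document independence",
    }
    relevant_docs = {"d2"}  # stand-in for the patron's judgements
    query = Counter("sequential retrieval feedback".split())

    def score(text, weights):
        terms = Counter(text.split())
        return sum(weights[t] * terms[t] for t in weights)

    remaining = dict(docs)
    while remaining:
        best = max(remaining, key=lambda d: score(remaining[d], query))
        print("presenting", best)
        if best in relevant_docs:          # feedback from the patron
            for term in remaining[best].split():
                query[term] += 1           # boost terms seen in relevant documents
        del remaining[best]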
Sound effects: Multimodal input helps infants find displaced objects.
Shinskey, Jeanne L
2017-09-01
Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition. Statement of contribution: What is already known on this subject: Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. Object processing becomes more sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.
Sewell, David K; Lilburn, Simon D; Smith, Philip L
2016-11-01
A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can occur. The need to orient the focus of attention implies that single-object accounts typically predict response time costs associated with object selection even when working memory is not full (i.e., memory load is less than 4 items). For other theories that assume storage of multiple items in the focus of attention, predictions depend on specific assumptions about the way resources are allocated among items held in the focus, and how this affects the time course of retrieval of items from the focus. These broad theoretical accounts have been difficult to distinguish because conventional analyses fail to separate components of empirical response times related to decision-making from components related to selection and retrieval processes associated with accessing information in working memory. To better distinguish these response time components from one another, we analyze data from a probed visual working memory task using extensions of the diffusion decision model. Analysis of model parameters revealed that increases in memory load resulted in (a) reductions in the quality of the underlying stimulus representations in a manner consistent with a sample size model of visual working memory capacity and (b) systematic increases in the time needed to selectively access a probed representation in memory. The results are consistent with single-object theories of the focus of attention. The results are also consistent with a subset of theories that assume a multiobject focus of attention in which resource allocation diminishes both the quality and accessibility of the underlying representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
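To make the decomposition concrete, the sketch below simulates a bare-bones drift-diffusion decision process in which observed response time is the sum of a decision time and a non-decision time (the latter covering selection and retrieval of the probed representation). The parameter values, and the way memory load is mapped onto drift rate and non-decision time, are arbitrary illustrations rather than the fitted model from the study.

    # Illustrative sketch: simulate a simple drift-diffusion decision process to
    # show how response time decomposes into decision and non-decision components.
    import random

    def simulate_trial(drift, boundary, non_decision_time, dt=0.001, noise=1.0):
        """Return (response_time_in_seconds, correct) for one simulated trial."""
        evidence, t = 0.0, 0.0
        while abs(evidence) < boundary:
            evidence += drift * dt + random.gauss(0.0, noise * dt ** 0.5)
            t += dt
        return non_decision_time + t, evidence > 0

    random.seed(1)
    # Higher memory load is modelled (arbitrarily) as a lower drift rate and a
    # longer non-decision time, the latter standing in for slower selective access.
    for load, drift, t0 in [(1, 2.0, 0.30), (4, 1.0, 0.38)]:
        trials = [simulate_trial(drift, 1.0, t0) for _ in range(1000)]
        mean_rt = sum(rt for rt, _ in trials) / len(trials)
        accuracy = sum(ok for _, ok in trials) / len(trials)
        print(f"load={load}: mean RT={mean_rt:.3f}s, accuracy={accuracy:.2f}")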
Reviewing or Retrieving: What Activity Best Promotes Long-Term Retention?
ERIC Educational Resources Information Center
Lindgren, Paul D.
2012-01-01
Research studies repeatedly emphasize the importance of vocabulary capabilities to a large variety of academic activities. This study compared a learning strategy that exclusively involved the visual review of vocabulary word-definition pairs to a strategy that, in addition, prompted participants to attempt free-recall retrieval of words to match…
MATRIS Indexing and Retrieval Thesaurus (MIRT): Keyword Out of Context (KWOC)
1994-08-01
[Garbled excerpt from the KWOC (Keyword Out of Context) index; recoverable terms include self-esteem, self-assessment, self-paced instruction, self-study aids, skill development aids, retrieval aids, visual aids, training aids, and training aids/materials effectiveness.]
ASIST 2001. Information in a Networked World: Harnessing the Flow. Part III: Poster Presentations.
ERIC Educational Resources Information Center
Proceedings of the ASIST Annual Meeting, 2001
2001-01-01
Topics of Poster Presentations include: electronic preprints; intranets; poster session abstracts; metadata; information retrieval; watermark images; video games; distributed information retrieval; subject domain knowledge; data mining; information theory; course development; historians' use of pictorial images; information retrieval software;…
A diagram retrieval method with multi-label learning
NASA Astrophysics Data System (ADS)
Fu, Songping; Lu, Xiaoqing; Liu, Lu; Qu, Jingwei; Tang, Zhi
2015-01-01
In recent years, the retrieval of plane geometry figures (PGFs) has attracted increasing attention in the fields of mathematics education and computer science. However, the high cost of matching complex PGF features leads to the low efficiency of most retrieval systems. This paper proposes an indirect classification method based on multi-label learning, which improves retrieval efficiency by reducing the scope of the comparison operation from the whole database to small candidate groups. Label correlations among PGFs are taken into account for the multi-label classification task. Primitive feature selection for multi-label learning and feature description of the visual geometric elements are conducted individually to match similar PGFs. The experimental results show the competitive performance of the proposed method compared with existing PGF retrieval methods in terms of both time consumption and retrieval quality.
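The indirect-classification idea above (classify first, then run expensive matching only inside the predicted candidate group) can be illustrated with generic tools. In the sketch below the feature vectors and the three labels are synthetic stand-ins, not the paper's PGF descriptors, and a logistic-regression multi-label classifier replaces whatever classifier the authors used.

    # Minimal sketch: a multi-label classifier narrows the search space before
    # expensive matching is applied. All data here are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import MultiOutputClassifier

    rng = np.random.default_rng(0)
    X_db = rng.random((200, 16))              # primitive feature vectors of database figures
    Y_db = (X_db[:, :3] > 0.5).astype(int)    # three synthetic labels per figure

    clf = MultiOutputClassifier(LogisticRegression(max_iter=200)).fit(X_db, Y_db)

    query = rng.random((1, 16))
    pred = clf.predict(query)[0]

    # Candidate group: figures sharing at least one predicted label.
    candidates = np.where((Y_db & pred).any(axis=1))[0]

    # Expensive matching (here just Euclidean distance) restricted to the candidates.
    dists = np.linalg.norm(X_db[candidates] - query, axis=1)
    top = candidates[np.argsort(dists)[:5]]
    print("compared", len(candidates), "of", len(X_db), "figures; closest:", top)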
A semantic medical multimedia retrieval approach using ontology information hiding.
Guo, Kehua; Zhang, Shigeng
2013-01-01
Searching for useful information in unstructured medical multimedia data has been a difficult problem in information retrieval. This paper reports an effective semantic medical multimedia retrieval approach that can reflect users' query intent. First, semantic annotations are assigned to the multimedia documents in the medical multimedia database. Second, the ontology representing this semantic information is hidden in the head of the multimedia documents. The main innovations of this approach are cross-type retrieval support and semantic information preservation. Experimental results indicate good precision and efficiency of our approach for medical multimedia retrieval in comparison with some traditional approaches.
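As a rough illustration of storing a semantic annotation in the head of a multimedia document and then retrieving over that hidden layer, the following sketch uses an invented length-prefixed container and plain JSON concepts; the paper itself embeds an ontology inside the multimedia file's own header format, which is not reproduced here.

    # Hedged sketch: an ontology-style annotation is packed in front of the media
    # payload, and retrieval runs over the extracted annotations, not the raw data.
    # The container format and the concepts below are invented examples.
    import json

    def pack(media_bytes, annotation):
        head = json.dumps(annotation).encode("utf-8")
        return len(head).to_bytes(4, "big") + head + media_bytes

    def unpack(blob):
        n = int.from_bytes(blob[:4], "big")
        return json.loads(blob[4:4 + n]), blob[4 + n:]

    store = [
        pack(b"<jpeg bytes>", {"modality": "CT", "anatomy": "chest", "finding": "nodule"}),
        pack(b"<wav bytes>", {"modality": "audio", "anatomy": "heart", "finding": "murmur"}),
    ]

    # Cross-type retrieval over the hidden semantic layer only.
    query = {"anatomy": "chest"}
    hits = [unpack(blob)[0] for blob in store
            if all(unpack(blob)[0].get(k) == v for k, v in query.items())]
    print(hits)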
A Prototype System for Retrieval of Gene Functional Information
Folk, Lillian C.; Patrick, Timothy B.; Pattison, James S.; Wolfinger, Russell D.; Mitchell, Joyce A.
2003-01-01
Microarrays allow researchers to gather data about the expression patterns of thousands of genes simultaneously. Statistical analysis can reveal which genes show statistically significant results. Making biological sense of those results requires the retrieval of functional information about the genes thus identified, typically a manual gene-by-gene retrieval of information from various on-line databases. For experiments generating thousands of genes of interest, retrieval of functional information can become a significant bottleneck. To address this issue, we are currently developing a prototype system to automate the process of retrieval of functional information from multiple on-line sources. PMID:14728346
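The aggregation pattern implied here, fanning one gene identifier out to several functional-annotation sources and merging the results into a single record, is easy to sketch. The "sources" below are stub functions over hard-coded dictionaries rather than real on-line databases, and the gene list is hypothetical; the sketch only shows the shape of the automation.

    # Sketch of automated aggregation of gene functional information.
    # The two "sources" are hypothetical stubs standing in for on-line databases.
    def annotation_source(data):
        return lambda gene: data.get(gene, {})

    go_terms = annotation_source({"BRCA1": {"go": ["DNA repair"]},
                                  "TP53":  {"go": ["apoptotic process"]}})
    pathways = annotation_source({"BRCA1": {"pathway": ["homologous recombination"]},
                                  "TP53":  {"pathway": ["p53 signaling"]}})

    def functional_record(gene, sources):
        """Merge whatever each source knows about the gene into one record."""
        record = {"gene": gene}
        for source in sources:
            record.update(source(gene))
        return record

    # Genes flagged as significant by the microarray analysis (hypothetical list).
    for gene in ["BRCA1", "TP53"]:
        print(functional_record(gene, [go_terms, pathways]))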
Digital Image Access & Retrieval.
ERIC Educational Resources Information Center
Heidorn, P. Bryan, Ed.; Sandore, Beth, Ed.
Recent technological advances in computing and digital imaging technology have had immediate and permanent consequences for visual resource collections. Libraries are involved in organizing and managing large visual resource collections. The central challenges in working with digital image collections mirror those that libraries have sought to…
Mann, G; Birkmann, C; Schmidt, T; Schaeffler, V
1999-01-01
Introduction: Present solutions for the representation and retrieval of medical information from online sources are not very satisfying. Either the retrieval process lacks precision and completeness, or the representation does not support the update and maintenance of the represented information. Most current efforts go into improving the combination of search engines and HTML-based documents. However, due to the current shortcomings of methods for natural language understanding, there are clear limitations to this approach. Furthermore, it does not solve the maintenance problem. At least medical information exceeding a certain complexity seems to require approaches that rely on structured knowledge representation and corresponding retrieval mechanisms. Methods: Knowledge-based information systems are based on the following fundamental ideas. The representation of information is based on ontologies that define the structure of the domain's concepts and their relations. Views on domain models are defined and represented as retrieval schemata. Retrieval schemata can be interpreted as canonical query types focusing on specific aspects of the provided information (e.g., diagnosis- or therapy-centred views). Based on these retrieval schemata, it can be decided which parts of the information in the domain model must be represented explicitly and formalised to support the retrieval process. Propositional logic is used as the representation language. All other information can be represented in a structured but informal way using text, images, etc. Layout schemata are used to assign layout information to retrieved domain concepts; depending on the target environment, HTML or XML can be used. Results: Based on this approach, two knowledge-based information systems have been developed. The 'Ophthalmologic Knowledge-based Information System for Diabetic Retinopathy' (OKIS-DR) provides information on diagnoses, findings, examinations, guidelines, and reference images related to diabetic retinopathy. OKIS-DR uses combinations of findings to specify the information that must be retrieved. The second system focuses on nutrition-related allergies and intolerances. Information on a patient's allergies and intolerances is used to retrieve general information on the specified combination of allergies and intolerances. As a special feature, the system generates tables showing food types and products that are or are not tolerated by patients. Evaluation by external experts and user groups showed that the described approach of knowledge-based information systems increases the precision and completeness of knowledge retrieval. Due to the structured and non-redundant representation of information, maintenance and updating of the information can be simplified. Both systems are available as WWW-based online knowledge bases and on CD-ROM (cf. http://mta.gsf.de topic: products).
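The retrieval-schema idea, canonical query types evaluated over a small formalised core of each concept while the bulk of the content stays informal, can be sketched in a few lines. The concepts, propositions, and view below are invented stand-ins and do not reproduce the OKIS-DR domain model.

    # Loose sketch of retrieval schemata over propositionally formalised concepts.
    # Concept names and propositions are invented examples for illustration only.
    concepts = [
        {"name": "Background retinopathy",
         "props": {"microaneurysms", "no_neovascularisation"},
         "content": "Informal text, reference images, guideline links ..."},
        {"name": "Proliferative retinopathy",
         "props": {"neovascularisation", "vitreous_haemorrhage_risk"},
         "content": "Informal text, reference images, guideline links ..."},
    ]

    def retrieval_schema(required, forbidden=frozenset()):
        """A canonical query type: select concepts whose formal propositions
        contain all required findings and none of the forbidden ones."""
        def run(domain_model):
            return [c for c in domain_model
                    if required <= c["props"] and not (forbidden & c["props"])]
        return run

    # A diagnosis-centred view driven by observed findings.
    diagnosis_view = retrieval_schema(required={"neovascularisation"})
    for concept in diagnosis_view(concepts):
        print(concept["name"], "->", concept["content"])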
Encoding context and false recognition memories.
Bruce, Darryl; Phillips-Grant, Kimberly; Conrad, Nicole; Bona, Susan
2004-09-01
False recognition of an extralist word that is thematically related to all words of a study list may reflect internal activation of the theme word during encoding followed by impaired source monitoring at retrieval, that is, difficulty in determining whether the word had actually been experienced or merely thought of. To assist source monitoring, distinctive visual or verbal contexts were added to study words at input. Both types of context produced similar effects: False alarms to theme-word (critical) lures were reduced; remember judgements of critical lures called old were lower; and if contextual information had been added to lists, subjects indicated as much for list items and associated critical foils identified as old. The visual and verbal contexts used in the present studies were held to disrupt semantic categorisation of list words at input and to facilitate source monitoring at output.