Matti, J.C.; Morton, D.M.; Langenheim, V.E.
2015-01-01
Geologic information contained in the El Casco database is general-purpose data applicable to land-related investigations in the earth and biological sciences. The term “general-purpose” means that all geologic-feature classes have minimal information content adequate to characterize their general geologic characteristics and to interpret their general geologic history. However, no single feature class has enough information to definitively characterize its properties and origin. For this reason the database cannot be used for site-specific geologic evaluations, although it can be used to plan and guide investigations at the site-specific level.
Feature Analysis of Generalized Data Base Management Systems.
ERIC Educational Resources Information Center
Conference on Data Systems Languages, Monroeville, PA. Systems Committee.
A more complete definition of the features offered in present day generalized data base management systems is provided by this second technical report of the CODASYL Systems Committee. In a tutorial format, each feature description is followed by either narrative information covering ten systems or by a table for all systems. The ten systems…
Generalized Feature Extraction for Wrist Pulse Analysis: From 1-D Time Series to 2-D Matrix.
Dimin Wang; Zhang, David; Guangming Lu
2017-07-01
Traditional Chinese pulse diagnosis, known as an empirical science, depends on subjective experience, so different practitioners may reach inconsistent diagnostic results. A scientific way of studying the pulse is to analyze objectified wrist pulse waveforms. In recent years, many pulse acquisition platforms have been developed with advances in sensor and computer technology, and pulse diagnosis using pattern recognition theories is attracting increasing attention. Although many papers on pulse feature extraction have been published, they treat the pulse signals as simple 1-D time series and ignore the information within the class. This paper presents a generalized method of pulse feature extraction, extending the feature dimension from 1-D time series to a 2-D matrix. The conventional wrist pulse features correspond to a particular case of the generalized models. The proposed method is validated through pattern classification on actual pulse records. Both quantitative and qualitative results relative to the 1-D pulse features are given through diabetes diagnosis. The experimental results show that the generalized 2-D matrix feature is effective in extracting both periodic and nonperiodic information and is practical for wrist pulse analysis.
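The core idea of moving from a 1-D series to a 2-D matrix can be pictured as folding the signal into one row per pulse period, so that periodic structure runs down the columns and beat-to-beat variation runs across the rows. The function name and fixed-period assumption below are illustrative, not the paper's actual formulation:

```python
import numpy as np

def fold_to_matrix(signal, period):
    """Fold a 1-D pulse signal into a 2-D matrix with one row per period.

    Columns then carry periodic structure, while differences between
    rows expose nonperiodic, beat-to-beat variation.
    """
    n_periods = len(signal) // period
    return np.asarray(signal[:n_periods * period]).reshape(n_periods, period)

# Toy example: a noiseless periodic signal folds into identical rows.
t = np.arange(300)
pulse = np.sin(2 * np.pi * t / 50)
mat = fold_to_matrix(pulse, period=50)
print(mat.shape)  # (6, 50)
```

A real pulse record would need period detection first; here the period is assumed known.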
Trajectory analysis via a geometric feature space approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rintoul, Mark D.; Wilson, Andrew T.
2015-10-05
This study aimed to organize a body of trajectories in order to identify, search for, and classify both common and uncommon behaviors among objects such as aircraft and ships. Existing comparison functions such as the Fréchet distance are computationally expensive and yield counterintuitive results in some cases. We propose an approach using feature vectors whose components succinctly represent the salient information in trajectories. These features incorporate basic information such as the total distance traveled and the distance between start/stop points, as well as geometric features related to the properties of the convex hull, trajectory curvature, and general distance geometry. Additionally, these features can generally be mapped easily to behaviors of interest to humans who are searching large databases. Most of these geometric features are invariant under rigid transformation. Furthermore, we demonstrate the use of different subsets of these features to identify trajectories similar to an exemplar, cluster a database of several hundred thousand trajectories, and identify outliers.
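A few of the simpler geometric features named above (total distance traveled, distance between start/stop points, and a straightness ratio derived from them) can be sketched in Python. This is an illustrative subset under the assumption of 2-D point sequences, not the authors' full feature vector:

```python
import numpy as np

def trajectory_features(points):
    """Compute a small feature vector for a 2-D trajectory.

    Features: total distance traveled along the path, straight-line
    distance between the start and end points, and their ratio
    (a crude straightness/curvature indicator). All three are
    invariant under rigid transformations, as the text notes.
    """
    pts = np.asarray(points, dtype=float)
    steps = np.diff(pts, axis=0)
    total = np.linalg.norm(steps, axis=1).sum()
    endpoint = np.linalg.norm(pts[-1] - pts[0])
    straightness = endpoint / total if total > 0 else 0.0
    return {"total_distance": total,
            "endpoint_distance": endpoint,
            "straightness": straightness}

# A straight path has straightness 1.0; an L-shaped path does not.
straight = trajectory_features([(0, 0), (1, 0), (2, 0)])
bent = trajectory_features([(0, 0), (1, 0), (1, 1)])
print(straight["straightness"])  # 1.0
```

Hull-based and distance-geometry features from the abstract would extend this dictionary in the same spirit.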
A feature-based inference model of numerical estimation: the split-seed effect.
Murray, Kyle B; Brown, Norman R
2009-07-01
Prior research has identified two modes of quantitative estimation: numerical retrieval and ordinal conversion. In this paper we introduce a third mode, which operates by a feature-based inference process. In contrast to prior research, the results of three experiments demonstrate that people estimate automobile prices by combining metric information associated with two critical features: product class and brand status. In addition, Experiments 2 and 3 demonstrated that when participants are seeded with the actual current base price of one of the to-be-estimated vehicles, they respond by revising the general metric and splitting the information carried by the seed between the two critical features. As a result, the degree of post-seeding revision is directly related to the number of these features that the seed and the transfer items have in common. The paper concludes with a general discussion of the practical and theoretical implications of our findings.
A Comparative Evaluation of Videodiscs for General Biology.
ERIC Educational Resources Information Center
Ralph, Charles L.
1995-01-01
Provides a brief profile of the currently available videodiscs for general biology, with comparable information for each. An introduction discusses benefits and problems associated with videodisc use in the classroom. Profiles contain information on description, good and bad features, still images, animations and movies, audio, software,…
Marques, J Frederico
2007-12-01
The deterioration of semantic memory usually proceeds from more specific to more general superordinate categories, although rarer cases of superordinate knowledge impairment have also been reported. The nature of superordinate knowledge and the explanation of these two semantic impairments were evaluated from the analysis of superordinate and basic-level feature norms. The results show that, in comparison to basic-level concepts, superordinate concepts are not generally less informative and have similar feature distinctiveness and proportion of individual sensory features, but their features are less shared by their members. Results are in accord with explanations based on feature connection weights and/or concept confusability for the superordinate advantage cases. Results especially support an explanation for superordinate impairments in terms of higher semantic control requirements as related to features being less shared between concept members. Implications for patients with semantic impairments are also discussed.
The effect of feature selection methods on computer-aided detection of masses in mammograms
NASA Astrophysics Data System (ADS)
Hupse, Rianne; Karssemeijer, Nico
2010-05-01
In computer-aided diagnosis (CAD) research, feature selection methods are often used to improve generalization performance of classifiers and shorten computation times. In an application that detects malignant masses in mammograms, we investigated the effect of using a selection criterion that is similar to the final performance measure we are optimizing, namely the mean sensitivity of the system in a predefined range of the free-response receiver operating characteristics (FROC). To obtain the generalization performance of the selected feature subsets, a cross validation procedure was performed on a dataset containing 351 abnormal and 7879 normal regions, each region providing a set of 71 mass features. The same number of noise features, not containing any information, were added to investigate the ability of the feature selection algorithms to distinguish between useful and non-useful features. It was found that significantly higher performances were obtained using feature sets selected by the general test statistic Wilks' lambda than using feature sets selected by the more specific FROC measure. Feature selection leads to better performance when compared to a system in which all features were used.
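For a single feature and two classes, the Wilks' lambda criterion mentioned above reduces to the ratio of within-group to total sum of squares, with values near 0 indicating a discriminative feature and values near 1 a noise-like one. The sketch below is a minimal univariate illustration on synthetic data, not the study's actual pipeline:

```python
import numpy as np

def wilks_lambda(x, y):
    """Univariate Wilks' lambda: within-group sum of squares divided
    by total sum of squares. Smaller values indicate a feature that
    separates the classes better."""
    x, y = np.asarray(x, float), np.asarray(y)
    ss_total = ((x - x.mean()) ** 2).sum()
    ss_within = sum(((x[y == g] - x[y == g].mean()) ** 2).sum()
                    for g in np.unique(y))
    return ss_within / ss_total

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 200)
# One informative feature (shifted means) and one pure-noise feature,
# mimicking the noise features added in the experiment above.
informative = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
noise = rng.normal(0, 1, 400)
print(wilks_lambda(informative, labels) < wilks_lambda(noise, labels))  # True
```

Ranking features by ascending lambda then yields a selection order, which is the role the statistic plays in the study.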
A survey of mass analyzers. [characteristics and features of various instruments and techniques
NASA Technical Reports Server (NTRS)
Moore, W. W., Jr.; Tashbar, P. W.
1973-01-01
With the increasing application of mass spectrometry technology to diverse service areas, a need has developed for a consolidated survey of the essential characteristics and features of the various instruments and techniques. This report is one approach to satisfying this need. Information has been collected and consolidated into a format which includes, for each approach: (1) a general technique description, (2) instrument features information, and (3) a summary of pertinent advantages and disadvantages. With this information, the potential mass spectrometer user should be able to select the most appropriate instrument more efficiently.
Technique and cue selection for graphical presentation of generic hyperdimensional data
NASA Astrophysics Data System (ADS)
Howard, Lee M.; Burton, Robert P.
2013-12-01
Several presentation techniques have been created for visualization of data with more than three variables, and packages have been written, each of which implements a subset of these techniques. However, these packages generally fail to provide all the features needed by the user during the visualization process, and they generally limit support to a few presentation techniques. A new package called Petrichor accommodates all necessary and useful features together in one system. Any presentation technique may be added easily through an extensible plugin system. Features are supported by a user interface that allows easy interaction with data, and annotations allow users to mark up visualizations and share information with others. By providing a hyperdimensional graphics package that easily accommodates presentation techniques and includes a complete set of features, including those that are rarely or never supported elsewhere, Petrichor gives users a tool that facilitates improved interaction with multivariate data to extract and disseminate information.
Hajdukiewicz, John R; Vicente, Kim J
2002-01-01
Ecological interface design (EID) is a theoretical framework that aims to support worker adaptation to change and novelty in complex systems. Previous evaluations of EID have emphasized representativeness to enhance generalizability of results to operational settings. The research presented here is complementary, emphasizing experimental control to enhance theory building. Two experiments were conducted to test the impact of functional information and emergent feature graphics on adaptation to novelty and change in a thermal-hydraulic process control microworld. Presenting functional information in an interface using emergent features encouraged experienced participants to become perceptually coupled to the interface and thereby to exhibit higher-level control and more successful adaptation to unanticipated events. The absence of functional information or of emergent features generally led to lower-level control and less success at adaptation, the exception being a minority of participants who compensated by relying on analytical reasoning. These findings may have practical implications for shaping coordination in complex systems and fundamental implications for the development of a general unified theory of coordination for the technical, human, and social sciences. Actual or potential applications of this research include the design of human-computer interfaces that improve safety in complex sociotechnical systems.
Task-relevant perceptual features can define categories in visual memory too.
Antonelli, Karla B; Williams, Carrick C
2017-11-01
Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.
Approaches to defining reference regimes for river restoration planning
NASA Astrophysics Data System (ADS)
Beechie, T. J.
2014-12-01
Reference conditions or reference regimes can be defined using three general approaches: historical analysis, contemporary reference sites, and theoretical or empirical models. For large features (e.g., floodplain channels and ponds), historical data and maps are generally reliable. For smaller features (e.g., pools and riffles in small tributaries), field data from contemporary reference sites are a reasonable surrogate for historical data. Models are generally used for features that have no historical information or present-day reference sites (e.g., beaver pond habitat). Each of these approaches contributes to a watershed-wide understanding of current biophysical conditions relative to potential conditions, which helps create a guiding vision for restoration and also helps quantify and locate the largest or most important restoration opportunities. Common uses of geomorphic and biological reference conditions include identifying key areas for habitat protection or restoration and informing the choice of restoration targets. Examples of the use of each of these three approaches to define reference regimes in the western USA illustrate how historical information and current research highlight key restoration opportunities, focus restoration effort in areas that can produce the largest ecological benefit, and contribute to estimating restoration potential and assessing the likelihood of achieving restoration goals.
ANALYSIS OF CLINICAL AND DERMOSCOPIC FEATURES FOR BASAL CELL CARCINOMA NEURAL NETWORK CLASSIFICATION
Cheng, Beibei; Stanley, R. Joe; Stoecker, William V; Stricklin, Sherea M.; Hinton, Kristen A.; Nguyen, Thanh K.; Rader, Ryan K.; Rabinovitz, Harold S.; Oliviero, Margaret; Moss, Randy H.
2012-01-01
Background Basal cell carcinoma (BCC) is the most commonly diagnosed cancer in the United States. In this research, we examine four different feature categories used for diagnostic decisions, including patient personal profile (patient age, gender, etc.), general exam (lesion size and location), common dermoscopic (blue-gray ovoids, leaf-structure dirt trails, etc.), and specific dermoscopic lesion (white/pink areas, semitranslucency, etc.). Specific dermoscopic features are more restricted versions of the common dermoscopic features. Methods Combinations of the four feature categories are analyzed over a data set of 700 lesions, with 350 BCCs and 350 benign lesions, for lesion discrimination using neural network-based techniques, including Evolving Artificial Neural Networks and Evolving Artificial Neural Network Ensembles. Results Experiment results based on ten-fold cross validation for training and testing the different neural network-based techniques yielded an area under the receiver operating characteristic curve as high as 0.981 when all features were combined. The common dermoscopic lesion features generally yielded higher discrimination results than other individual feature categories. Conclusions Experimental results show that combining clinical and image information provides enhanced lesion discrimination capability over either information source separately. This research highlights the potential of data fusion as a model for the diagnostic process. PMID:22724561
Secondary School Design: Workshop Crafts.
ERIC Educational Resources Information Center
Department of Education and Science, London (England).
Design features are described for school shop facilities. Some general requirements common to most workshops are discussed; and specific design information is provided for general woodwork, general metalwork, and combined wood and metalwork facilities. The grouping of the workshop crafts and their relation to other parts of the school are also…
Longitudinal Validation of General and Specific Structural Features of Personality Pathology
Wright, Aidan G.C.; Hopwood, Christopher J.; Skodol, Andrew E.; Morey, Leslie C.
2016-01-01
Theorists have long argued that personality disorder (PD) is best understood in terms of general impairments shared across the disorders as well as more specific instantiations of pathology. A model based on this theoretical structure was proposed as part of the DSM-5 revision process. However, only recently has this structure been subjected to formal quantitative evaluation, with little in the way of validation efforts via external correlates or prospective longitudinal prediction. We used the Collaborative Longitudinal Study of Personality Disorders dataset to: (1) estimate structural models that parse general from specific variance in personality disorder features, (2) examine patterns of growth in general and specific features over the course of 10 years, and (3) establish concurrent and dynamic longitudinal associations in PD features and a host of external validators including basic personality traits and psychosocial functioning scales. We found that general PD exhibited much lower absolute stability and was most strongly related to broad markers of psychosocial functioning, concurrently and longitudinally, whereas specific features had much higher mean stability and exhibited more circumscribed associations with functioning. However, both general and specific factors showed recognizable associations with normative and pathological traits. These results can inform efforts to refine the conceptualization and diagnosis of personality pathology. PMID:27819472
Learning to rank diversified results for biomedical information retrieval from multiple features.
Wu, Jiajin; Huang, Jimmy; Ye, Zheng
2014-01-01
Different from traditional information retrieval (IR), promoting diversity in IR takes into consideration the relationships between documents in order to promote novelty and reduce redundancy, thus providing diversified results to satisfy various user intents. Diversity IR in the biomedical domain is especially important, as biologists sometimes want diversified results pertinent to their query. A combined learning-to-rank (LTR) framework is learned through a general ranking model (gLTR) and a diversity-biased model. The former is learned from general ranking features by a conventional learning-to-rank approach; the latter is constructed with added diversity-indicating features, which are extracted based on the topics of the retrieved passages (detected using Wikipedia) and the ranking order produced by the general learning-to-rank model; final ranking results are given by the combination of both models. Compared with the baselines BM25 and DirKL on the 2006 and 2007 collections, gLTR achieves 0.2292 (+16.23% and +44.1% improvement over BM25 and DirKL, respectively) and 0.1873 (+15.78% and +39.0% improvement over BM25 and DirKL, respectively) in terms of aspect-level mean average precision (Aspect MAP). The LTR method outperforms gLTR on the 2006 and 2007 collections with 4.7% and 2.4% improvement in terms of Aspect MAP. The learning-to-rank method is an efficient way to perform biomedical information retrieval, and the diversity-biased features are beneficial for promoting diversity in ranking results.
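One simple way to picture combining a general ranking model with a diversity-biased model is a weighted combination of their scores. The alpha weight, dictionary API, and document names below are hypothetical illustrations; the abstract does not spell out the paper's exact combination scheme:

```python
def combine_rankings(general_scores, diversity_scores, alpha=0.5):
    """Re-rank documents by a weighted sum of a general relevance
    score and a diversity-biased score (alpha is a hypothetical
    mixing weight, not a parameter from the paper)."""
    combined = {doc: alpha * general_scores[doc]
                     + (1 - alpha) * diversity_scores.get(doc, 0.0)
                for doc in general_scores}
    return sorted(combined, key=combined.get, reverse=True)

# d2 and d3 cover novel aspects, so they overtake d1 once the
# diversity-biased score is mixed in.
general = {"d1": 0.9, "d2": 0.8, "d3": 0.4}
diversity = {"d1": 0.1, "d2": 0.9, "d3": 0.8}
print(combine_rankings(general, diversity))  # ['d2', 'd3', 'd1']
```

With alpha=1.0 the general model's order is recovered, which matches the framework's decomposition into a gLTR baseline plus a diversity-biased correction.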
A feature selection approach towards progressive vector transmission over the Internet
NASA Astrophysics Data System (ADS)
Miao, Ru; Song, Jia; Feng, Min
2017-09-01
WebGIS has become a popular way of visualizing and sharing geospatial information over the Internet. To improve the efficiency of client applications, a web-based progressive vector transmission approach is proposed: important features should be selected and transferred first, so methods for measuring the importance of features must be considered. However, studies on progressive transmission of large-volume vector data have mostly focused on map generalization in the field of cartography and have rarely discussed the quantitative selection of geographic features. This paper applies information theory to measuring the feature importance of vector maps. A measurement model for the amount of information carried by vector features is defined to address feature selection; the model involves a geometry factor, a spatial distribution factor, and a thematic attribute factor. A real-time transport protocol (RTP)-based progressive transmission method is then presented to improve the transmission of vector data. To demonstrate the essential methodology and key techniques, a prototype for web-based progressive vector transmission is presented, and an experiment on progressive selection and transmission of vector features is conducted. The experimental results indicate that our approach clearly improves the performance and end-user experience of delivering and manipulating large vector data over the Internet.
Zhao, Xiao-Wei; Ma, Zhi-Qiang; Yin, Ming-Hao
2012-05-01
Knowledge of protein-protein interactions (PPIs) plays an important role in constructing protein interaction networks and understanding the general machinery of biological systems. In this study, a new method is proposed to predict PPIs using a comprehensive set of 930 features based only on sequence information; these features measure, from different aspects, the interactions between residues a certain distance apart in the protein sequences. To achieve better performance, principal component analysis (PCA) is first employed to obtain an optimized feature subset. Then, the resulting 67-dimensional feature vectors are fed to a Support Vector Machine (SVM). Experimental results on Drosophila melanogaster and Helicobacter pylori datasets show that our method is very promising for predicting PPIs and may at least serve as a useful supplement to existing methods.
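The PCA step that compresses the 930 raw sequence features into a 67-dimensional subspace can be sketched with a plain SVD. The random matrix below is a stand-in for real protein-pair features, and the subsequent SVM stage is omitted; this is a sketch of the dimensionality-reduction step only, not the paper's implementation:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto the top principal components.

    SVD of the centered data gives the principal axes as rows of Vt;
    keeping the first n_components rows mirrors the pipeline step that
    compresses 930 sequence features to 67 dimensions before the SVM.
    """
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 930))   # 200 protein pairs, 930 raw features
Z = pca_reduce(X, 67)
print(Z.shape)  # (200, 67)
```

The reduced vectors Z would then be passed to an SVM classifier for interaction prediction.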
IMMAN: free software for information theory-based chemometric analysis.
Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo
2015-05-01
The features and theoretical background of a new and free computational program for chemometric analysis denominated IMMAN (acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches in each case. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, as well as the introduction of Jeffreys information measure for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty are incorporated to the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing values processing, dataset partitioning, and browsing. Moreover, single parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, as well as comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. 
In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA supervised algorithms. The software also provides a graphical representation of the Shannon entropy distribution of the computed MDs.
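The information gain criterion that IMMAN shares with WEKA has a compact definition: the entropy of the class labels minus the expected conditional entropy after splitting on the feature. Below is a minimal sketch for already-discretized features (IMMAN's actual implementation also handles equal-interval discretization, which is omitted here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Information gain of a discrete feature with respect to class
    labels: H(class) minus the expected entropy of the classes within
    each feature value."""
    n = len(labels)
    cond = sum(
        feature.count(v) / n
        * entropy([l for f, l in zip(feature, labels) if f == v])
        for v in set(feature))
    return entropy(labels) - cond

labels  = ["a", "a", "b", "b"]
perfect = [0, 0, 1, 1]    # predicts the class exactly
useless = [0, 1, 0, 1]    # independent of the class
print(information_gain(perfect, labels))  # 1.0
print(information_gain(useless, labels))  # 0.0
```

Ranking features by this quantity gives the supervised ordering that both tools expose.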
The Chesapeake: A Boating Guide to Weather. Educational Series Number 25.
ERIC Educational Resources Information Center
Lucy, Jon; And Others
The purpose of this publication is to promote a better understanding of how basic weather features develop on Chesapeake Bay and enable boaters to enjoy the Bay's unique waterways. Sections include: (1) Chesapeake Bay climate; (2) general weather features; (3) seasonal trends; (4) sources of weather information and forecasts; (5) weather service…
African Studies Curriculum Materials for Teachers. Second Edition.
ERIC Educational Resources Information Center
Illinois Univ., Urbana. Center for African Studies.
This handbook features an exhaustive collection of African studies curriculum materials considered most appropriate for teachers. The material is coded for elementary school, middle school, senior high school/adult, and general interest. Material is presented in the following chapters: "General Information" which contains fact sheets on…
NASA Technical Reports Server (NTRS)
Abrams, M.
1982-01-01
Studies of the effects of spatial resolution on the extraction of geologic information are woefully lacking, but spatial resolution effects can be examined as they influence two general categories: the detection of spatial features per se, and the effects of IFOV on the definition of spectral signatures and on general mapping abilities.
Friberg, Anders; Schoonderwaldt, Erwin; Hedblad, Anton; Fabiani, Marco; Elowsson, Anders
2014-10-01
The notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features, aiming to approach the underlying human perception mechanisms. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated from an ecological approach. The perceptual features were rated in two listening experiments using two different data sets. They were modeled both from symbolic and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features are reliable estimates; (2) emotion ratings could be predicted by a small combination of perceptual features with an explained variance from 75% to 93% for the emotional dimensions activity and valence; (3) the perceptual features could only to a limited extent be modeled using existing audio features. Results clearly indicated that a small number of dedicated features were superior to a "brute force" model using a large number of general audio features.
Classification of clinically useful sentences in clinical evidence resources.
Morid, Mohammad Amin; Fiszman, Marcelo; Raja, Kalpana; Jonnalagadda, Siddhartha R; Del Fiol, Guilherme
2016-04-01
Most patient care questions raised by clinicians can be answered by online clinical knowledge resources. However, important barriers still challenge the use of these resources at the point of care. Our objective was to design and assess a method for extracting clinically useful sentences from synthesized online clinical resources that represent the most clinically useful information for directly answering clinicians' information needs. We developed a Kernel-based Bayesian Network classification model based on different domain-specific feature types extracted from sentences in a gold standard composed of 18 UpToDate documents. These features included UMLS concepts and their semantic groups, semantic predications extracted by SemRep, patient populations identified by a pattern-based natural language processing (NLP) algorithm, and cue words extracted by a feature selection technique. Algorithm performance was measured in terms of precision, recall, and F-measure. The feature-rich approach yielded an F-measure of 74% versus 37% for a feature co-occurrence method (p<0.001). Excluding predication, population, semantic concept, or text-based features reduced the F-measure to 62%, 66%, 58%, and 69%, respectively (p<0.01). The classifier applied to Medline sentences reached an F-measure of 73%, which is equivalent to its performance on UpToDate sentences (p=0.62). The feature-rich approach significantly outperformed general baseline methods and classifiers based on a single type of feature. Different types of semantic features provided a unique contribution to overall classification performance, and the classifier's model and features used for UpToDate generalized well to Medline abstracts.
Ordinal measures for iris recognition.
Sun, Zhenan; Tan, Tieniu
2009-12-01
Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.
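The essence of an ordinal measure, comparing image regions qualitatively rather than measuring them precisely, can be sketched with a single region-pair comparison. The slice-based regions below are a simplified stand-in for the paper's multilobe differential filters:

```python
import numpy as np

def ordinal_code(image, region_a, region_b):
    """Binary ordinal measure: is the mean intensity of region A
    greater than that of region B? The qualitative comparison is
    stable under monotonic illumination changes, which is the key
    robustness property the paper exploits. Regions are given as
    (row_slice, col_slice) index pairs."""
    return int(image[region_a].mean() > image[region_b].mean())

img = np.array([[10, 10, 50, 60],
                [12,  9, 55, 58]], dtype=float)
left  = (slice(None), slice(0, 2))
right = (slice(None), slice(2, 4))
code = ordinal_code(img, left, right)
brighter = ordinal_code(img * 3 + 7, left, right)  # illumination change
print(code, brighter)  # 0 0
```

A real iris code concatenates thousands of such bits computed by filters with varying location, scale, orientation, and interlobe distance.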
Can two dots form a Gestalt? Measuring emergent features with the capacity coefficient.
Hawkins, Robert X D; Houpt, Joseph W; Eidels, Ami; Townsend, James T
2016-09-01
While there is widespread agreement among vision researchers on the importance of some local aspects of visual stimuli, such as hue and intensity, there is no general consensus on a full set of basic sources of information used in perceptual tasks or how they are processed. Gestalt theories place particular value on emergent features, which are based on the higher-order relationships among elements of a stimulus rather than local properties. Thus, arbitrating between different accounts of features is an important step in arbitrating between local and Gestalt theories of perception in general. In this paper, we present the capacity coefficient from Systems Factorial Technology (SFT) as a quantitative approach for formalizing and rigorously testing predictions made by local and Gestalt theories of features. As a simple, easily controlled domain for testing this approach, we focus on the local feature of location and the emergent features of Orientation and Proximity in a pair of dots. We introduce a redundant-target change detection task to compare our capacity measure on (1) trials where the configuration of the dots changed along with their location against (2) trials where the amount of local location change was exactly the same, but there was no change in the configuration. Our results, in conjunction with our modeling tools, favor the Gestalt account of emergent features. We conclude by suggesting several candidate information-processing models that incorporate emergent features, which follow from our approach. Copyright © 2015 Elsevier Ltd. All rights reserved.
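The capacity coefficient for a redundant-target ("OR") design can be estimated from response-time samples as C(t) = H_AB(t) / (H_A(t) + H_B(t)), where H(t) = -log S(t) is the integrated hazard of the empirical survivor function; C(t) > 1 indicates super-capacity, as predicted when an emergent feature is processed. The response times below are invented for illustration.

```python
import numpy as np

def cum_hazard(rts, t):
    """H(t) = -log S(t), with S(t) the empirical survivor function."""
    s = np.mean(np.asarray(rts) > t)
    return -np.log(s) if s > 0 else np.inf

def capacity_or(rt_double, rt_single_a, rt_single_b, t):
    """Capacity coefficient for a redundant-target (OR) design."""
    denom = cum_hazard(rt_single_a, t) + cum_hazard(rt_single_b, t)
    return cum_hazard(rt_double, t) / denom

# Hypothetical response times (ms): redundant-target trials are faster.
rt_ab = [250, 300, 350, 400]
rt_a = [300, 350, 400, 450]
rt_b = [300, 350, 400, 450]
c = capacity_or(rt_ab, rt_a, rt_b, t=325)   # > 1: super-capacity
```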
Assessment of Homomorphic Analysis for Human Activity Recognition from Acceleration Signals.
Vanrell, Sebastian Rodrigo; Milone, Diego Humberto; Rufiner, Hugo Leonardo
2017-07-03
Unobtrusive activity monitoring can provide valuable information for medical and sports applications. In recent years, human activity recognition has moved to wearable sensors to deal with unconstrained scenarios. Accelerometers are the preferred sensors due to their simplicity and availability. Previous studies have examined several classic techniques for extracting features from acceleration signals, including time-domain, time-frequency, frequency-domain, and other heuristic features. Spectral and temporal features are the preferred ones, and they are generally computed from acceleration components, leaving the potential of the acceleration magnitude unexplored. In this study, based on homomorphic analysis, a new type of feature extraction stage is proposed in order to exploit discriminative activity information present in acceleration signals. Homomorphic analysis can isolate the information about whole-body dynamics and translate it into a compact representation, called cepstral coefficients. Experiments have explored several configurations of the proposed features, including size of representation, signals to be used, and fusion with other features. Cepstral features computed from the acceleration magnitude obtained one of the highest recognition rates. In addition, a beneficial contribution was found when time-domain and moving-pace information was included in the feature vector. Overall, the proposed system achieved a recognition rate of 91.21% on the publicly available SCUT-NAA dataset. To the best of our knowledge, this is the highest recognition rate on this dataset.
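A minimal sketch of the cepstral representation described above: the real cepstrum is the inverse Fourier transform of the log magnitude spectrum, and its first coefficients compactly summarize the spectral envelope of the acceleration magnitude. The signal below is synthetic, not SCUT-NAA data.

```python
import numpy as np

def real_cepstrum(x, n_coeffs=12):
    """First cepstral coefficients of a 1-D signal."""
    spectrum = np.abs(np.fft.fft(x))
    log_spec = np.log(spectrum + 1e-12)   # avoid log(0)
    return np.fft.ifft(log_spec).real[:n_coeffs]

# Synthetic tri-axial accelerometer samples around 1 g on the z axis.
rng = np.random.default_rng(0)
acc = rng.normal([0.0, 0.0, 1.0], 0.1, size=(256, 3))
mag = np.linalg.norm(acc, axis=1)         # acceleration magnitude
features = real_cepstrum(mag)
```

Working on the magnitude rather than the individual components makes the representation insensitive to sensor orientation, which is one motivation given above.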
Description of the IV + V System Software Package.
ERIC Educational Resources Information Center
Microcomputers for Information Management: An International Journal for Library and Information Services, 1984
1984-01-01
Describes the IV + V System, a software package designed by the Institut fur Maschinelle Dokumentation for the United Nations General Information Programme and UNISIST to support automation of local information and documentation services. Principle program features and functions outlined include input/output, databank, text image, output, and…
Granier, S; Owen, P; Pill, R; Jacobson, L
1998-01-24
To describe the presentation of meningococcal disease in primary care; to explore how general practitioners process clinical and contextual information in children with meningococcal disease; and to describe how this information affects management. Qualitative analysis of semistructured interviews. General practices in South Glamorgan. 26 general practitioners who between January 1994 and December 1996 admitted 31 children (under 16 years of age) in whom meningococcal disease was diagnosed. Categories of clinical rules and techniques used by general practitioners in processing each case. 22 children had rashes; in 16 of them the rashes were non-blanching. When present, a haemorrhagic rash was the most important factor in the doctor's decision to admit a child. 22 children had clinical features not normally expected in children with acute self-limiting illnesses--for example, lethargy, poor eye contact, altered mental states, pallor with a high temperature, and an abnormal cry. Contextual information, such as knowledge of parents' consultation patterns and their normal degree of anxiety, played an important part in the management decisions in 15 cases. Use of penicillin was associated with the certainty of diagnosis and the presence and type of haemorrhagic rash. The key clinical feature of meningococcal disease--a haemorrhagic rash--was present in only half of the study children. The general practitioners specifically hunted for the rash in some ill children, but doctors should not be deterred from diagnosing meningococcal disease and starting antibiotic treatment if the child is otherwise well, if the rash has an unusual or scanty distribution, or if the rash is non-haemorrhagic.
Thalamic neuron models encode stimulus information by burst-size modulation
Elijah, Daniel H.; Samengo, Inés; Montemurro, Marcelo A.
2015-01-01
Thalamic neurons have been long assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information of the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes to instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons. 
PMID:26441623
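The reverse-correlation analysis mentioned above can be sketched as a burst-triggered average: collect the stimulus window preceding each burst onset and average, computing one such average per burst size (n). The stimulus and burst times below are invented for illustration.

```python
import numpy as np

def burst_triggered_average(stimulus, burst_onsets, window=100):
    """Average stimulus segment preceding each burst onset.
    Computing this separately for each n-spike burst size reveals
    the feature encoded by burst-size modulation."""
    segments = [stimulus[t - window:t]
                for t in burst_onsets if t >= window]
    return np.mean(segments, axis=0)

rng = np.random.default_rng(0)
stim = rng.standard_normal(1000)          # time-dependent input current
bta = burst_triggered_average(stim, [150, 400, 720])
```

A window of roughly 100 samples before onset mirrors the ~100 ms pre-onset epoch that the study reports as informative.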
Non-rigid ultrasound image registration using generalized relaxation labeling process
NASA Astrophysics Data System (ADS)
Lee, Jong-Ha; Seong, Yeong Kyeong; Park, MoonHo; Woo, Kyoung-Gu; Ku, Jeonghun; Park, Hee-Jun
2013-03-01
This research proposes a novel non-rigid registration method for ultrasound images. The most predominant anatomical features in medical images are tissue boundaries, which appear as edges. In ultrasound images, however, other features can be identified as well due to the specular reflections that appear as bright lines superimposed on the ideal edge location. In this work, an image's local phase information (via the frequency domain) is used to find the ideal edge location. The generalized relaxation labeling process is then formulated to align the feature points extracted from the ideal edge location. In this work, the original relaxation labeling method was generalized by taking n compatibility coefficient values to improve non-rigid registration performance. This contextual information combined with a relaxation labeling process is used to search for a correspondence. Then the transformation is calculated by the thin plate spline (TPS) model. These two processes are iterated until the optimal correspondence and transformation are found. We have tested our proposed method and the state-of-the-art algorithms with synthetic data and bladder ultrasound images of in vivo human subjects. Experiments show that the proposed method improves registration performance significantly, as compared to other state-of-the-art non-rigid registration algorithms.
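The thin plate spline (TPS) step of the pipeline above can be sketched directly: fit the standard TPS linear system on matched control points, then warp arbitrary points. The control points below are hypothetical; in the paper they would come from the relaxation-labeling correspondence.

```python
import numpy as np

def tps_warp(src, dst):
    """Fit a 2-D thin plate spline mapping src control points onto dst;
    returns a function that warps arbitrary points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)

    def U(r):
        # TPS radial basis r^2 log r (0 at r = 0)
        return np.where(r > 0, r ** 2 * np.log(r + 1e-20), 0.0)

    d = np.linalg.norm(src[:, None] - src[None, :], axis=2)
    K = U(d)
    P = np.hstack([np.ones((n, 1)), src])
    L = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    Y = np.vstack([dst, np.zeros((3, 2))])
    params = np.linalg.solve(L, Y)
    w, a = params[:n], params[n:]

    def warp(pts):
        pts = np.asarray(pts, float)
        d = np.linalg.norm(pts[:, None] - src[None, :], axis=2)
        return U(d) @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

    return warp

src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(0, 0), (1, 0.1), (0.1, 1), (1.1, 1.1)]
warp = tps_warp(src, dst)
```

By construction the fitted spline interpolates the control points exactly while minimizing bending energy elsewhere, which is why TPS is a common choice for non-rigid medical image registration.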
Jalem, Randy; Nakayama, Masanobu; Noda, Yusuke; Le, Tam; Takeuchi, Ichiro; Tateyama, Yoshitaka; Yamazaki, Hisatsugu
2018-01-01
Increasing attention has been paid to materials informatics approaches that promise efficient and fast discovery and optimization of functional inorganic materials. Technical breakthroughs are urgently needed to advance this field, and efforts have been made in the development of materials descriptors to encode or represent characteristics of crystalline solids, such as chemical composition, crystal structure, electronic structure, etc. We propose a general representation scheme for crystalline solids that lifts restrictions on atom ordering, cell periodicity, and system cell size, based on structural descriptors of directly binned Voronoi-tessellation real feature values and atomic/chemical descriptors based on the electronegativity of elements in the crystal. Comparison was made vs. the radial distribution function (RDF) feature vector, in terms of predictive accuracy on density functional theory (DFT) material properties: cohesive energy (CE), density (d), electronic band gap (BG), and decomposition energy (Ed). It was confirmed that the proposed feature vector from Voronoi real-value binning generally outperforms the RDF-based one for the prediction of the aforementioned properties. Together with electronegativity-based features, Voronoi-tessellation features from a given crystal structure that are derived from second-nearest-neighbor information contribute significantly towards prediction. PMID:29707064
Matsukura, Michi; Vecera, Shaun P
2011-02-01
Attention selects objects as well as locations. When attention selects an object's features, observers identify two features from a single object more accurately than two features from two different objects (object-based effect of attention; e.g., Duncan, Journal of Experimental Psychology: General, 113, 501-517, 1984). Several studies have demonstrated that object-based attention can operate at a late visual processing stage that is independent of objects' spatial information (Awh, Dhaliwal, Christensen, & Matsukura, Psychological Science, 12, 329-334, 2001; Matsukura & Vecera, Psychonomic Bulletin & Review, 16, 529-536, 2009; Vecera, Journal of Experimental Psychology: General, 126, 14-18, 1997; Vecera & Farah, Journal of Experimental Psychology: General, 123, 146-160, 1994). In the present study, we asked two questions regarding this late object-based selection mechanism. In Part I, we investigated how observers' foreknowledge of to-be-reported features allows attention to select objects, as opposed to individual features. Using a feature-report task, a significant object-based effect was observed when to-be-reported features were known in advance but not when this advance knowledge was absent. In Part II, we examined what drives attention to select objects rather than individual features in the absence of observers' foreknowledge of to-be-reported features. Results suggested that, when there was no opportunity for observers to direct their attention to objects that possess to-be-reported features at the time of stimulus presentation, these stimuli must retain strong perceptual cues to establish themselves as separate objects.
The rules of information aggregation and emergence of collective intelligent behavior.
Bettencourt, Luís M A
2009-10-01
Information is a peculiar quantity. Unlike matter and energy, which are conserved by the laws of physics, the aggregation of knowledge from many sources can in fact produce more information (synergy) or less (redundancy) than the sum of its parts. This feature can endow groups with problem-solving strategies that are superior to those possible among noninteracting individuals and, in turn, may provide a selection drive toward collective cooperation and coordination. Here we explore the formal properties of information aggregation as a general principle for explaining features of social organization. We quantify information in terms of the general formalism of information theory, which also prescribes the rules of how different pieces of evidence inform the solution of a given problem. We then show how several canonical examples of collective cognition and coordination can be understood through principles of minimization of uncertainty (maximization of predictability) under information pooling over many individuals. We discuss in some detail how collective coordination in swarms, markets, natural language processing, and collaborative filtering may be guided by the optimal aggregation of information in social collectives. We also identify circumstances when these processes fail, leading, for example, to inefficient markets. The contrast to approaches to understand coordination and collaboration via decision and game theory is also briefly discussed. Copyright © 2009 Cognitive Science Society, Inc.
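The synergy point above can be made concrete with the canonical XOR example: each source alone carries zero information about the outcome, yet together the two sources determine it completely, so the aggregate exceeds the sum of its parts. A small sketch in information-theoretic terms:

```python
import numpy as np
from collections import Counter
from itertools import product

def entropy(probs):
    p = np.asarray(list(probs), float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Two fair binary sources and the XOR outcome, all inputs equiprobable.
p_joint = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product([0, 1], repeat=2)}

def mutual_info(idx):
    """I(X_idx ; Y) from the joint table; idx selects source(s)."""
    px, py, pxy = Counter(), Counter(), Counter()
    for (x1, x2, y), p in p_joint.items():
        xv = tuple((x1, x2)[i] for i in idx)
        px[xv] += p
        py[y] += p
        pxy[(xv, y)] += p
    return entropy(px.values()) + entropy(py.values()) - entropy(pxy.values())

i_1 = mutual_info((0,))      # 0 bits: source 1 alone says nothing about Y
i_2 = mutual_info((1,))      # 0 bits
i_12 = mutual_info((0, 1))   # 1 bit: pure synergy
```

Redundancy is the opposite case: if both sources were copies of Y, pooling them would add nothing beyond either alone.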
Mapping the Idea: A Notetaking System.
ERIC Educational Resources Information Center
Driskell, Jeanette
"Mapping" is a note-taking technique by which students use visual cues to isolate, emphasize, and group information meaningfully. Features of this technique include: organizing the note page laterally, so that general topics are on the left side and supportive information is on the right side; separating topics by horizontal lines;…
An ensemble method for extracting adverse drug events from social media.
Liu, Jing; Zhao, Songzheng; Zhang, Xiaodi
2016-06-01
Because adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media. We develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address high-dimensional features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter). When investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristics curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines. Our experimental results indicate that ADE extraction from social media can benefit from feature selection. 
With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance the ADE extraction capability. Kernel-based approaches, which can stay away from the feature sparsity issue, are qualified to address the ADE extraction problem. Combining different individual classifiers using suitable combination methods can further enhance the ADE extraction effectiveness. Copyright © 2016 Elsevier B.V. All rights reserved.
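The information-gain-based feature selection mentioned above can be sketched as IG(Y; X) = H(Y) - H(Y|X) per feature column, keeping the highest-scoring columns. The tiny binary feature matrix below is illustrative only, not the study's data.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y|X) for one discrete feature column."""
    h_cond = 0.0
    for v in np.unique(feature):
        mask = feature == v
        h_cond += mask.mean() * entropy(labels[mask])
    return entropy(labels) - h_cond

# Hypothetical binary features (rows: posts, cols: features) and ADE labels.
X = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
y = np.array([1, 1, 0, 0])
gains = [information_gain(X[:, j], y) for j in range(X.shape[1])]
top = np.argsort(gains)[::-1]   # keep the highest-gain features
```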
Block 2. Photograph represents general view taken from the north/west region of the May D & F Tower. Photograph shows the main public gathering space for Skyline Park and depicts a light feature and an Information sign - Skyline Park, 1500-1800 Arapaho Street, Denver, Denver County, CO
Brielmann, Aenne A; Bülthoff, Isabelle; Armann, Regine
2014-07-01
Race categorization of faces is a fast and automatic process and is known to affect further face processing profoundly and at earliest stages. Whether processing of own- and other-race faces might rely on different facial cues, as indicated by diverging viewing behavior, is much under debate. We therefore aimed to investigate two open questions in our study: (1) Do observers consider information from distinct facial features informative for race categorization or do they prefer to gain global face information by fixating the geometrical center of the face? (2) Does the fixation pattern, or, if facial features are considered relevant, do these features differ between own- and other-race faces? We used eye tracking to test where European observers look when viewing Asian and Caucasian faces in a race categorization task. Importantly, in order to disentangle centrally located fixations from those towards individual facial features, we presented faces in frontal, half-profile and profile views. We found that observers showed no general bias towards looking at the geometrical center of faces, but rather directed their first fixations towards distinct facial features, regardless of face race. However, participants looked at the eyes more often in Caucasian faces than in Asian faces, and there were significantly more fixations to the nose for Asian compared to Caucasian faces. Thus, observers rely on information from distinct facial features rather than facial information gained by centrally fixating the face. To what extent specific features are looked at is determined by the face's race. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Man's role in integrated control and information management systems
NASA Technical Reports Server (NTRS)
Nevins, J. L.; Johnson, I. S.
1972-01-01
Display control considerations associated with avionics techniques are discussed. General purpose displays and a prototype interactive display/command design featuring a pushplate CRT overlay for command input are considered.
[A research in speech endpoint detection based on boxes-coupling generalization dimension].
Wang, Zimei; Yang, Cuirong; Wu, Wei; Fan, Yingle
2008-06-01
In this paper, a new method of calculating the generalized dimension, based on a boxes-coupling principle, is proposed to overcome edge effects and to improve speech endpoint detection based on the original calculation of the generalized dimension. The new method has been applied to speech endpoint detection. Firstly, the length of the overlapping border was determined, and by calculating the generalized dimension with overlapped boxes covering the speech signal, three-dimensional feature vectors comprising the box dimension, the information dimension, and the correlation dimension were obtained. Secondly, given the relation between feature distance and similarity degree, feature extraction was conducted using a common distance. Lastly, a bi-threshold method was used to classify the speech signals. The experimental results indicated that, compared with the original generalized dimension (OGD) and the spectral entropy (SE) algorithm, the proposed method is more robust and effective for detecting speech signals contaminated by different kinds of noise at different signal-to-noise ratios (SNRs), especially at low SNR.
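A sketch of the box-counting ingredient underlying the generalized dimension: cover the normalized signal graph with boxes at several scales, count occupied boxes, and take the slope of the log-log fit. This illustrates the plain box dimension only; the information and correlation dimensions weight the counts by occupancy probabilities, and the paper's boxes-coupling variant would additionally overlap adjacent boxes.

```python
import numpy as np

def box_dimension(signal, scales=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a 1-D signal graph:
    N(eps) ~ eps^(-D), so D is the slope of log N vs. log(1/eps)."""
    x = (signal - signal.min()) / (np.ptp(signal) + 1e-12)
    n = len(x)
    counts = []
    for s in scales:
        eps = s / n                       # box side in the unit square
        boxes = 0
        for c in range(int(np.ceil(n / s))):
            seg = x[c * s:(c + 1) * s]
            # vertical boxes needed to cover this column's range
            boxes += int(seg.max() // eps - seg.min() // eps) + 1
        counts.append(boxes)
    slope, _ = np.polyfit(np.log(n / np.asarray(scales, float)),
                          np.log(counts), 1)
    return slope

line = np.linspace(0.0, 1.0, 256)
dim = box_dimension(line)
```

For a straight line the estimate is close to 1, as expected; noisy speech frames yield larger values, which is what makes the dimension a useful detection feature.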
Construct Abstraction for Automatic Information Abstraction from Digital Images
2006-05-30
…objects and features and the names of objects and features. For example, in Figure 15 the parts of the fish could be named (the 'mouth', …). Example object labels include fish-1, fish-2, fish-3, tennis shoe, and tennis racquet. …levels of abstraction and generality. For example, an algorithm might usefully find a polygon (blob) in an image and calculate numbers such as the…
Dehzangi, Abdollah; Paliwal, Kuldip; Sharma, Alok; Dehzangi, Omid; Sattar, Abdul
2013-01-01
Better understanding of structural class of a given protein reveals important information about its overall folding type and its domain. It can also be directly used to provide critical information on general tertiary structure of a protein which has a profound impact on protein function determination and drug design. Despite tremendous enhancements made by pattern recognition-based approaches to solve this problem, it still remains as an unsolved issue for bioinformatics that demands more attention and exploration. In this study, we propose a novel feature extraction model that incorporates physicochemical and evolutionary-based information simultaneously. We also propose overlapped segmented distribution and autocorrelation-based feature extraction methods to provide more local and global discriminatory information. The proposed feature extraction methods are explored for 15 most promising attributes that are selected from a wide range of physicochemical-based attributes. Finally, by applying an ensemble of different classifiers namely, Adaboost.M1, LogitBoost, naive Bayes, multilayer perceptron (MLP), and support vector machine (SVM) we show enhancement of the protein structural class prediction accuracy for four popular benchmarks.
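The overlapped segmented distribution idea can be sketched by summarizing a per-residue physicochemical property over overlapping windows. The window arithmetic and hydrophobicity values below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def overlapped_segment_features(values, n_segments=4, overlap=0.5):
    """Summary statistics of a per-residue property over overlapping
    segments, capturing local (per-segment) composition."""
    values = np.asarray(values, float)
    # segment length such that n_segments windows with the given
    # fractional overlap tile the whole sequence
    seg_len = int(np.ceil(len(values) / (n_segments * (1 - overlap) + overlap)))
    step = max(1, int(seg_len * (1 - overlap)))
    feats = []
    start = 0
    for _ in range(n_segments):
        seg = values[start:start + seg_len]
        feats.extend([seg.mean(), seg.std()])
        start += step
    return np.array(feats)

# Hypothetical hydrophobicity profile along a short protein sequence.
hydro = np.linspace(-1.0, 1.0, 40)
f = overlapped_segment_features(hydro)
```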
Feature Mining and Health Assessment for Gearboxes Using Run-Up/Coast-Down Signals
Zhao, Ming; Lin, Jing; Miao, Yonghao; Xu, Xiaoqiang
2016-01-01
Vibration signals measured in the run-up/coast-down (R/C) processes usually carry rich information about the health status of machinery. However, a major challenge in R/C signals analysis lies in how to exploit more diagnostic information, and how this information could be properly integrated to achieve a more reliable maintenance decision. Aiming at this problem, a framework of R/C signals analysis is presented for the health assessment of gearbox. In the proposed methodology, we first investigate the data preprocessing and feature selection issues for R/C signals. Based on that, a sparsity-guided feature enhancement scheme is then proposed to extract the weak phase jitter associated with gear defect. In order for an effective feature mining and integration under R/C, a generalized phase demodulation technique is further established to reveal the evolution of modulation feature with operating speed and rotation angle. The experimental results indicate that the proposed methodology could not only detect the presence of gear damage, but also offer a novel insight into the dynamic behavior of gearbox. PMID:27827831
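A minimal sketch of the phase-demodulation idea: form the analytic signal (an FFT-based Hilbert transform), unwrap its phase, and subtract the nominal mesh-frequency ramp; what remains is the phase jitter. The sampling rate, mesh frequency, and test signal are assumptions for illustration, and a real R/C analysis would additionally track the time-varying shaft speed.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent to a Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(X * h)

def phase_jitter(x, fs, f_mesh):
    """Residual phase after removing the nominal mesh-frequency ramp;
    localized deviations can indicate a gear defect."""
    t = np.arange(len(x)) / fs
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return phase - 2 * np.pi * f_mesh * t

fs, f_mesh = 1000.0, 50.0
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * f_mesh * t + 0.1 * np.sin(2 * np.pi * 5 * t))
residual = phase_jitter(x, fs, f_mesh)   # recovers the 5 Hz modulation
```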
ERIC Educational Resources Information Center
Marcum, Deanna; Boss, Richard
1982-01-01
Discusses four automated serials control systems which have been installed by at least six general libraries: OCLC's Serials Control Subsystem, Faxon's LINX, Ebsco's EBSCONET, and CLASS' CHECKMATE. Features of each system, accessibility, and costs are noted. (EJS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rintoul, Mark Daniel; Wilson, Andrew T.; Valicka, Christopher G.
We want to organize a body of trajectories in order to identify, search for, classify and predict behavior among objects such as aircraft and ships. Existing comparison functions such as the Fréchet distance are computationally expensive and yield counterintuitive results in some cases. We propose an approach using feature vectors whose components represent succinctly the salient information in trajectories. These features incorporate basic information such as total distance traveled and distance between start/stop points as well as geometric features related to the properties of the convex hull, trajectory curvature and general distance geometry. Additionally, these features can generally be mapped easily to behaviors of interest to humans that are searching large databases. Most of these geometric features are invariant under rigid transformation. We demonstrate the use of different subsets of these features to identify trajectories similar to an exemplar, cluster a database of several hundred thousand trajectories, predict destination and apply unsupervised machine learning algorithms.
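A sketch of the kind of feature vector described above, using two of the basic components and their ratio (the names are illustrative):

```python
import numpy as np

def trajectory_features(points):
    """Compact, rigid-motion-invariant trajectory descriptors:
    total path length, start-to-end distance, and their ratio
    ('straightness')."""
    p = np.asarray(points, float)
    steps = np.linalg.norm(np.diff(p, axis=0), axis=1)
    total = steps.sum()
    endpoint = np.linalg.norm(p[-1] - p[0])
    return {"total_distance": total,
            "start_end_distance": endpoint,
            "straightness": endpoint / total if total > 0 else 1.0}

feats = trajectory_features([(0, 0), (1, 0), (1, 1), (2, 1)])
```

Both distances, and hence their ratio, are invariant under rotation and translation of the trajectory, which is what allows similar behaviors to cluster together regardless of where they occurred.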
The Relationship in Biology between the Nature of Science and Scientific Inquiry
ERIC Educational Resources Information Center
Kremer, Kerstin; Specht, Christiane; Urhahne, Detlef; Mayer, Jürgen
2014-01-01
Informed understandings of nature of science and scientific inquiry are generally accepted goals of biology education. This article points out central features of scientific inquiry with relation to biology and the nature of science in general terms and focuses on the relationship of students' inquiry skills in biology and their beliefs on the…
Kumar, Shiu; Sharma, Alok; Tsunoda, Tatsuhiko
2017-12-28
Common spatial pattern (CSP) has been an effective technique for feature extraction in electroencephalography (EEG) based brain computer interfaces (BCIs). However, motor imagery EEG signal feature extraction using CSP generally depends on the selection of the frequency bands to a great extent. In this study, we propose a mutual information based frequency band selection approach. The idea of the proposed method is to utilize the information from all the available channels for effectively selecting the most discriminative filter banks. CSP features are extracted from multiple overlapping sub-bands. An additional sub-band has been introduced that cover the wide frequency band (7-30 Hz) and two different types of features are extracted using CSP and common spatio-spectral pattern techniques, respectively. Mutual information is then computed from the extracted features of each of these bands and the top filter banks are selected for further processing. Linear discriminant analysis is applied to the features extracted from each of the filter banks. The scores are fused together, and classification is done using support vector machine. The proposed method is evaluated using BCI Competition III dataset IVa, BCI Competition IV dataset I and BCI Competition IV dataset IIb, and it outperformed all other competing methods achieving the lowest misclassification rate and the highest kappa coefficient on all three datasets. Introducing a wide sub-band and using mutual information for selecting the most discriminative sub-bands, the proposed method shows improvement in motor imagery EEG signal classification.
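The CSP step that the band-selection method builds on can be sketched as whitening followed by an eigendecomposition of one class covariance: filters from the two ends of the eigenvalue spectrum maximize variance for one class while minimizing it for the other. The toy two-channel covariances below are illustrative.

```python
import numpy as np

def csp_filters(cov_a, cov_b, n_pairs=1):
    """Common spatial patterns: rows of the returned matrix are
    spatial filters with extreme class-A variance ratios."""
    comp = cov_a + cov_b
    evals, evecs = np.linalg.eigh(comp)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    s_a = whiten @ cov_a @ whiten.T
    evals_a, evecs_a = np.linalg.eigh(s_a)
    w = evecs_a.T @ whiten
    # take filters from both ends of the eigenvalue spectrum
    idx = np.r_[np.argsort(evals_a)[:n_pairs],
                np.argsort(evals_a)[::-1][:n_pairs]]
    return w[idx]

# Hypothetical 2-channel covariances with opposite variance structure.
cov_a = np.array([[4.0, 0.0], [0.0, 1.0]])
cov_b = np.array([[1.0, 0.0], [0.0, 4.0]])
W = csp_filters(cov_a, cov_b)
```

In a filter-bank setting like the one above, one such filter set is fitted per sub-band and the log-variances of the filtered signals serve as features.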
Evaluation of features to support safety and quality in general practice clinical software
2011-01-01
Background Electronic prescribing is now the norm in many countries. We wished to find out if clinical software systems used by general practitioners in Australia include features (functional capabilities and other characteristics) that facilitate improved patient safety and care, with a focus on quality use of medicines. Methods Seven clinical software systems used in general practice were evaluated. Fifty software features that were previously rated as likely to have a high impact on safety and/or quality of care in general practice were tested and are reported here. Results The range of results for the implementation of 50 features across the 7 clinical software systems was as follows: 17-31 features (34-62%) were fully implemented, 9-13 (18-26%) partially implemented, and 9-20 (18-40%) not implemented. Key findings included: Access to evidence based drug and therapeutic information was limited. Decision support for prescribing was available but varied markedly between systems. During prescribing there was potential for medicine mis-selection in some systems, and linking a medicine with its indication was optional. The definition of 'current medicines' versus 'past medicines' was not always clear. There were limited resources for patients, and some medicines lists for patients were suboptimal. Results were provided to the software vendors, who were keen to improve their systems. Conclusions The clinical systems tested lack some of the features expected to support patient safety and quality of care. Standards and certification for clinical software would ensure that safety features are present and that there is a minimum level of clinical functionality that clinicians could expect to find in any system.
Generic emergence of classical features in quantum Darwinism.
Brandão, Fernando G S L; Piani, Marco; Horodecki, Paweł
2015-08-12
Quantum Darwinism posits that only specific information about a quantum system that is redundantly proliferated to many parts of its environment becomes accessible and objective, leading to the emergence of classical reality. However, it is not clear under what conditions this mechanism holds true. Here we prove that the emergence of classical features along the lines of quantum Darwinism is a general feature of any quantum dynamics: observers who acquire information indirectly through the environment have effective access at most to classical information about one and the same measurement of the quantum system. Our analysis does not rely on a strict conceptual splitting between a system-of-interest and its environment, and allows one to interpret any system as part of the environment of any other system. Finally, our approach leads to a full operational characterization of quantum discord in terms of local redistribution of correlations.
e-IQ and IQ knowledge mining for generalized LDA
NASA Astrophysics Data System (ADS)
Jenkins, Jeffrey; van Bergem, Rutger; Sweet, Charles; Vietsch, Eveline; Szu, Harold
2015-05-01
How can the human brain uncover patterns, associations and features in real-time, real-world data? There must be a general strategy for transforming raw signals into useful features, but a representation of this generalization within our information-extraction tool set is lacking. In contrast to Big Data (BD), Large Data Analysis (LDA) has become a reachable multi-disciplinary goal in recent years due in part to high-performance computers and algorithm development, as well as the availability of large data sets. However, the experience of the Machine Learning (ML) and information communities has not been generalized into an intuitive framework that is useful to researchers across disciplines. The data exploration phase of data mining is a prime example of this unspoken, ad hoc nature of ML: the computer scientist works with a Subject Matter Expert (SME) to understand the data and then builds tools (e.g., classifiers) that can benefit the SME and the rest of the researchers in that field. We ask: why is there no tool to represent information in a way that is meaningful to the researcher asking the question? Meaning is subjective and contextual across disciplines, so to ensure robustness, we draw examples from several disciplines and propose a generalized LDA framework for independent data understanding of heterogeneous sources which contribute to Knowledge Discovery in Databases (KDD). We then explore the concept of adaptive information resolution through a 6W unsupervised learning methodology feedback system. In this paper, we describe the general process of man-machine interaction in terms of an asymmetric directed-graph theory (digging for embedded knowledge), and model the inverse machine-man feedback (digging for tacit knowledge) as an ANN unsupervised learning methodology.
Finally, we propose a collective learning framework which utilizes a 6W semantic topology to organize heterogeneous knowledge and diffuse information to entities within a society in a personalized way.
Informations in Models of Evolutionary Dynamics
NASA Astrophysics Data System (ADS)
Rivoire, Olivier
2016-03-01
Biological organisms adapt to changes by processing information from different sources, most notably from their ancestors and from their environment. We review an approach to quantifying this information by analyzing mathematical models of evolutionary dynamics and show how explicit results are obtained for a solvable subclass of these models. In several limits, the results coincide with those obtained in studies of information processing for communication, gambling, or thermodynamics. In the most general case, however, information processing by biological populations shows unique features that motivate the analysis of specific models.
Incremental learning of tasks from user demonstrations, past experiences, and vocal comments.
Pardowitz, Michael; Knoop, Steffen; Dillmann, Ruediger; Zöllner, Raoul D
2007-04-01
For many years the robotics community has envisioned robot assistants sharing the same environment with humans. It became obvious that they have to interact with humans and should adapt to individual user needs. In particular, the high variety of tasks robot assistants will face requires a highly adaptive and user-friendly programming interface. One possible solution to this programming problem is the learning-by-demonstration paradigm, where the robot is supposed to observe the execution of a task, acquire task knowledge, and reproduce it. In this paper, a system to record, interpret, and reason over demonstrations of household tasks is presented. The focus is on the model-based representation of manipulation tasks, which serves as a basis for incremental reasoning over the acquired task knowledge. The aim of the reasoning is to condense and interconnect the data, resulting in more general task knowledge. A measure for assessing the information content of task features is introduced. This measure of the relevance of certain features relies both on general background knowledge and on task-specific knowledge gathered from the user demonstrations. Besides this autonomous estimation of feature relevance, speech comments made during execution that point out the relevance of features are considered as well. The incremental growth of the task knowledge as more task demonstrations become available, and its fusion with relevance information gained from speech comments, is demonstrated within the task of laying a table.
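The paper's information-content measure for task features is not detailed in this abstract; as a rough illustrative stand-in, one can score a feature by how consistently its value recurs across demonstrations (low empirical entropy of the observed values means high relevance). All names and data below are invented:

```python
import math
from collections import Counter

def feature_relevance(demonstrations, feature):
    """Toy relevance score for a task feature: values that stay nearly
    constant across demonstrations (low empirical entropy) score near 1."""
    values = [demo[feature] for demo in demonstrations]
    n = len(values)
    entropy = -sum((c / n) * math.log2(c / n) for c in Counter(values).values())
    return 1.0 / (1.0 + entropy)

# Three demonstrations of "laying a table": the plate's zone is consistent,
# the approach direction varies, so the zone is scored as the more relevant feature.
demos = [
    {"plate_zone": "center", "approach": "left"},
    {"plate_zone": "center", "approach": "right"},
    {"plate_zone": "center", "approach": "front"},
]
assert feature_relevance(demos, "plate_zone") > feature_relevance(demos, "approach")
```

In the paper this kind of autonomous score is further fused with relevance cues from the user's vocal comments; that fusion is not modeled here.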
National Profiles in Technical and Vocational Education in Asia and the Pacific: Fiji.
ERIC Educational Resources Information Center
United Nations Educational, Scientific and Cultural Organization, Bangkok (Thailand). Principal Regional Office for Asia and the Pacific.
This technical and vocational education (TVE) profile on Fiji is one in a series of profiles of UNESCO member countries. It is intended to be a handy reference on TVE systems, staff development, technical cooperation, and information networking. Part I, General Information, covers the following: location, area, and physical features; economic and…
Use of fuzzy sets in modeling of GIS objects
NASA Astrophysics Data System (ADS)
Mironova, Yu N.
2018-05-01
The paper discusses modeling and methods of data visualization in geographic information systems. Information processing in geoinformatics is based on the use of models; geoinformation modeling is therefore a key link in the chain of geodata processing. When solving problems with geographic information systems, it is often necessary to enter approximate or insufficiently reliable information about map features into the GIS database. Heterogeneous data of different origin and accuracy have some degree of uncertainty. In addition, not all information is precise: already during the initial measurements, poorly defined terms and attributes (e.g., "soil, well-drained") are used. Methods are therefore needed for working with uncertain requirements, classes, and boundaries. The author proposes using fuzzy sets for spatial information. In terms of its characteristic function, a fuzzy set is a natural generalization of an ordinary set, obtained by rejecting the binary nature of this function and allowing it to take any value in the interval [0, 1].
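The generalization from a characteristic function to a membership function can be shown directly. The drainage-rate attribute echoes the "soil, well-drained" example above, but the numbers and the linear-ramp membership shape are illustrative choices, not from the paper:

```python
def crisp_membership(drainage_rate, threshold=10.0):
    """Classical (crisp) set: the characteristic function is binary, 0 or 1."""
    return 1.0 if drainage_rate >= threshold else 0.0

def fuzzy_membership(drainage_rate, low=5.0, high=15.0):
    """Fuzzy set "well-drained soil": membership varies continuously in [0, 1].
    Linear ramp between `low` (definitely not a member) and `high`
    (definitely a member)."""
    if drainage_rate <= low:
        return 0.0
    if drainage_rate >= high:
        return 1.0
    return (drainage_rate - low) / (high - low)
```

A soil with a drainage rate of 10.0 is either in or out of the crisp set, but holds membership grade 0.5 in the fuzzy set, which is exactly the kind of graded uncertainty a GIS attribute like "well-drained" carries.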
Williams, Pamela A; O'Donoghue, Amie C; Sullivan, Helen W; Willoughby, Jessica Fitts; Squire, Claudia; Parvanta, Sarah; Betts, Kevin R
2016-04-01
Drug efficacy can be measured by composite scores, which consist of two or more symptoms or other clinical components of a disease. We evaluated how individuals interpret composite scores in direct-to-consumer (DTC) prescription drug advertising. We conducted an experimental study of seasonal allergy sufferers (n=1967) who viewed a fictitious print DTC ad that varied by the type of information featured (general indication, list of symptoms, or definition of composite scores) and the presence or absence of an educational intervention about composite scores. We measured composite score recognition and comprehension, and perceived drug efficacy and risk. Ads that featured either (1) the composite score definition alone or (2) the list of symptoms or general indication information along with the educational intervention improved composite score comprehension. Ads that included the composite score definition or the educational intervention led to lower confidence in the drug's benefits. The composite score definition improved composite score recognition and lowered drug risk perceptions. Adding composite score information to DTC print ads may improve individuals' comprehension of composite scores and affect their perceptions of the drug. Providing composite score information may lead to more informed patient-provider prescription drug decisions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Commercial products that convey personal health information in emergencies.
Potini, Vishnu C; Weerasuriya, Dilani N; Lowery-North, Douglas W; Kellermann, Arthur L
2011-12-01
Describe commercially available products and services designed to convey personal health information in emergencies. The search engine Google®, supplemented by print ads, was used to identify companies and organizations that offer relevant products and services to the general market. Disease-specific, health system, and health plan-specific offerings were excluded. Vendor web sites were the primary sources of information, supplemented by telephone and e-mail queries to sales representatives. Perfect inter-rater agreement was achieved. Thirty-nine unique vendors were identified. Eight sell engraved jewelry. Three offer an embossed card or pamphlet. Twelve supply USB drives with various features. Eleven support password-protected web sites. Five maintain national call centers. Available media differed markedly with respect to capacity and accessibility. Quoted prices ranged from a one-time expenditure of $3.50 to an annual fee of $200. Associated features and annual fees varied widely. A wide range of products and services exist to help patients convey personal health information. Health care providers should be familiar with their features, so they can access the information in a disaster or emergency.
Science Syllabus for Middle and Junior High Schools. Block D, The Earth's Changing Surface.
ERIC Educational Resources Information Center
New York State Education Dept., Albany. Bureau of General Education Curriculum Development.
This syllabus begins with a list of program objectives and performance criteria for the study of three general topic areas in earth science and a list of 22 science processes. Following this information is a listing of concepts and understandings for subtopics within the general topic areas: (1) the earth's surface--surface features, rock…
Li, Yanpeng; Hu, Xiaohua; Lin, Hongfei; Yang, Zhihao
2011-01-01
Feature representation is essential to machine learning and text mining. In this paper, we present a feature coupling generalization (FCG) framework for generating new features from unlabeled data. It selects two special types of features, i.e., example-distinguishing features (EDFs) and class-distinguishing features (CDFs), from the original feature set, and then generalizes EDFs into higher-level features based on their coupling degrees with CDFs in unlabeled data. The advantage is that EDFs with extreme sparsity in labeled data can be enriched by their co-occurrences with CDFs in unlabeled data, so that the performance of these low-frequency features can be greatly boosted and new information from unlabeled data can be incorporated. We apply this approach to three tasks in biomedical literature mining: gene named entity recognition (NER), protein-protein interaction extraction (PPIE), and text classification (TC) for gene ontology (GO) annotation. New features are generated from over 20 GB of unlabeled PubMed abstracts. The experimental results on BioCreative 2, the AIMED corpus, and the TREC 2005 Genomics Track show that 1) FCG can make good use of the sparse features ignored by supervised learning; 2) it improves the performance of supervised baselines by 7.8 percent, 5.0 percent, and 5.8 percent, respectively, in the three tasks; and 3) our methods achieve F-scores of 89.1 and 64.5 and a normalized utility of 60.1 on the three benchmark data sets.
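A toy version of the coupling step, assuming documents tokenized into sets of strings; the actual FCG feature selection, weighting, and scale are not reproduced here, and the tokens are invented:

```python
from collections import defaultdict

def feature_coupling(documents, edfs, cdfs):
    """Generalize rare example-distinguishing features (EDFs) into
    higher-level features: each EDF is re-represented by its normalized
    co-occurrence rates with class-distinguishing features (CDFs),
    measured over unlabeled documents (each document is a set of tokens)."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for doc in documents:
        for e in edfs & doc:
            totals[e] += 1
            for c in cdfs & doc:
                counts[e][c] += 1
    return {e: {c: counts[e][c] / totals[e] for c in cdfs} for e in totals}

# Unlabeled corpus: the rare token "p53" keeps co-occurring with a gene-context cue,
# so its higher-level feature vector leans toward the "gene" CDF.
docs = [{"p53", "gene", "expression"}, {"p53", "gene"}, {"kinase", "protein"}]
coupled = feature_coupling(docs, edfs={"p53"}, cdfs={"gene", "protein"})
```

The point of the construction is visible even at this scale: "p53" may appear only once in labeled data, yet its coupling profile, learned from unlabeled text, is dense and reusable by a supervised learner.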
Chemical environments of submarine hydrothermal systems. [supporting abiogenetic theory
NASA Technical Reports Server (NTRS)
Shock, Everett L.
1992-01-01
The paper synthesizes diverse information about the inorganic geochemistry of submarine hydrothermal systems, provides a description of the fundamental physical and chemical properties of these systems, and examines the implications of high-temperature, fluid-driven processes for organic synthesis. Emphasis is on a few general features, i.e., pressure, temperature, oxidation states, fluid composition, and mineral alteration, because these features will control whether organic synthesis can occur in hydrothermal systems.
GP preferences for information systems: conjoint analysis of speed, reliability, access and users.
Wyatt, Jeremy C; Batley, Richard P; Keen, Justin
2010-10-01
To elicit the preferences and trade-offs of UK general practitioners about key features of health information systems, to help inform the design of such systems in future. A stated choice study to uncover implicit preferences based on a binary choice between scenarios presented in random order. Participants were all 303 general practitioner members of the UK Internet service provider Medix, who were approached by email to participate. The main outcome measure was the number of seconds of delay in system response that general practitioners were willing to trade off for each key system feature: the reliability of the system, the sites from which the system could be accessed, and which staff are able to view patient data. Doctors valued speed of response most in information systems but would be prepared to wait 28 seconds to access a system in exchange for improved reliability from 95% to 99%, a further 2 seconds for an improvement to 99.9%, and 27 seconds for access to data from anywhere, including their own home, compared with one place in a single health care premises. However, they would require a system that was 14 seconds faster to compensate for allowing social care as well as National Health Service staff to read patient data. These results provide important new evidence about which system characteristics doctors value highly, and hence which characteristics designers need to focus on when large-scale health information systems are planned. © 2010 Blackwell Publishing Ltd.
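The seconds-of-delay trade-offs follow from a linear stated-choice utility model: the delay a respondent will accept for a feature equals the ratio of the feature's utility coefficient to the per-second delay coefficient. The coefficients below are hypothetical, chosen only to reproduce the 28-second reliability figure reported above:

```python
def seconds_traded(beta_feature, beta_delay_per_second):
    """Delay (in seconds) a respondent would accept in exchange for a
    feature, under a linear utility U = beta_feature * has_feature
    + beta_delay_per_second * seconds. The delay coefficient must be
    negative, since delay is a disutility."""
    return -beta_feature / beta_delay_per_second

# Hypothetical coefficients: improving reliability from 95% to 99% is worth
# 1.4 utility units, and each second of response delay costs 0.05 units.
trade = seconds_traded(1.4, -0.05)  # about 28 seconds
```

A negative trade (as for sharing data with social care staff in the study) would mean the feature must be compensated by a faster system rather than traded against extra delay.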
Kim, Hae Young; Park, Ji Hoon; Lee, Yoon Jin; Lee, Sung Soo; Jeon, Jong-June; Lee, Kyoung Ho
2018-04-01
Purpose To perform a systematic review and meta-analysis to identify computed tomographic (CT) features for differentiating complicated appendicitis in patients suspected of having appendicitis and to summarize their diagnostic accuracy. Materials and Methods Studies on the diagnostic accuracy of CT features for differentiating complicated appendicitis (perforated or gangrenous appendicitis) in patients suspected of having appendicitis were searched in Ovid-MEDLINE, EMBASE, and the Cochrane Library. Overlapping descriptors used in different studies to denote the same image finding were subsumed under a single CT feature. Pooled diagnostic accuracy of the CT features was calculated by using a bivariate random effects model. CT features with pooled diagnostic odds ratios whose 95% confidence intervals did not include 1 were considered informative. Results Twenty-three studies were included, and 184 overlapping descriptors for various CT findings were subsumed under 14 features. Of these, 10 features were informative for complicated appendicitis. There was a general tendency for these features to show relatively high specificity but low sensitivity. Extraluminal appendicolith, abscess, appendiceal wall enhancement defect, extraluminal air, ileus, periappendiceal fluid collection, ascites, intraluminal air, and intraluminal appendicolith showed pooled specificity greater than 70% (range, 74%-100%), but sensitivity was limited (range, 14%-59%). Periappendiceal fat stranding was the only feature that showed high sensitivity (94%; 95% confidence interval: 86%, 98%) but low specificity (40%; 95% confidence interval: 23%, 60%). Conclusion Ten informative CT features for differentiating complicated appendicitis were identified in this study, nine of which showed high specificity but low sensitivity. © RSNA, 2017 Online supplemental material is available for this article.
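The pooling above uses a bivariate random-effects model, which is beyond a short sketch; the per-study building block, however, is the diagnostic odds ratio of a single 2x2 table with its standard log-normal 95% confidence interval. The counts below are made up for illustration:

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """Diagnostic odds ratio and 95% CI for one 2x2 table
    (true/false positives and negatives), using the usual log-normal
    approximation. A CT feature is 'informative' in the paper's sense
    when the CI does not include 1."""
    dor = (tp * tn) / (fp * fn)
    se_log = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    lo = math.exp(math.log(dor) - 1.96 * se_log)
    hi = math.exp(math.log(dor) + 1.96 * se_log)
    return dor, lo, hi

# Made-up counts for one feature in one study: the kind of
# high-specificity, limited-sensitivity pattern the review reports.
dor, lo, hi = diagnostic_odds_ratio(tp=30, fp=10, fn=20, tn=40)
informative = lo > 1.0  # CI excludes 1
```

Across studies, these log-DORs (or sensitivity/specificity pairs) are then combined while modeling between-study variation, which is what the bivariate random-effects model adds.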
Brooks, Kevin R; Kemp, Richard I
2007-01-01
Previous studies of face recognition and of face matching have shown a general improvement in the processing of internal features as a face becomes more familiar to the participant. In this study, we used a psychophysical two-alternative forced-choice paradigm to investigate thresholds for the detection of a displacement of the eyes, nose, mouth, or ears in familiar and unfamiliar faces. No clear division between internal and external features was observed. Rather, for familiar (compared to unfamiliar) faces, participants were more sensitive to displacements of internal features such as the eyes or the nose; yet, for our third internal feature, the mouth, no such difference was observed. Despite large displacements, many subjects were unable to perform above chance when stimuli involved shifts in the position of the ears. These results are consistent with the proposal that familiarity effects may be mediated by the construction of a robust representation of a face, although the involvement of attention in the encoding of face stimuli cannot be ruled out. Furthermore, these effects are mediated by information from a spatial configuration of features, rather than by purely feature-based information.
CROI 2018: Advances in Basic Science Understanding of HIV.
Stevenson, Mario
2018-05-01
The Conference on Retroviruses and Opportunistic Infections represents the most important venue for the dissemination of research advances in HIV and AIDS. The 25th conference, held in Boston, featured presentations that provided insight into the mechanisms of HIV-1 spread in tissues as well as new information on mechanisms of HIV-1 persistence in individuals on effective antiretroviral treatment. The ability of the conference to convey research findings to a general audience is enhanced, in large part, by preconference workshops. These workshops feature leading researchers who aim to present cutting-edge research to a general audience. These sessions rank highly in terms of educational and professional value.
Mutual information and spontaneous symmetry breaking
NASA Astrophysics Data System (ADS)
Hamma, A.; Giampaolo, S. M.; Illuminati, F.
2016-01-01
We show that the metastable, symmetry-breaking ground states of quantum many-body Hamiltonians have vanishing quantum mutual information between macroscopically separated regions and are thus the most classical ones among all possible quantum ground states. This statement is obvious only when the symmetry-breaking ground states are simple product states, e.g., at the factorization point. On the other hand, symmetry-breaking states are in general entangled along the entire ordered phase, and to show that they actually feature the least macroscopic correlations compared to their symmetric superpositions is highly nontrivial. We prove this result in general, by considering the quantum mutual information based on the Rényi-2 entanglement entropy and using a locality result stemming from quasi-adiabatic continuation. Moreover, in the paradigmatic case of the exactly solvable one-dimensional quantum XY model, we further verify the general result by considering also the quantum mutual information based on the von Neumann entanglement entropy.
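For reference, the quantum mutual information invoked here is the standard entropy combination; with the Rényi-2 entropy it reads (standard textbook definitions, not a result specific to this paper):

```latex
% Quantum mutual information between regions A and B
I(A\!:\!B) = S(A) + S(B) - S(AB),
% with the von Neumann entropy of the reduced state \rho_X
S(X) = -\operatorname{Tr}\,\rho_X \log \rho_X ,
% or, for the R\'enyi-2 variant used in the proof,
S_2(X) = -\log \operatorname{Tr}\,\rho_X^2 .
```

Vanishing $I(A\!:\!B)$ between macroscopically separated regions is what the abstract means by the symmetry-breaking ground states being "the most classical": neither classical nor quantum correlations survive at large separation.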
Discriminative Multi-View Interactive Image Re-Ranking.
Li, Jun; Xu, Chang; Yang, Wankou; Sun, Changyin; Tao, Dacheng
2017-07-01
Given unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose a discriminative multi-view interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users' intentions and multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in the multi-view learning scheme to exploit their complementarities. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding by the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with the other state-of-the-art re-ranking strategies.
Nuclear safety, Volume 38, Number 1, January--March 1997
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1997-03-01
This journal contains nine articles which fall under the following categories: (1) general safety considerations; (2) control and instrumentation; (3) design features; (4) environmental effects; (5) US Nuclear Regulatory Commission information and analyses; and (6) recent developments.
Dawn Mission to Vesta and Ceres Lithograph
2007-01-01
This artist's lithograph features general information, significant dates, and interesting facts on the back about asteroid Vesta and dwarf planet Ceres and is part of the Mission Art series from NASA's Dawn mission. http://photojournal.jpl.nasa.gov/catalog/PIA19370
Machine learning-based diagnosis of melanoma using macro images.
Gautam, Diwakar; Ahmed, Mushtaq; Meena, Yogesh Kumar; Ul Haq, Ahtesham
2018-05-01
Cancer poses a grave threat to human society. Melanoma, the skin cancer, originates in the skin layers and penetrates deep into the subcutaneous layers. Extensive research exists on melanoma diagnosis using dermatoscopic images captured with a dermatoscope. While designing diagnostic models for general handheld imaging systems is an emerging trend, this article proposes a computer-aided decision support system for macro images captured by a general-purpose camera. General imaging conditions are adversely affected by nonuniform illumination, which in turn affects the extraction of relevant information. To mitigate this, we process an image to define a smooth illumination surface using a multistage illumination compensation approach, and the infected region is extracted using the proposed multimode segmentation method. The lesion information is enumerated as a feature set comprising geometry, photometry, border series, and texture measures. Redundancy in the feature set is reduced using information-theory methods, and a classification boundary is modeled to distinguish benign and malignant samples using support vector machine, random forest, neural network, and fast discriminative mixed-membership-based naive Bayesian classifiers. Moreover, the experimental outcome is supported by hypothesis testing and boxplot representation of the classification losses. The simulation results prove the significance of the proposed model, which shows improved performance compared with competing methods. Copyright © 2017 John Wiley & Sons, Ltd.
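The paper reduces feature-set redundancy with information-theory methods; as a simplified stand-in, the sketch below greedily drops any feature whose absolute Pearson correlation with an already-kept feature is too high. The feature names, values, and the 0.95 threshold are invented for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def drop_redundant(columns, threshold=0.95):
    """Greedily keep feature columns whose absolute correlation with
    every already-kept column stays below `threshold`."""
    kept = []
    for name, col in columns:
        if all(abs(pearson(col, kcol)) < threshold for _, kcol in kept):
            kept.append((name, col))
    return [name for name, _ in kept]

# Toy lesion features over four samples: perimeter is a scaled copy of area,
# so it carries no extra information and is dropped.
features = [
    ("area",      [1.0, 2.0, 3.0, 4.0]),
    ("perimeter", [2.0, 4.0, 6.0, 8.0]),
    ("texture",   [0.3, 0.1, 0.4, 0.1]),
]
kept = drop_redundant(features)
```

The surviving columns would then feed the classifiers named above (SVM, random forest, and so on); mutual-information-based filters play the same role as this correlation filter but also catch nonlinear redundancy.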
Self-organizing neural integration of pose-motion features for human action recognition
Parisi, German I.; Weber, Cornelius; Wermter, Stefan
2015-01-01
The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
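The growth rule of a Growing When Required (GWR) network can be illustrated in one dimension: insert a new prototype when the best-matching unit's activity for the current input falls below a threshold, otherwise adapt the winner toward the input. This toy step omits the topology edges, habituation counters, and second-best units of the full model; the threshold and learning rate are illustrative:

```python
import math

def gwr_step(prototypes, x, activity_threshold=0.5, lr=0.1):
    """One simplified GWR update on scalar inputs: grow when the winner's
    activity exp(-distance) falls below the threshold, else adapt the winner."""
    d, i = min((abs(p - x), i) for i, p in enumerate(prototypes))
    if math.exp(-d) < activity_threshold:
        # Grow: insert a new node halfway between the winner and the input.
        prototypes.append((prototypes[i] + x) / 2.0)
    else:
        # Adapt: move the winner a fraction lr toward the input.
        prototypes[i] += lr * (x - prototypes[i])
    return prototypes

protos = [0.0]
gwr_step(protos, 5.0)  # far input: the network grows a node at 2.5
gwr_step(protos, 2.6)  # near input: the winner adapts instead of growing
```

This "grow only when required" behavior is what lets the networks in the paper match their topology to the input space instead of fixing the number of units in advance.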
Toward a clarification of the taxonomy of "bias" in epidemiology textbooks.
Schwartz, Sharon; Campbell, Ulka B; Gatto, Nicolle M; Gordon, Kirsha
2015-03-01
Epidemiology textbooks typically divide biases into 3 general categories: confounding, selection bias, and information bias. Despite the ubiquity of this categorization, authors often use these terms to mean different things. This hinders communication among epidemiologists and confuses students who are just learning about the field. To understand the sources of this problem, we reviewed current general epidemiology textbooks to examine how the authors defined and categorized biases. We found that much of the confusion arises from different definitions of "validity" and from a mixing of 3 overlapping organizational features in defining and differentiating among confounding, selection bias, and information bias: consequence, the result of the problem; cause, the processes that give rise to the problem; and cure, how these biases can be addressed once they occur. By contrast, a consistent taxonomy would provide (1) a clear and consistent definition of what unites confounding, selection bias, and information bias and (2) a clear articulation and consistent application of the feature that distinguishes these categories. Based on a distillation of these textbook discussions, we provide an example of a taxonomy that we think meets these criteria.
Computer-assisted engineering data base
NASA Technical Reports Server (NTRS)
Dube, R. P.; Johnson, H. R.
1983-01-01
General capabilities of data base management technology are described. Information requirements posed by the space station life cycle are discussed, and it is asserted that data base management technology supporting engineering/manufacturing in a heterogeneous hardware/data base management system environment should be applied to meeting these requirements. Today's commercial systems do not satisfy all of these requirements. The features of an R&D data base management system being developed to investigate data base management in the engineering/manufacturing environment are discussed. Features of this system represent only a partial solution to space station requirements. Areas where this system should be extended to meet full space station information management requirements are discussed.
Content-based image retrieval by matching hierarchical attributed region adjacency graphs
NASA Astrophysics Data System (ADS)
Fischer, Benedikt; Thies, Christian J.; Guld, Mark O.; Lehmann, Thomas M.
2004-05-01
Content-based image retrieval requires a formal description of visual information. In medical applications, all relevant biological objects have to be represented by this description. Although color as the primary feature has proven successful in publicly available retrieval systems of general purpose, this description is not applicable to most medical images. Additionally, it has been shown that global features characterizing the whole image do not lead to acceptable results in the medical context, or that they are only suitable for specific applications. For a general-purpose content-based comparison of medical images, local, i.e., regional, features that are collected on multiple scales must be used. A hierarchical attributed region adjacency graph (HARAG) provides such a representation and transfers image comparison to graph matching. However, building a HARAG from an image requires a restriction in size to be computationally feasible, while at the same time all visually plausible information must be preserved. For this purpose, mechanisms for the reduction of the graph size are presented. Even with a reduced graph, the problem of graph matching remains NP-complete. In this paper, the Similarity Flooding approach and Hopfield-style neural networks are adapted from the graph matching community to the needs of HARAG comparison. Based on synthetic image material built from simple geometric objects, all visually similar regions were matched accordingly, showing the framework's general applicability to content-based image retrieval of medical images.
Applications of the generalized information processing system (GIPSY)
Moody, D.W.; Kays, Olaf
1972-01-01
The Generalized Information Processing System (GIPSY) stores and retrieves variable-field, variable-length records consisting of numeric data, textual data, or codes. A particularly noteworthy feature of GIPSY is its ability to search records for words, word stems, prefixes, and suffixes as well as for numeric values. Moreover, retrieved records may be printed in predefined formats or formatted as fixed-field, fixed-length records for direct input to other programs, which facilitates the exchange of data with other systems. At present there are some 22 applications of GIPSY in the general areas of bibliography, natural resources information, and management science. This report presents a description of each application, including a sample input form, dictionary, and a typical formatted record. It is hoped that these examples will stimulate others to experiment with innovative uses of computer technology.
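The word, stem (prefix), and suffix searches described here can be illustrated with a modern sketch. This only mimics the matching idea; it is not the original GIPSY implementation, and the wildcard syntax is an assumption for illustration:

```python
import re

def match_field(value, term):
    """Match a record field against a search term, loosely mimicking
    GIPSY's word, word-stem (prefix), and suffix searches.
    'HYDRO*' matches words starting with HYDRO, '*OLOGY' matches words
    ending in OLOGY, and a bare term matches a whole word."""
    words = re.findall(r"[A-Za-z]+", value.upper())
    term = term.upper()
    if term.endswith("*"):                    # prefix / word-stem search
        return any(w.startswith(term[:-1]) for w in words)
    if term.startswith("*"):                  # suffix search
        return any(w.endswith(term[1:]) for w in words)
    return term in words                      # exact word search

# e.g. match_field("Hydrology of the Mojave Desert", "HYDRO*") is True
```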
The Role of the Popular Article in Astronomy Communication
NASA Astrophysics Data System (ADS)
Mahoney, T. J.
2005-12-01
Over recent decades there has been a proliferation of special-interest magazines dedicated to astronomy. In spite of the undoubted market for specialist feature articles on astronomy, such articles appeal to a restricted sector of the general public and rarely appear in the daily or weekly press. I argue here that, apart from television documentary programmes and series, the general public's main exposure to astronomy-related stories is in the form of news reports, which carry too much information in too condensed a form for the general reader or viewer to absorb. I propose that, apart from education, trade books and documentaries, the only way to engage the serious interest of the public in astronomy is through feature articles published in wide-circulation newspapers and magazines. I further propose a generalized model for science communication and distinguish between outreach (to the general public), midreach (to astrobuffs) and inreach (the raising of awareness of the importance of outreach among the research community). Much of what is currently called outreach falls under midreach.
NASA Astrophysics Data System (ADS)
Taşkin Kaya, Gülşen
2013-10-01
Recently, earthquake damage assessment using satellite images has become a very active research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the scale of individual buildings can be produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult when only spectral information is used during classification. To overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural information is generally time consuming, especially for large areas affected by an earthquake, due to the size of the VHR imagery. Therefore, to produce a quick damage map, the most useful features describing damage patterns, as well as the redundant ones, need to be known in advance. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Textural as well as spectral information was used during the classification. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray-level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification.
The HDMR method was recently proposed as an efficient tool to capture the input-output relationships of high-dimensional systems in many problems in science and engineering. It is designed to make the deduction of high-dimensional behavior more efficient, and is formed by a particular organization of low-dimensional component functions, in which each function represents the contribution of one or more input variables to the output variables.
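The texture step above rests on the gray-level co-occurrence matrix (GLCM). A minimal sketch of a GLCM and two standard Haralick statistics (contrast and homogeneity) follows; the displacement, number of gray levels, and brute-force loops are illustrative choices, not those of the study:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalized to joint probabilities, plus Haralick contrast and
    homogeneity. `img` must hold integer gray levels in [0, levels)."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    m /= m.sum()                                  # joint probabilities
    i, j = np.indices((levels, levels))
    contrast = np.sum(m * (i - j) ** 2)           # large for sharp edges
    homogeneity = np.sum(m / (1.0 + (i - j) ** 2))
    return m, contrast, homogeneity
```

In practice the statistics would be computed per window and per direction and stacked into the texture feature vector.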
Olivares, Ela I; Saavedra, Cristina; Trujillo-Barreto, Nelson J; Iglesias, Jaime
2013-01-01
In face processing tasks, prior presentation of internal facial features, when compared with external ones, facilitates the recognition of subsequently displayed familiar faces. In a previous ERP study (Olivares & Iglesias, 2010) we found a visibly larger N400-like effect when identity-mismatching familiar faces were preceded by internal features, as compared to prior presentation of external ones. In the present study we contrasted the processing of familiar and unfamiliar faces in the face-feature matching task to assess whether the so-called "internal features advantage" relies mainly on the use of stored face-identity-related information or whether it might operate independently of stimulus familiarity. Our participants (N = 24) achieved better performance with internal features as primes and, significantly, with familiar faces. Importantly, ERPs elicited by identity-mismatching complete faces displayed a negativity around 300-600 msec that was clearly enhanced for familiar faces primed by internal features compared with the other experimental conditions. Source reconstruction showed increased activity elicited by familiar stimuli in both posterior (ventral occipitotemporal) and more anterior (parahippocampal (ParaHIP) and orbitofrontal) brain regions. The activity elicited by unfamiliar stimuli was, in general, located in more posterior regions. Our findings suggest that the activation of multiple neural codes is required for optimal individuation in face-feature matching and that a cortical network related to long-term information for face-identity processing seems to support the internal feature effect. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diegert, C.; Sanders, J.A.; Orrison, W.W. Jr.
1992-12-31
Researchers working with MR observations generally agree that far more information is available in a volume (3D) observation than is considered for diagnosis. The key to the new alignment method is that it is based on available surface information. Using the skin surface is effective: a robust algorithm can reliably extract this surface from almost any scan of the head, and a human operator's exquisite sensitivity to facial features allows him to manually align skin surfaces with precision. Following the definitions, we report on a preliminary experiment in which we align three MR observations taken during a single MR examination, each weighting arterial, venous, and tissue features. When accurately aligned, a neurosurgeon can use these features as anatomical landmarks for planning and executing interventional procedures.
Maximum likelihood clustering with dependent feature trees
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using mutual-information and distance-measure criteria. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
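A first-order dependent feature tree built from a mutual-information criterion corresponds to the classical Chow-Liu construction: weight each feature pair by empirical mutual information, then keep a maximum-weight spanning tree. A sketch under that interpretation (the binning choice and data are illustrative, not from the report):

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y, bins=4):
    """Empirical mutual information between two discretized features."""
    joint = np.histogram2d(x, y, bins=bins)[0]
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz]))

def dependent_feature_tree(data):
    """First-order dependence tree over the columns of `data`, chosen
    to maximize total pairwise mutual information (Chow-Liu).
    Returns tree edges as feature-index pairs."""
    d = data.shape[1]
    weights = {(i, j): mutual_information(data[:, i], data[:, j])
               for i, j in combinations(range(d), 2)}
    # Prim's algorithm for the maximum-weight spanning tree
    in_tree, edges = {0}, []
    while len(in_tree) < d:
        i, j = max(((a, b) for a, b in weights
                    if (a in in_tree) != (b in in_tree)),
                   key=lambda e: weights[e])
        edges.append((i, j))
        in_tree.update((i, j))
    return edges
```

Each component density of the mixture would then be approximated by the product of the pairwise densities along these edges.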
Information Concerning Preparation of Specifications for Carpeting.
ERIC Educational Resources Information Center
Gilliland, John W.
This paper argues for detailed, written carpeting specifications to assure that schools obtain quality products at competitive prices. The advantages of and specifications for school carpeting are given. A sample written specification contains items on: scope, general features, materials, acoustic characteristics, identification and acoustic…
Imagine the Universe!. Version 2
NASA Technical Reports Server (NTRS)
Whitlock, Laura A.; Bene, Meredith; Cliffe, J. Allie; Lochner, James C.
1998-01-01
Imagine the Universe! gives students, teachers, and the general public a window on how high-energy astrophysics is used to probe the structure and evolution of the Universe. This is the universe as revealed by X-rays, gamma-rays and cosmic rays. Information about this exciting branch of astronomy is available in Imagine the Universe! at a variety of reading levels, and is illustrated with on-line graphics, animations, and movies. Information is presented on topics ranging from the Sun to black holes to X-ray and gamma-ray satellites. Imagine! also features a Teacher's Corner with study guides, lesson plans, and information on other education resources. Further descriptions of features of the Imagine! site and the other sites included on the CD-ROM may be found in sections V and VI of the booklet file.
A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings
Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun
2017-01-01
The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
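The multi-scale permutation entropy step can be sketched as follows: coarse-grain the signal at each scale, then take the entropy of its ordinal patterns. This is a generic illustration of the published technique, not the authors' code; the scale and order choices here are arbitrary:

```python
import numpy as np
from math import log, factorial

def permutation_entropy(x, order=3, normalize=True):
    """Permutation entropy of a 1-D signal: the entropy of the
    distribution of ordinal patterns of length `order`."""
    patterns = {}
    for i in range(len(x) - order + 1):
        key = tuple(np.argsort(x[i:i + order]))
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    h = -np.sum(p * np.log(p))
    return h / log(factorial(order)) if normalize else h

def multiscale_pe(x, scales=(1, 2, 3), order=3):
    """Coarse-grain the signal at each scale (non-overlapping means),
    then compute permutation entropy per scale."""
    feats = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s]).reshape(n, s).mean(axis=1)
        feats.append(permutation_entropy(coarse, order))
    return feats
```

In the monitoring pipeline these per-scale entropies, computed on each VMD mode, form the state-feature vector that PCA then compresses.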
Altieri, Nicholas; Pisoni, David B.; Townsend, James T.
2012-01-01
Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield’s feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration. PMID:21968081
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with the conventional fault diagnosis with only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energy of intrinsic mode functions (IMFs) are calculated as fault features. These features are added into the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump and the structure and parameters of the Bayesian network is established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when sensor data is used only. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information. PMID:25938760
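The fault-feature step, computing the energy of each intrinsic mode function, can be sketched as below. The EEMD decomposition itself is assumed to be done upstream by an EMD library; the toy "IMFs" here are synthetic sinusoids, not real vibration data:

```python
import numpy as np

def imf_energy_features(imfs):
    """Energy of each intrinsic mode function, normalized to sum to 1.
    These normalized energies are the values fed into the fault-feature
    layer of the Bayesian network."""
    e = np.array([np.sum(np.asarray(imf) ** 2) for imf in imfs], dtype=float)
    return e / e.sum()

# toy "IMFs": two oscillatory components of different amplitude
t = np.linspace(0.0, 1.0, 1000)
imfs = [3 * np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 5 * t)]
feats = imf_energy_features(imfs)
```

The higher-amplitude mode carries proportionally more of the signal energy, which is exactly the kind of shift a bearing or gear fault induces across modes.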
Context-aware and locality-constrained coding for image categorization.
Xiao, Wenhua; Wang, Bin; Liu, Yu; Bao, Weidong; Zhang, Maojun
2014-01-01
Improving the coding strategy for BOF (Bag-of-Features) based feature design has drawn increasing attention in recent image categorization work. However, ambiguity in the coding procedure still impedes its further development. In this paper, we introduce a context-aware and locality-constrained coding (CALC) approach that uses context information to describe objects in a discriminative way. This is achieved by learning a word-to-word co-occurrence prior and using it to impose context information on locality-constrained coding. First, the local context of each category is evaluated by learning a word-to-word co-occurrence matrix representing the spatial distribution of local features in a neighboring region. Then, the learned co-occurrence matrix is used to measure the context distance between local features and code words. Finally, a coding strategy is proposed that simultaneously considers locality in feature space and context space while introducing feature weights. This novel coding strategy not only semantically preserves information in coding but also alleviates the noise distortion within each class. Extensive experiments on several available datasets (Scene-15, Caltech101, and Caltech256) are conducted to validate the superiority of our algorithm by comparing it with baselines and recently published methods. Experimental results show that our method significantly improves on the baselines and achieves performance comparable to, and sometimes better than, the state of the art.
Miller, Kristen; Mosby, Danielle; Capan, Muge; Kowalski, Rebecca; Ratwani, Raj; Noaiseh, Yaman; Kraft, Rachel; Schwartz, Sanford; Weintraub, William S; Arnold, Ryan
2018-05-01
Provider acceptance and associated patient outcomes are widely discussed in the evaluation of clinical decision support systems (CDSSs), but critical design criteria for these tools have generally been overlooked. The objective of this work is to inform electronic health record alert optimization and clinical practice workflow by identifying, compiling, and reporting design recommendations for CDSSs to support the efficient, effective, and timely delivery of high-quality care. A narrative review was conducted from 2000 to 2016 in PubMed and The Journal of Human Factors and Ergonomics Society to identify papers that discussed or recommended design features of CDSSs associated with the success of these systems. Fourteen papers met the criteria and were found to contain a total of 42 unique recommendations; 11 were classified as interface features, 10 as information features, and 21 as interaction features. Features are defined and described, providing actionable guidance that can be applied to CDSS development and policy. To our knowledge, no review at this scale has discussed or recommended design features of CDSSs, so this work fills an important gap in the literature. The recommendations identified in this narrative review will help to optimize the design, organization, management, presentation, and utilization of information through presentation, content, and function. The designation of three categories (interface, information, and interaction) should be further evaluated to determine their critical importance. Future work will determine how to prioritize them with limited resources for designers and developers in order to maximize the clinical utility of CDSSs. This review expands the field of knowledge and provides a novel organizational structure to identify key recommendations for CDSSs.
Lie, Lily; Shetty, Vishwas; Gupta, Karan; Polifka, Janine E; Markham, Glen; Albee, Sarah; Collins, Carol; Hsieh, Gary
2017-01-01
Healthcare providers (HCPs) caring for pregnant patients often need information on drug risks to the embryo or fetus, but such complex information takes time to find and is difficult to convey in an app. In this work, we first surveyed 167 HCPs to understand their current teratogen information-seeking practices to help inform our general design goals. Using the insights gained, we then designed a prototype of a mobile app and tested it with 22 HCPs. We learned that HCPs' information needs in this context can be grouped into three types: to understand, to decide, and to explain. Different sets of information and features may be needed to support these different needs. Further, while some HCPs had concerns about appearing unprofessional and unknowledgeable when using the app in front of patients, many did not. They noted that incorporating mobile information apps into practice improves information access and can help signal care and technology-savviness, in addition to providing an opportunity to engage and educate patients. Implications for design and additional features for reference apps for HCPs are discussed. PMID:29854178
NASA Technical Reports Server (NTRS)
Heldmann, J. L.; Toon, O. B.; Pollard, W. H.; Mellon, M. T.; Pitlick, J.; McKay, C. P.; Andersen, D. T.
2005-01-01
Images from the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft show geologically young small-scale features resembling terrestrial water-carved gullies. An improved understanding of these features has the potential to reveal important information about the hydrological system on Mars, which is of general interest to the planetary science community as well as the field of astrobiology and the search for life on Mars. The young geologic age of these gullies is often thought to be a paradox because liquid water is unstable at the Martian surface. Current temperatures and pressures are generally below the triple point of water (273 K, 6.1 mbar) so that liquid water will spontaneously boil and/or freeze. We therefore examine the flow of water on Mars to determine what conditions are consistent with the observed features of the gullies.
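The stability condition quoted above, temperature and pressure relative to the triple point of water (273 K, 6.1 mbar), can be expressed as a first-order check. This is a crude screen for illustration only; real stability also depends on the local boiling curve and is not a simple threshold:

```python
def liquid_water_possible(temp_k, pressure_mbar):
    """First-order screen against the triple point of water:
    below 6.1 mbar liquid water spontaneously boils (or ice sublimates),
    and below 273 K it freezes. Both thresholds must be exceeded for
    liquid water to be even provisionally stable."""
    return temp_k > 273.0 and pressure_mbar > 6.1

# typical present-day Martian surface conditions (~210 K, ~6 mbar)
# fall below both thresholds, which is the paradox the paper examines
```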
The role of source memory in older adults' recollective experience.
Boywitt, C Dennis; Kuhlmann, Beatrice G; Meiser, Thorsten
2012-06-01
Younger adults' "remember" judgments are accompanied by better memory for the source of an item than "know" judgments. Furthermore, remember judgments are not merely associated with better memory for individual source features but also with bound memory for multiple source features. However, older adults, independent of their subjective memory experience, are generally less likely to "bind" source features to an item and to each other in memory (i.e., the associative deficit). In two experiments, we tested whether memory for perceptual source features, independently or bound, is also the basis for older adults' remember responses or if their associative deficit leads them to base their responses on other types of information. The results suggest that retrieval of perceptual source features, individually or bound, forms the basis for younger but not for older adults' remember judgments even when the overall level of memory for perceptual sources is closely equated (Experiment 1) and when attention is explicitly directed to the source information at encoding (Experiment 2). PsycINFO Database Record (c) 2012 APA, all rights reserved
How a central bank perceives the (visual) communication of security features on its banknotes
NASA Astrophysics Data System (ADS)
Tornare, Roland
1998-04-01
The banknotes of earlier generations were protected by two or three security features with which the general public was familiar: watermark, security thread, intaglio printing. The remaining features pleased primarily printers and central banks, with little thought being given to public perception. The philosophy adopted two decades ago was based on a certain measure of discretion; it required patience and perseverance to discover the built-in security features of the banknotes. When colour photocopiers appeared on the scene in the mid-eighties, we were compelled to take precautionary measures to protect our banknotes. One such measure consisted of an information campaign to prepare ourselves for this new potential threat. At this point, we actually became fully aware of the complex design of our banknotes and how difficult it is to communicate clearly the difference between a genuine and a counterfeit banknote. This difficult experience has nevertheless been of great benefit: it prodded us continually during the initial phase of designing the banknotes and preparing the information campaign.
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
NASA Astrophysics Data System (ADS)
Fan, Lei
Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. This abundant spectral knowledge allows all available information in the data to be mined. These qualities give hyperspectral imaging wide application in mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g.
Very High Resolution (VHR) optical sensors and LiDAR, fusion of multi-source data can in principle produce more detailed information than any single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features from the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector whose coordinates correspond to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find alternative representations of the data. More generally, HSI is multi-dimensional in nature, and the vector representation may lose contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment. In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
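The "nodes with connectivities measured from the proximity of a local neighborhood" construction can be sketched as a symmetric k-nearest-neighbor graph over pixel feature vectors. This is a generic illustration, not the thesis code; the brute-force O(n²) distance computation would be replaced by a spatial index for real HSI data:

```python
import numpy as np

def knn_graph(points, k=3):
    """Symmetric k-nearest-neighbor adjacency matrix from pairwise
    Euclidean distances. Each point is connected to its k nearest
    neighbors, and the graph is symmetrized with a logical OR."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(d2, np.inf)          # a node is not its own neighbor
    nbrs = np.argsort(d2, axis=1)[:, :k]  # indices of the k closest points
    n = len(points)
    adj = np.zeros((n, n), dtype=bool)
    adj[np.repeat(np.arange(n), k), nbrs.ravel()] = True
    return adj | adj.T                    # symmetrize

# two well-separated clusters: edges stay within each cluster
pts = np.array([[0.0, 0], [0.1, 0], [0.2, 0],
                [5.0, 5], [5.1, 5], [5.2, 5]])
adj = knn_graph(pts, k=2)
```

Clustering, feature extraction, and alignment methods then operate on this adjacency (or a weighted variant of it).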
Harvesting geographic features from heterogeneous raster maps
NASA Astrophysics Data System (ADS)
Chiang, Yao-Yi
2010-11-01
Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain the geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps for separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps. For example, the road lines are elongated linear objects and the characters are small connected-objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. 
By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, the separated feature layers from the map, and the recognized features from the layers with the geospatial dataset. The road vectorization and text recognition results outperform state-of-the-art commercial products, with considerably less user input. The approach in this thesis allows us to make use of the geospatial information locked in heterogeneous raster-format maps.
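The automatic separation step described above, elongated linear objects treated as roads versus small connected objects treated as characters, can be caricatured with a connected-component size heuristic. This is a deliberately simplified sketch (function names and the size threshold are mine); the thesis's actual technique also exploits color usage and handles poor image quality:

```python
import numpy as np

def label_components(mask):
    """4-connected component labelling via flood fill (no SciPy needed)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        stack = [start]
        while stack:
            r, c = stack.pop()
            if (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                    and mask[r, c] and not labels[r, c]):
                labels[r, c] = current
                stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return labels, current

def split_layers(mask, max_text_size=10):
    """Heuristic layer separation: small components -> text, large -> roads."""
    labels, n = label_components(mask)
    text = np.zeros_like(mask)
    roads = np.zeros_like(mask)
    for lab in range(1, n + 1):
        comp = labels == lab
        (text if comp.sum() <= max_text_size else roads)[comp] = True
    return roads, text

mask = np.zeros((10, 30), dtype=bool)
mask[5, 2:25] = True    # elongated, road-like stroke (23 px)
mask[1:3, 1:3] = True   # small, character-like blob (4 px)
roads, text = split_layers(mask)
```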
"Is There An App For That?" Orthopaedic Patient Preferences For A Smartphone Application.
Datillo, Jonathan R; Gittings, Daniel J; Sloan, Matthew; Hardaker, William M; Deasey, Matthew J; Sheth, Neil P
2017-08-16
Patients are seeking out medical information on the Internet and utilizing smartphone health applications ("apps"). Smartphone use has increased exponentially among orthopaedic surgeons and patients. Despite this increase, patients are rarely directed to specific apps by physicians. No study exists querying patient preferences for a patient-centered, orthopaedic smartphone application. The purpose of this study is to (1) determine Internet use patterns amongst orthopaedic patients; (2) ascertain access to and use of smartphones; and (3) elucidate what features orthopaedic patients find most important in a smartphone application. We surveyed patients in an orthopaedic practice in an urban academic center to assess demographics, access to and patterns of Internet and smartphone use, and preferences for features in a smartphone app. A total of 310 surveys were completed. Eighty percent of patients reported Internet access, and 62% used the Internet for health information. Seventy-seven percent owned smartphones, 45% used them for health information, and 28% owned health apps. Only 11% were referred to an app by a physician. The highest-ranked features were appointment reminders, the ability to view test results, communication with physicians, and discharge instructions. General orthopaedic information and pictures or videos explaining surgery were the two lowest-ranked features. Seventy-one percent of patients felt an app with some of the described features would improve their healthcare experiences, and 40% would pay for the app. The smartphone is an under-utilized tool to enhance patient-physician communication, increase satisfaction, and improve quality of care. Patients were enthusiastic about app features that are often included in patient health portals, but ranked orthopaedic educational features lowest.
Further study is required to elucidate how best to use orthopaedic apps as physician-directed educational opportunities to promote patient satisfaction and quality of care.
Wilderness Medicine Newsletter, Volume 5.
ERIC Educational Resources Information Center
Wilderness Medicine Newsletter, 1994
1994-01-01
This volume of newsletters addresses issues related to the treatment and prevention of medical emergencies in the wilderness. Each issue includes feature articles, book reviews, product reviews, letters to the editor, notices of upcoming wilderness conferences and training courses, additional resources, and general information relevant to medical…
Total Library Computerization for Windows.
ERIC Educational Resources Information Center
Combs, Joseph, Jr.
1999-01-01
Presents a general review of features of version 2.1 of Total Library Computerization (TLC) for Windows from On Point, Inc. Includes information about pricing, hardware and operating systems, modules/functions available, user interface, security, on-line catalog functions, circulation, cataloging, and documentation and online help. A table…
The Literacy Component of Mathematical and Scientific Literacy
ERIC Educational Resources Information Center
Yore, Larry D.; Pimm, David; Tuan, Hsiao-Lin
2007-01-01
This opening article of the Special Issue makes an argument for parallel definitions of scientific literacy and mathematical literacy that have shared features: importance of general cognitive and metacognitive abilities and reasoning/thinking and discipline-specific language, habits-of-mind/emotional dispositions, and information communication…
A semantic model for multimodal data mining in healthcare information systems.
Iakovidis, Dimitris; Smailis, Christos
2012-01-01
Electronic health records (EHRs) are representative examples of multimodal/multisource data collections, including measurements, images and free texts. The diversity of such information sources and the increasing amounts of medical data produced by healthcare institutes annually pose significant challenges in data mining. In this paper we present a novel semantic model that describes knowledge extracted from the lowest level of a data mining process, where information is represented by multiple features, i.e., measurements or numerical descriptors extracted from measurements, images, texts or other medical data, forming multidimensional feature spaces. Knowledge collected by manual annotation or extracted by unsupervised data mining from one or more feature spaces is modeled through generalized qualitative spatial semantics. This model enables a unified representation of knowledge across multimodal data repositories. It contributes to bridging the semantic gap by enabling direct links between low-level features and higher-level concepts, e.g., those describing body parts, anatomies and pathological findings. The proposed model has been developed in web ontology language based on description logics (OWL-DL) and can be applied to a variety of data mining tasks in medical informatics. Its utility is demonstrated for automatic annotation of medical data.
A keyword spotting model using perceptually significant energy features
NASA Astrophysics Data System (ADS)
Umakanthan, Padmalochini
The task of a keyword recognition system is to detect the presence of certain words in a conversation based on the linguistic information present in human speech. Such keyword spotting systems have applications in homeland security, telephone surveillance and human-computer interfacing. The general procedure of a keyword spotting system involves feature generation and matching. In this work, a new set of features based on the psycho-acoustic masking nature of human speech is proposed. After developing these features, a time-aligned pattern matching process was implemented to locate the keywords within a set of unknown words. A word boundary detection technique based on frame classification using the nonlinear characteristics of speech is also addressed in this work. Validation of this keyword spotting model was done using the well-established cepstral features. The experimental results indicate the viability of using these perceptually significant features as an augmented feature set in keyword spotting.
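The time-aligned pattern matching step mentioned above is commonly realized with dynamic time warping (DTW); a minimal 1-D DTW distance might look as follows. This is an illustrative sketch only; the thesis's perceptual features and actual matcher are not shown:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences.

    A standard choice for time-aligned matching: D[i, j] accumulates the
    cheapest alignment cost of the first i frames of `a` against the
    first j frames of `b`.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # allow match, insertion, or deletion of a frame
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A keyword spotter would slide a keyword template over the utterance's feature stream and flag windows whose DTW distance falls below a threshold.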
Giraldo, Sergio I; Ramirez, Rafael
2016-01-01
Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data-driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and the tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator of the generality of the ornamentation rules.
PMID:28066290
Ding, Liya; Martinez, Aleix M
2010-11-01
The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. 
We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
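The core idea above, scoring a candidate location by its resemblance to the facial feature and its dissimilarity to that feature's context, can be sketched with simple template correlation. The paper learns full statistical subclass models; the single-template scorer below is only an illustrative stand-in:

```python
import numpy as np

def detection_score(patch, feature_tmpl, context_tmpl):
    """Score a candidate location: resemble the feature, differ from context.

    Uses normalized cross-correlation (NCC) against mean templates;
    higher scores indicate a better feature location.
    """
    def ncc(x, y):
        x = (x - x.mean()) / (x.std() + 1e-9)
        y = (y - y.mean()) / (y.std() + 1e-9)
        return float((x * y).mean())
    # reward similarity to the feature, penalize similarity to its context
    return ncc(patch, feature_tmpl) - ncc(patch, context_tmpl)
```

Scanning this score over an image pulls the detector away from context-like regions and toward an accurate feature position, mirroring the mechanism the abstract describes.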
Pharmacy Information Systems in Teaching Hospitals: A Multi-dimensional Evaluation Study.
Kazemi, Alireza; Rabiei, Reza; Moghaddasi, Hamid; Deimazar, Ghasem
2016-07-01
In hospitals, the pharmacy information system (PIS) is usually a sub-system of the hospital information system (HIS). The PIS supports the distribution and management of drugs, shows drug and medical device inventory, and facilitates preparing needed reports. In this study, pharmacy information systems implemented in general teaching hospitals affiliated to medical universities in Tehran (Iran) were evaluated using a multi-dimensional tool. This was an evaluation study conducted in 2015. To collect data, a checklist was developed by reviewing the relevant literature; this checklist included both general and specific criteria to evaluate pharmacy information systems. The checklist was then validated by medical informatics experts and pharmacists. The sample of the study included five PIS in general-teaching hospitals affiliated to three medical universities in Tehran (Iran). Data were collected using the checklist and through observing the systems. The findings were presented as tables. Five PIS were evaluated in the five general-teaching hospitals that had the highest bed numbers. The findings showed that the evaluated pharmacy information systems lacked some important general and specific criteria. Among the general evaluation criteria, it was found that only two of the PIS studied were capable of restricting repeated attempts made for unauthorized access to the systems. With respect to the specific evaluation criteria, no attention was paid to the patient safety aspect. The PIS studied were mainly designed to support financial tasks; little attention was paid to clinical and patient safety features.
Information processing of motion in facial expression and the geometry of dynamical systems
NASA Astrophysics Data System (ADS)
Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.
2005-01-01
An interesting problem in analysis of video data concerns design of algorithms that detect perceptually significant features in an unsupervised manner, for instance, methods of machine learning for automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space XP whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space XP are independent of subjective evaluations by observers. While the "subjective geometry" of XP varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, statistical geometry of invariants of XP for a sample of population could provide effective algorithms for extraction of such features. In cases where frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encode motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.
Development of a general-purpose, integrated knowledge capture and delivery system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, A.G.; Freer, E.B.
1991-01-01
KATIE (Knowledge-Based Assistant for Troubleshooting Industrial Equipment) was first conceived as a solution for maintenance problems. In the area of process control, maintenance technicians have become responsible for increasingly complicated equipment and an overwhelming amount of associated information. The sophisticated distributed control systems have proven to be such a drastic change for technicians that they are forced to rely on the engineer for troubleshooting guidance. Because it is difficult for a knowledgeable engineer to be readily available for troubleshooting, maintenance personnel wish to capture the information provided by the engineer. The solution provided has two stages. First, a specific complicated system was chosen as a test case, and an effort was made to gather all available system information in some form. Second, a method of capturing and delivering this collection of information was developed. Several features were desired for this knowledge capture/delivery system (KATIE). Creation of the knowledge base needed to be independent of the delivery system. The delivery path needed to be as simple as possible for the technician, while the capture, or authoring, system could provide very sophisticated features. It was decided that KATIE should be as general as possible, not internalizing specifics about the first implementation. The knowledge bases created needed to be completely separate from KATIE and needed to have a modular structure so that each type of information (rules, procedures, manuals, symptoms) could be encapsulated individually.
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
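The masking idea at the heart of the algorithm, suppressing uninformative features while recording a per-point mask of which features carry information, can be sketched as follows. This is a simplified illustration of the masking step only (threshold rule and names are mine); the full method runs EM over the resulting virtual distributions:

```python
import numpy as np

def mask_features(X, threshold):
    """Masked representation used before EM.

    Features whose magnitude falls below a noise threshold are replaced
    by the noise distribution's mean, and a 0/1 mask records which
    features carry information for each data point.
    """
    M = (np.abs(X) > threshold).astype(float)
    sub = X[np.abs(X) <= threshold]
    noise_mean = sub.mean() if sub.size else 0.0
    # informative entries kept; sub-threshold entries replaced by noise mean
    Y = M * X + (1 - M) * noise_mean
    return Y, M

X = np.array([[5.0, 0.1], [-4.0, -0.2]])
Y, M = mask_features(X, threshold=1.0)
```

The mask `M` is what lets the EM step restrict each point's likelihood computation to its own informative feature subset, which is why classical one-subset-for-all feature selection is not needed.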
Evaluation of personal digital assistant drug information databases for the managed care pharmacist.
Lowry, Colleen M; Kostka-Rokosz, Maria D; McCloskey, William W
2003-01-01
Personal digital assistants (PDAs) are becoming a necessity for practicing pharmacists. They offer a time-saving and convenient way to obtain current drug information. Several software companies now offer general drug information databases for use on handheld computers. PDAs priced less than 200 US dollars often have limited memory capacity; therefore, the user must choose from a growing list of general drug information database options in order to maximize utility without exceeding memory capacity. This paper reviews the attributes of available general drug information software databases for the PDA. It provides information on the content, advantages, limitations, pricing, memory requirements, and accessibility of drug information software databases. Ten drug information databases were subjectively analyzed and evaluated based on information from the product's Web site, vendor Web sites, and from our experience. Some of these databases have attractive auxiliary features such as kinetics calculators, disease references, drug-drug and drug-herb interaction tools, and clinical guidelines, which may make them more useful to the PDA user. Not all drug information databases are equal with regard to content, author credentials, frequency of updates, and memory requirements. The user must therefore evaluate databases for completeness, currency, and cost effectiveness before purchase. In addition, consideration should be given to the ease of use and flexibility of individual programs.
Use of Patient Portals for Personal Health Information Management: The Older Adult Perspective
Turner, Anne M.; Osterhage, Katie; Hartzler, Andrea; Joe, Jonathan; Lin, Lorelei; Kanagat, Natasha; Demiris, George
2015-01-01
The personal health information management (PHIM) practices and needs of older adults are poorly understood. We describe initial results from the UW SOARING project (Studying Older Adults & Researching Information Needs and Goals), a participatory design investigation of PHIM in older adults (60 years and older). We conducted in-depth interviews with older adults (n=74) living in a variety of residential settings about their management of personal health information. A surprising 20% of participants reported using patient portals, and another 16% reported prior use or anticipated use of portals in the future. Participants cited ease of access to health information and direct communication with providers as valuable portal features. Barriers to the use of patient portals include a general lack of computer proficiency, high internet costs and security concerns. Design features based on consideration of needs and practices of older adults will facilitate appeal and maximize usability; both are elements critical to adoption of tools such as patient portals that can support older adults and PHIM. PMID:26958263
Water resources of Duval County, Florida
Phelps, G.G.
1994-01-01
The report describes the hydrology and water resources of Duval County, the development of its water supplies, and water use within the county. Also included are descriptions of various natural features of the county (such as topography and geology), an explanation of the hydrologic cycle, and an interpretation of the relationship between them. Ground-water and surface-water resources and principal water-quality features within the county are also discussed. The report is intended to provide the general public with an overview of the water resources of Duval County, and to increase public awareness of water issues. Information is presented in nontechnical language to enable the general reader to understand facts about water as a part of nature, and the problems associated with its development and use.
Optimal Prediction in the Retina and Natural Motion Statistics
NASA Astrophysics Data System (ADS)
Salisbury, Jared M.; Palmer, Stephanie E.
2016-03-01
Almost all behaviors involve making predictions. Whether an organism is trying to catch prey, avoid predators, or simply move through a complex environment, the organism uses the data it collects through its senses to guide its actions by extracting from these data information about the future state of the world. A key aspect of the prediction problem is that not all features of the past sensory input have predictive power, and representing all features of the external sensory world is prohibitively costly both due to space and metabolic constraints. This leads to the hypothesis that neural systems are optimized for prediction. Here we describe theoretical and computational efforts to define and quantify the efficient representation of the predictive information by the brain. Another important feature of the prediction problem is that the physics of the world is diverse enough to contain a wide range of possible statistical ensembles, yet not all inputs are probable. Thus, the brain might not be a generalized predictive machine; it might have evolved to specifically solve the prediction problems most common in the natural environment. This paper summarizes recent results on predictive coding and optimal predictive information in the retina and suggests approaches for quantifying prediction in response to natural motion. Basic statistics of natural movies reveal that general patterns of spatiotemporal correlation are present across a wide range of scenes, though individual differences in motion type may be important for optimal processing of motion in a given ecological niche.
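The predictive information discussed here is the mutual information between the past and future of the sensory input; for a 1-D signal, a crude plug-in estimate can be computed from a joint histogram. This is an illustrative estimator (binning, lag, and names are my choices), not the authors' analysis:

```python
import numpy as np

def predictive_information(x, lag=1, bins=8):
    """Plug-in estimate of I(x_t ; x_{t+lag}) for a 1-D signal, in bits.

    Builds the joint histogram of (past, future) sample pairs and applies
    the discrete mutual-information formula; crude, but enough to compare
    a predictable signal against white noise.
    """
    past, future = x[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(past, future, bins=bins)
    p = joint / joint.sum()
    px = p.sum(1, keepdims=True)   # marginal over the past
    py = p.sum(0, keepdims=True)   # marginal over the future
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())
```

A random walk, whose future is strongly constrained by its past, should score far higher than temporally uncorrelated noise, matching the intuition that only some input ensembles carry predictive structure worth encoding.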
Qudit hypergraph states and their properties
NASA Astrophysics Data System (ADS)
Xiong, Fei-Lei; Zhen, Yi-Zheng; Cao, Wen-Fei; Chen, Kai; Chen, Zeng-Bing
2018-01-01
Hypergraph states, a generalization of graph states, constitute a large class of quantum states with intriguing nonlocal properties, and they have promising applications in quantum information science and technology. In this paper, we study some features of an independently proposed generalization of hypergraph states to qudit hypergraph states, i.e., each vertex in the generalized hypergraph (multi-hypergraph) represents a d-level system instead of a two-level one. It is shown that multi-hypergraphs and d-level hypergraph states have a one-to-one correspondence, and the structure of a multi-hypergraph exhibits the entanglement property of the corresponding quantum state. We discuss their relationship with some well-known state classes, e.g., real equally weighted states and stabilizer states. The Bell nonlocality, an important resource in fulfilling many quantum information tasks, is also investigated.
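For reference, the standard construction the abstract builds on can be written compactly; notation follows the usual hypergraph-state literature, and hyperedge multiplicities are omitted for simplicity:

```latex
% n-qudit hypergraph state for a multi-hypergraph G = (V, E):
% |+> is the d-level uniform superposition and C_e the generalized
% controlled-phase gate attached to hyperedge e.
\[
  |G\rangle \;=\; \prod_{e \in E} C_e \, |+\rangle^{\otimes n},
  \qquad
  |+\rangle \;=\; \frac{1}{\sqrt{d}} \sum_{k=0}^{d-1} |k\rangle ,
\]
\[
  C_e \, |k_{v_1}, \dots, k_{v_m}\rangle
  \;=\; \omega^{\,k_{v_1} k_{v_2} \cdots k_{v_m}} \,
        |k_{v_1}, \dots, k_{v_m}\rangle ,
  \qquad
  \omega = e^{2\pi i / d},
\]
% For d = 2 and hyperedges of size two, this reduces to ordinary graph
% states with controlled-Z gates.
```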
Point Counterpoint: Teaching Punctuation as Information Management.
ERIC Educational Resources Information Center
Mann, Nancy
2003-01-01
Argues that the punctuation system does have features that generally make systems learnable, such as binary contrasts, limitation of parallel categories to seven or fewer options, and repeated application of the same criterion to different kinds of entities. Concludes that the simplicity that allows some readers to learn this system unconsciously…
Rapporteur-General's Oral Report.
ERIC Educational Resources Information Center
Wiltshire, Kenneth
An international congress produced discussion and debate regarding the factors that will shape technical and vocational education in the new century and the new millennium. Speakers declared that the 21st century will be an era of knowledge, information, and civilization. According to the speakers, key features of the 21st century will include…
NASA Technical Reports Server (NTRS)
Boggs, Karen; Gutheinz, Sandy C.; Watanabe, Susan M.; Oks, Boris; Arca, Jeremy M.; Stanboli, Alice; Peez, Martin; Whatmore, Rebecca; Kang, Minliang; Espinoza, Luis A.
2010-01-01
Space Images for NASA/JPL is an Apple iPhone application that allows the general public to access featured images from the Jet Propulsion Laboratory (JPL). A back-end infrastructure stores, tracks, and retrieves space images from the JPL Photojournal Web server, and catalogs the information into a streamlined rating infrastructure.
Reduction in Force. [and] Teacher Burnout.
ERIC Educational Resources Information Center
Dialogue: A Review of Labor-Management Cooperation in Public Education, 1984
1984-01-01
"Dialogue" is a review of labor-management cooperation in public education, whose goal is to provide teachers, administrators, school boards, and labor relations practitioners with analyses of critical issues, information about current projects, reviews of relevant literature, and a variety of special features. Each issue is generally devoted to a…
Nuclear Power Plants. Revised.
ERIC Educational Resources Information Center
Lyerly, Ray L.; Mitchell, Walter, III
This publication is one of a series of information booklets for the general public published by the United States Atomic Energy Commission. Among the topics discussed are: Why Use Nuclear Power?; From Atoms to Electricity; Reactor Types; Typical Plant Design Features; The Cost of Nuclear Power; Plants in the United States; Developments in Foreign…
Population Education Accessions List. January-April, 1999.
ERIC Educational Resources Information Center
United Nations Educational, Scientific, and Cultural Organization, Bangkok (Thailand). Regional Office for Education in Asia and the Pacific.
This document features output from a computerized bibliographic database. The list categorizes entries into three parts. Part I, Population Education, consists of titles that address various aspects of population education arranged by country in the first section and general materials in the second. Part II, Knowledge Base Information, consists of…
Word Recognition: Theoretical Issues and Instructional Hints.
ERIC Educational Resources Information Center
Smith, Edward E.; Kleiman, Glenn M.
Research on adult readers' word recognition skills is used in this paper to develop a general information processing model of reading. Stages of the model include feature extraction, interpretation, lexical access, working memory, and integration. Of those stages, particular attention is given to the units of interpretation, speech recoding and…
Count Me In: Resource Manual on Disabilities.
ERIC Educational Resources Information Center
Milota, Cathy; And Others
This resource guide presents general information about disabilities and summaries of relevant federal laws. A question-and-answer format is used to highlight key features of the Education for All Handicapped Children Act (Public Law 94-142, reauthorized in 1990 as the Individuals with Disabilities Education Act); Section 504 of the Rehabilitation…
E&V (Evaluation and Validation) Reference Manual, Version 1.0.
1988-07-01
references featured in the Reference Manual. The manual provides general reference information extracted from indexes and cross references (Chapter 4); it allows the reader to arrive at E&V techniques through many different paths, and provides a means to extract useful information along the way. Comments may be sent electronically (preferred) to szymansk@ajpo.sei.cmu.edu or by regular mail to Mr. Raymond Szymanski, AFWAL/AAAF, Wright-Patterson AFB, OH 45433-6543.
Learning discriminative functional network features of schizophrenia
NASA Astrophysics Data System (ADS)
Gheiratmand, Mina; Rish, Irina; Cecchi, Guillermo; Brown, Matthew; Greiner, Russell; Bashivan, Pouya; Polosecki, Pablo; Dursun, Serdar
2017-03-01
Associating schizophrenia with disrupted functional connectivity is a central idea in schizophrenia research. However, identifying neuroimaging-based features that can serve as reliable "statistical biomarkers" of the disease remains a challenging open problem. We argue that generalization accuracy and stability of candidate features ("biomarkers") must be used as additional criteria on top of standard significance tests in order to discover more robust biomarkers. Generalization accuracy refers to the utility of biomarkers for making predictions about individuals, for example, discriminating between patients and controls, in novel datasets. Feature stability refers to the reproducibility of the candidate features across different datasets. Here, we extracted functional connectivity network features from fMRI data at both high resolution (voxel level) and a spatially down-sampled lower resolution ("supervoxel" level). At the supervoxel level, we used whole-brain network links, while at the voxel level, due to the intractably large number of features, we sampled a subset of them. We compared statistical significance, stability and discriminative utility of both feature types in a multi-site fMRI dataset, composed of schizophrenia patients and healthy controls. For both feature types, a considerable fraction of features showed significant differences between the two groups. Also, both feature types were similarly stable across multiple data subsets. However, the whole-brain supervoxel functional connectivity features showed a higher cross-validation classification accuracy of 78.7% vs. 72.4% for the voxel-level features. Cross-site variability and heterogeneity in the patient samples in the multi-site FBIRN dataset made the task more challenging compared to single-site studies. The use of the above methodology, in combination with the fully data-driven approach using whole-brain information, has the potential to shed light on "biomarker discovery" in schizophrenia.
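The feature-stability criterion proposed above, reproducibility of selected features across data subsets, can be illustrated by measuring the overlap of top-k feature sets chosen on random half-splits of the data. The scoring rule and the Jaccard overlap below are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def selection_stability(X, y, k=5, n_splits=10, seed=0):
    """Stability of top-k feature selection across random half-splits,
    measured as the mean pairwise Jaccard overlap of the selected sets."""
    rng = np.random.default_rng(seed)
    n = len(y)
    sets = []
    for _ in range(n_splits):
        idx = rng.choice(n, n // 2, replace=False)
        Xs, ys = X[idx], y[idx]
        # score each feature by absolute difference of class means
        score = np.abs(Xs[ys == 1].mean(0) - Xs[ys == 0].mean(0))
        sets.append(frozenset(np.argsort(score)[-k:]))
    pairs = [(a, b) for i, a in enumerate(sets) for b in sets[i + 1:]]
    return float(np.mean([len(a & b) / len(a | b) for a, b in pairs]))
```

A stability near 1 means the same candidate biomarkers are recovered regardless of which subjects are sampled; unstable features are suspect even when individually significant.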
DARHT Multi-intelligence Seismic and Acoustic Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Garrison Nicole; Van Buren, Kendra Lu; Hemez, Francois M.
The purpose of this report is to document the analysis of seismic and acoustic data collected at the Dual-Axis Radiographic Hydrodynamic Test (DARHT) facility at Los Alamos National Laboratory for robust, multi-intelligence decision making. The data utilized herein is obtained from two tri-axial seismic sensors and three acoustic sensors, resulting in a total of nine data channels. The goal of this analysis is to develop a generalized, automated framework to determine internal operations at DARHT using informative features extracted from measurements collected external to the facility. Our framework involves four components: (1) feature extraction, (2) data fusion, (3) classification, and finally (4) robustness analysis. Two approaches are taken for extracting features from the data. The first of these, generic feature extraction, involves extraction of statistical features from the nine data channels. The second approach, event detection, identifies specific events relevant to traffic entering and leaving the facility as well as explosive activities at DARHT and nearby explosive testing sites. Event detection is completed using a two-stage method, first utilizing signatures in the frequency domain to identify outliers and second extracting short-duration events of interest among these outliers by evaluating residuals of an autoregressive exogenous time-series model. Features extracted from each data set are then fused to perform analysis with a multi-intelligence paradigm, where information from multiple data sets is combined to generate more information than is available through analysis of each independently. The fused feature set is used to train a statistical classifier and predict the state of operations to inform a decision maker. We demonstrate this classification using both generic statistical features and event detection and provide a comparison of the two methods. Finally, the concept of decision robustness is presented through a preliminary analysis in which uncertainty is added to the system through noise in the measurements.
Deep and Structured Robust Information Theoretic Learning for Image Analysis.
Deng, Yue; Bao, Feng; Deng, Xuesong; Wang, Ruiping; Kong, Youyong; Dai, Qionghai
2016-07-07
This paper presents a robust information theoretic (RIT) model to reduce uncertainties, i.e., missing and noisy labels, in general discriminative data representation tasks. The fundamental pursuit of our model is to simultaneously learn a transformation function and a discriminative classifier that maximize the mutual information of the data and their labels in the latent space. Within this general paradigm, we discuss three RIT implementations: linear subspace embedding, deep transformation, and structured sparse learning. In practice, the RIT and deep RIT models are applied to the image categorization task, and their performance is verified on various benchmark datasets. The structured sparse RIT is further applied to a medical image analysis task, brain MRI segmentation, that allows group-level feature selection on the brain tissues.
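The mutual-information objective at the heart of the RIT model can be illustrated in the discrete case with a toy empirical estimator (a sketch only, not the paper's latent-space formulation):

```python
from collections import Counter
from math import log

def mutual_information(z, y):
    """Empirical mutual information (in nats) between two discrete sequences:
    I(Z;Y) = sum over (z,y) of p(z,y) * log(p(z,y) / (p(z) * p(y)))."""
    n = len(z)
    pz, py, pzy = Counter(z), Counter(y), Counter(zip(z, y))
    return sum((c / n) * log((c / n) / ((pz[a] / n) * (py[b] / n)))
               for (a, b), c in pzy.items())

labels = [0, 0, 1, 1, 0, 1]
aligned = labels                 # a representation that mirrors the labels
weak = [0, 1, 0, 1, 0, 1]        # a representation weakly related to them
mi_aligned = mutual_information(aligned, labels)
mi_weak = mutual_information(weak, labels)
```

A representation perfectly aligned with the labels attains the maximal value (the label entropy), which is the quantity an RIT-style learner pushes the latent space toward.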
Ultra-wideband three-dimensional optoacoustic tomography.
Gateau, Jérôme; Chekkoury, Andrei; Ntziachristos, Vasilis
2013-11-15
Broadband optoacoustic waves generated by biological tissues excited with nanosecond laser pulses carry information corresponding to a wide range of geometrical scales. Typically, the frequency content present in the signals generated during optoacoustic imaging is much broader than the frequency band captured by common ultrasonic detectors, the latter typically acting as bandpass filters. To image optical absorption within structures ranging from entire organs to microvasculature in three dimensions, we implemented optoacoustic tomography with two ultrasound linear arrays with center frequencies of 6 and 24 MHz, respectively. In the present work, we show that complementary information on anatomical features could be retrieved and provide a better understanding of the localization of structures in the general anatomy by analyzing multi-bandwidth datasets acquired on a freshly excised kidney.
Barbara, Angela M; Dobbins, Maureen; Haynes, R Brian; Iorio, Alfonso; Lavis, John N; Raina, Parminder; Levinson, Anthony J
2016-05-11
Increasingly, older adults and their informal caregivers are using the Internet to search for health-related information. There is a proliferation of health information online, but the quality of this information varies, often based on exaggerated or dramatic findings, and not easily comprehended by consumers. The McMaster Optimal Aging Portal (Portal) was developed to provide Internet users with high-quality evidence about aging and address some of these current limitations of health information posted online. The Portal includes content for health professionals coming from three best-in-class resources (MacPLUS, Health Evidence, and Health Systems Evidence) and four types of content specifically prepared for the general public (Evidence Summaries, Web Resource Ratings, Blog Posts, and Twitter messages). Our objectives were to share the findings of the usability evaluation of the Portal with particular focus on the content features for the general public and to inform designers of health information websites and online resources for older adults about key usability themes. Data analysis included task performance during usability testing and qualitative content analyses of both the usability sessions and interviews to identify core themes. A total of 37 participants took part in 33 usability testing sessions and 21 focused interviews. Qualitative analysis revealed common themes regarding the Portal's strengths and challenges to usability. The strengths of the website were related to credibility, applicability, browsing function, design, and accessibility. The usability challenges included reluctance to register, process of registering, searching, terminology, and technical features. The study reinforced the importance of including end users during the development of this unique, dynamic, evidence-based health information website. The feedback was applied to iteratively improve website usability. Our findings can be applied by designers of health-related websites.
Fine-Granularity Functional Interaction Signatures for Characterization of Brain Conditions
Hu, Xintao; Zhu, Dajiang; Lv, Peili; Li, Kaiming; Han, Junwei; Wang, Lihong; Shen, Dinggang; Guo, Lei; Liu, Tianming
2014-01-01
In the human brain, functional activity occurs at multiple spatial scales. Current studies on functional brain networks and their alterations in brain diseases via resting-state functional magnetic resonance imaging (rs-fMRI) are generally either at local scale (regionally confined analysis and inter-regional functional connectivity analysis) or at global scale (graph theoretic analysis). In contrast, inferring functional interaction at fine-granularity sub-network scale has not been adequately explored yet. Here our hypothesis is that functional interaction measured at fine-granularity sub-network scale can provide new insight into the neural mechanisms of neurological and psychological conditions, thus offering complementary information for healthy and diseased population classification. In this paper, we derived fine-granularity functional interaction (FGFI) signatures in subjects with Mild Cognitive Impairment (MCI) and Schizophrenia by diffusion tensor imaging (DTI) and rs-fMRI, and used patient-control classification experiments to evaluate the distinctiveness of the derived FGFI features. Our experimental results have shown that the FGFI features alone can achieve classification performance comparable to the commonly used inter-regional connectivity features. However, the classification performance can be substantially improved when FGFI features and inter-regional connectivity features are integrated, suggesting the complementary information provided by the FGFI signatures. PMID:23319242
Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas paths of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps select the informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.
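The entropy-based uniformity measure can be sketched directly from Shannon's definition; the temperatures below are illustrative values, not data from the paper:

```python
from math import log

def temperature_entropy(temps):
    """Shannon entropy of the normalized exhaust-temperature distribution;
    a uniform gas path maximizes it, a hot or cold channel lowers it."""
    total = sum(temps)
    probs = [t / total for t in temps]
    return -sum(p * log(p) for p in probs if p > 0)

h_healthy = temperature_entropy([500.0, 502.0, 498.0, 501.0])  # near-uniform
h_faulty = temperature_entropy([500.0, 700.0, 450.0, 430.0])   # one hot channel
```

A drop in this entropy relative to the healthy baseline flags a non-uniform gas path, which is the condition indicator the kernelized model builds on.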
Active transportation safety features around schools in Canada.
Pinkerton, Bryn; Rosu, Andrei; Janssen, Ian; Pickett, William
2013-10-31
The purpose of this study was to describe the presence and quality of active transportation safety features in Canadian school environments that relate to pedestrian and bicycle safety. Variations in these features and associated traffic concerns as perceived by school administrators were examined by geographic status and school type. The study was based on schools that participated in the 2009/2010 Health Behaviour in School-aged Children (HBSC) survey. ArcGIS software version 10 and Google Earth were used to assess the presence and quality of ten different active transportation safety features. Findings suggest that there are crosswalks and good sidewalk coverage in the environments surrounding most Canadian schools, but a dearth of bicycle lanes and other traffic calming measures (e.g., speed bumps, traffic chokers). Significant urban/rural inequities exist, with a greater prevalence of sidewalk coverage, crosswalks, traffic medians, and speed bumps in urban areas. With the exception of bicycle lanes, the active transportation safety features that were present were generally rated as high quality. Traffic was more of a concern to administrators in urban areas. This study provides novel information about active transportation safety features in Canadian school environments. This information could help guide public health efforts aimed at increasing active transportation levels while simultaneously decreasing active transportation injuries.
Information models of software productivity - Limits on productivity growth
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1992-01-01
Research into generalized information-metric models of software process productivity establishes quantifiable behavior and theoretical bounds. The models establish a fundamental mathematical relationship between software productivity and the human capacity for information traffic, the software product yield (system size), information efficiency, and tool and process efficiencies. An upper bound is derived that quantifies average software productivity and the maximum rate at which it may grow. This bound reveals that ultimately, when tools, methodologies, and automated assistants have reached their maximum effective state, further improvement in productivity can only be achieved through increasing software reuse. The reuse advantage is shown not to increase faster than logarithmically in the number of reusable features available. The reuse bound is further shown to depend on the reuse policy: a general 'reuse everything' policy can lead to somewhat slower productivity growth than a specialized reuse policy.
Statistical molecular design of balanced compound libraries for QSAR modeling.
Linusson, A; Elofsson, M; Andersson, I E; Dahlgren, M K
2010-01-01
A fundamental step in preclinical drug development is the computation of quantitative structure-activity relationship (QSAR) models, i.e. models that link chemical features of compounds with activities towards a target macromolecule associated with the initiation or progression of a disease. QSAR models are computed by combining information on the physicochemical and structural features of a library of congeneric compounds, typically assembled from two or more building blocks, and biological data from one or more in vitro assays. Since the models provide information on features affecting the compounds' biological activity they can be used as guides for further optimization. However, in order for a QSAR model to be relevant to the targeted disease, and drug development in general, the compound library used must contain molecules with balanced variation of the features spanning the chemical space believed to be important for interaction with the biological target. In addition, the assays used must be robust and deliver high quality data that are directly related to the function of the biological target and the associated disease state. In this review, we discuss and exemplify the concept of statistical molecular design (SMD) in the selection of building blocks and final synthetic targets (i.e. compounds to synthesize) to generate information-rich, balanced libraries for biological testing and computation of QSAR models.
Enhanced facial recognition for thermal imagery using polarimetric imaging.
Gurton, Kristan P; Yuffa, Alex J; Videen, Gorden W
2014-07-01
We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. Polarimetric image sets considered include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization image.
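The degree-of-linear-polarization image mentioned above follows the standard definition DoLP = sqrt(S1^2 + S2^2) / S0, computed per pixel from the Stokes images; a minimal sketch on a toy 2x2 image:

```python
from math import hypot

def dolp(s0, s1, s2):
    """Per-pixel degree of linear polarization: sqrt(S1^2 + S2^2) / S0."""
    return [[hypot(a, b) / i if i > 0 else 0.0
             for i, a, b in zip(r0, r1, r2)]
            for r0, r1, r2 in zip(s0, s1, s2)]

S0 = [[10.0, 10.0], [10.0, 10.0]]   # conventional thermal intensity image
S1 = [[3.0, 0.0], [0.0, 4.0]]       # Stokes linear-polarization components
S2 = [[4.0, 0.0], [0.0, 3.0]]
img = dolp(S0, S1, S2)
```

Pixels with no linear polarization (S1 = S2 = 0) map to 0, so the DoLP image highlights exactly the surface-orientation detail that the intensity image S0 washes out.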
A Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.
2018-04-01
Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and joint extraction of this information is one of the most important approaches to hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed, which extracts the spectral-spatial information of hyperspectral images effectively. The proposed model not only learns sufficient knowledge from a limited number of samples but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Though CNNs have shown robustness to distortion, they cannot extract features at different scales through the traditional pooling layer, which has only one pooling window size. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
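Spatial pyramid pooling fixes the output length regardless of input size by max-pooling over grids of several resolutions. A minimal 2-D, single-channel sketch of the idea (the paper applies it inside three-dimensional convolutional filters):

```python
def spatial_pyramid_pool(fmap, levels=(1, 2)):
    """Max-pool a 2-D feature map into an n x n grid per pyramid level and
    concatenate the results, yielding a fixed-length vector regardless of
    the input's spatial size."""
    h, w = len(fmap), len(fmap[0])
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                r0, r1 = i * h // n, (i + 1) * h // n
                c0, c1 = j * w // n, (j + 1) * w // n
                out.append(max(fmap[r][c] for r in range(r0, r1)
                                          for c in range(c0, c1)))
    return out

# Any spatial size maps to the same 1 + 4 = 5 pooled values per channel.
v_small = spatial_pyramid_pool([[1, 2], [3, 4]])
v_large = spatial_pyramid_pool([[0] * 6 for _ in range(6)])
```

Because the output length depends only on the pyramid levels, the fully connected classifier that follows can accept inputs of varying spatial extent.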
Obtaining environmental measures to facilitate vertebrate habitat modeling
Karl, J.W.; Wright, N.M.; Heglund, P.J.; Scott, J.M.
1999-01-01
Published literature generally lacks habitat information needed to adequately model the habitats of most wildlife species at large scales (>1:100,000). We searched in primary and secondary literature for occurrence of several potentially useful habitat measures for 20 species of interest to the Idaho Department of Fish and Game. We found adequate information for modeling only the habitats of certain game species and species of special interest. We suggest that many more researchers could collect simple habitat information regarding vegetation composition and structure, topographic features, soils, temperature, and distance to special landscape features such that current research expenses would not be increased significantly. We recommend that habitat data be consistently reported in peer-reviewed literature or deposited into a central data repository. This will not only help fill the gaps in our current knowledge of wildlife but also place it in a format that is readily accessible by the scientific community.
NASA Technical Reports Server (NTRS)
Clark, Roger N.; Swayze, Gregg A.; Gallagher, Andrea
1992-01-01
The sedimentary sections exposed in the Canyonlands and Arches National Parks region of Utah (generally referred to as 'Canyonlands') consist of sandstones, shales, limestones, and conglomerates. Reflectance spectra of weathered surfaces of rocks from these areas show two components: (1) variations in spectrally detectable mineralogy, and (2) variations in the relative ratios of the absorption bands between minerals. Both types of information can be used together to map each major lithology, and the Clark spectral-feature mapping algorithm is applied to this task.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendez Cruz, Carmen Margarita; Rochau, Gary E.; Middleton, Bobby
Sandia National Laboratories and General Atomics are pleased to respond to the Advanced Research Projects Agency-Energy (ARPA-E)'s request for information on innovative developments that may overcome current reactor-technology limitations. The RFI is particularly interested in innovations that enable ultra-safe and secure modular nuclear energy systems. Our response addresses the specific features for reactor designs called out in the RFI, including a brief assessment of the current state of the technologies that would enable each feature and the methods by which they could best be incorporated into a reactor design.
Predicting Protein-Protein Interactions by Combining Various Sequence-Derived.
Zhao, Xiao-Wei; Ma, Zhi-Qiang; Yin, Ming-Hao
2011-09-20
Knowledge of protein-protein interactions (PPIs) plays an important role in constructing protein interaction networks and understanding the general machinery of biological systems. In this study, a new method is proposed to predict PPIs using a comprehensive set of 930 features based only on sequence information; these features measure, from different aspects, the interactions between residues a certain distance apart in the protein sequences. To achieve better performance, principal component analysis (PCA) is first employed to obtain an optimized feature subset. Then, the resulting 67-dimensional feature vectors are fed to a Support Vector Machine (SVM). Experimental results on Drosophila melanogaster and Helicobacter pylori datasets show that our method is very promising for predicting PPIs and may at least serve as a useful supplementary tool to existing methods.
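The 930 sequence-derived features are not specified here, but "interactions between residues a certain distance apart" is characteristic of auto-covariance-style descriptors. A hypothetical sketch of that idea, with a toy per-residue property scale (not the paper's actual feature set):

```python
def autocovariance_features(seq, prop, max_lag=3):
    """Sequence-derived features: auto-covariance of a per-residue numeric
    property at lags 1..max_lag, capturing correlation between residues a
    fixed distance apart along the sequence."""
    vals = [prop[r] for r in seq]
    n = len(vals)
    mean = sum(vals) / n
    return [sum((vals[i] - mean) * (vals[i + d] - mean)
                for i in range(n - d)) / (n - d)
            for d in range(1, max_lag + 1)]

# Toy hydrophobicity-like scale for four residues (illustrative values).
scale = {"A": 1.8, "G": -0.4, "L": 3.8, "K": -3.9}
feats = autocovariance_features("AGLKAGLKAGLK", scale)
```

Repeating this over several property scales and lags yields a large fixed-length feature vector, which is the kind of input a PCA-plus-SVM pipeline would then reduce and classify.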
Chen, Jia-Mei; Li, Yan; Xu, Jun; Gong, Lei; Wang, Lin-Wei; Liu, Wen-Lou; Liu, Juan
2017-03-01
With the advance of digital pathology, image analysis has begun to show its advantages in the information analysis of hematoxylin and eosin histopathology images. Generally, histological features in hematoxylin and eosin images are measured to evaluate tumor grade and prognosis for breast cancer. This review summarizes recent work in the image analysis of hematoxylin and eosin histopathology images for breast cancer prognosis. First, prognostic factors for breast cancer based on hematoxylin and eosin histopathology images are summarized. Then, the usual procedures of image analysis for breast cancer prognosis are systematically reviewed, including image acquisition, image preprocessing, image detection and segmentation, and feature extraction. Finally, the prognostic value of image features and of image feature-based prognostic models is evaluated. Moreover, we discuss issues with current analyses and some directions for future research.
Spiders: water-driven erosive structures in the southern hemisphere of Mars.
Prieto-Ballesteros, Olga; Fernández-Remolar, David C; Rodríguez-Manfredi, José Antonio; Selsis, Franck; Manrubia, Susanna C
2006-08-01
Recent data from space missions reveal that there are ongoing climatic changes and erosive processes that continuously modify surface features of Mars. We have investigated the seasonal dynamics of a number of morphological features located at Inca City, a representative area at high southern latitude that has undergone seasonal processes. By integrating visual information from the Mars Orbiter Camera on board the Mars Global Surveyor and climatic cycles from a Mars' General Circulation Model, and considering the recently reported evidence for the presence of water-ice and aqueous precipitates on Mars, we propose that a number of the erosive features identified in Inca City, among them spiders, result from the seasonal melting of aqueous salty solutions.
GED Items. The Newsletter of the GED Testing Service, 1998.
ERIC Educational Resources Information Center
American Council on Education, Washington, DC. General Educational Development Testing Service.
This document consists of the five issues of the newsletter of the General Educational Development (GED) Testing Service: January/February, March/April, May/June, September/October, and November/December. Each issue contains information of interest to users of the GED examinations. The feature article for the January/February issue is "Next…
ERIC Educational Resources Information Center
Cole, Clair R.; Smith, Christopher A.
1990-01-01
Information about the biosynthesis of the carbohydrate portions or glycans of glycoproteins is presented. The teaching of glycosylation can be used to develop and emphasize many general aspects of biosynthesis, in addition to explaining specific biochemical and molecular biological features associated with producing the oligosaccharide portions of…
Evaluation of Agricultural Accounting Software. Improved Decision Making. Third Edition.
ERIC Educational Resources Information Center
Lovell, Ashley C., Comp.
Following a discussion of the evaluation criteria for choosing accounting software, this guide contains reviews of 27 accounting software programs that could be used by farm or ranch business managers. The information in the reviews was provided by the software vendors and covers the following points for each software package: general features,…
Integration of SAR and DEM data: Geometrical considerations
NASA Technical Reports Server (NTRS)
Kropatsch, Walter G.
1991-01-01
General principles for integrating data from different sources are derived from experience registering SAR images with digital elevation model (DEM) data. The integration consists of establishing geometrical relations between the data sets that allow us to accumulate information from both data sets for any given object point (e.g., elevation, slope, backscatter of ground cover, etc.). Since the geometries of the two data sets are completely different, they cannot be compared on a pixel-by-pixel basis. The presented approach detects instances of higher-level features in both data sets independently and performs the matching at the high level. Besides the efficiency of this general strategy, it further allows the integration of additional knowledge sources: world knowledge and sensor characteristics are also useful sources of information. The SAR features layover and shadow can be detected easily in SAR images. An analytical method to find such regions in a DEM additionally requires the parameters of the flight path of the SAR sensor and the range projection model. The generation of the SAR layover and shadow maps is summarized, and new extensions to this method are proposed.
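The layover/shadow computation from a DEM can be sketched in one range dimension: a slope facing the sensor more steeply than the look angle folds into layover, while a back slope steeper than (90° - look angle) falls into shadow. A simplified illustration under flat-Earth, single-profile assumptions (the actual method needs the full flight path and range projection model):

```python
from math import atan, degrees

def sar_layover_shadow(elev, dx, look_angle_deg):
    """Flag layover/shadow along a range line of a DEM profile, sensor to
    the left looking right. A slope toward the sensor steeper than the look
    angle causes layover; a back slope steeper than (90 - look angle) deg
    is shadowed."""
    flags = []
    for i in range(len(elev) - 1):
        slope = degrees(atan((elev[i + 1] - elev[i]) / dx))
        if slope > look_angle_deg:               # facing slope too steep
            flags.append("layover")
        elif -slope > 90.0 - look_angle_deg:     # back slope hidden
            flags.append("shadow")
        else:
            flags.append("ok")
    return flags

profile = [0.0, 0.0, 60.0, 0.0, 0.0]   # a steep 60 m peak, 30 m posting
flags = sar_layover_shadow(profile, dx=30.0, look_angle_deg=35.0)
```

Regions flagged this way in the DEM can then be matched against the layover and shadow regions detected in the SAR image, giving the high-level features on which the registration operates.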
Kim, Ben Yb; Sharafoddini, Anis; Tran, Nam; Wen, Emily Y; Lee, Joon
2018-03-28
General consumers can now easily access drug information and quickly check for potential drug-drug interactions (PDDIs) through mobile health (mHealth) apps. With an aging population in Canada, more people have chronic diseases and comorbidities, leading to increasing numbers of medications. The use of mHealth apps for checking PDDIs can be helpful in ensuring patient safety and empowerment. The aim of this study was to review the characteristics and quality of publicly available mHealth apps that check for PDDIs. Apple App Store and Google Play were searched to identify apps with PDDI functionality. The apps' general and feature characteristics were extracted. The Mobile App Rating Scale (MARS) was used to assess the quality. A total of 23 apps were included in the review: 12 from Apple App Store and 11 from Google Play. Only 5 of these were paid apps, with an average price of $7.19 CAD. The mean MARS score was 3.23 out of 5 (interquartile range 1.34). The mean MARS scores for the apps from Google Play and Apple App Store were not statistically different (P=.84). The information dimension was associated with the highest score (3.63), whereas the engagement dimension resulted in the lowest score (2.75). The total number of features per app, average rating, and price were significantly associated with the total MARS score. Some apps provided accurate and comprehensive information about potential adverse drug effects from PDDIs. Given the potentially severe consequences of incorrect drug information, there is a need for oversight to eliminate low-quality and potentially harmful apps. Because managing PDDIs is complex in the absence of complete information, secondary features such as medication reminders, refill reminders, medication history tracking, and pill identification could help enhance the effectiveness of PDDI apps. ©Ben YB Kim, Anis Sharafoddini, Nam Tran, Emily Y Wen, Joon Lee. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 28.03.2018.
Kim, Ben YB; Sharafoddini, Anis; Tran, Nam; Wen, Emily Y
2018-01-01
Botsis, T; Woo, E J; Ball, R
2013-01-01
We previously demonstrated that a general-purpose text mining system, the Vaccine adverse event Text Mining (VaeTM) system, could be used to automatically classify reports of anaphylaxis for post-marketing safety surveillance of vaccines. Our objective here was to evaluate the ability of VaeTM to classify reports to the Vaccine Adverse Event Reporting System (VAERS) of possible Guillain-Barré Syndrome (GBS). We used VaeTM to extract the key diagnostic features from the text of reports in VAERS. We then applied the Brighton Collaboration (BC) case definition for GBS and an information retrieval strategy (the vector space model) to quantify the specific information included in the key features extracted by VaeTM, and compared it with the encoded information already stored in VAERS as Medical Dictionary for Regulatory Activities (MedDRA) Preferred Terms (PTs). We also evaluated the contribution of the primary (diagnosis and cause of death) and secondary (second-level diagnosis and symptoms) diagnostic VaeTM-based features to the total VaeTM-based information. MedDRA captured more information and better supported the classification of reports for GBS than VaeTM (AUC: 0.904 vs. 0.777); the lower performance of VaeTM is likely due to its failure to extract the specific laboratory results included in the BC criteria for GBS. On the other hand, the VaeTM-based classification exhibited greater specificity than the MedDRA-based approach (94.96% vs. 87.65%). Most of the VaeTM-based information was contained in the secondary diagnostic features. For GBS, clinical signs and symptoms alone are not sufficient to match MedDRA coding for purposes of case classification, but they are preferable when specificity is the priority.
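The vector space model used to quantify extracted information can be sketched as a bag-of-words cosine similarity. This is a minimal illustration, not the VaeTM implementation; the two term lists below are hypothetical stand-ins for VaeTM-extracted features and MedDRA Preferred Terms.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two bags of terms (vector space model)."""
    va, vb = Counter(doc_a), Counter(doc_b)
    dot = sum(va[t] * vb[t] for t in va)
    norm_a = sqrt(sum(c * c for c in va.values()))
    norm_b = sqrt(sum(c * c for c in vb.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical example terms (illustrative only, not from VAERS):
vaetm_terms = ["weakness", "areflexia", "ascending", "paralysis"]
meddra_pts = ["guillain-barre syndrome", "muscular weakness", "areflexia"]
score = cosine_similarity(vaetm_terms, meddra_pts)
```

A higher score indicates greater overlap between the free-text features and the coded terms; comparing such scores across reports is one simple way to quantify how much of the coded information the extraction step recovers.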
Generalized Uncertainty Principle and Parikh-Wilczek Tunneling
NASA Astrophysics Data System (ADS)
Mehdipour, S. Hamid
We investigate the modifications of the Hawking radiation by the Generalized Uncertainty Principle (GUP) and the tunneling process. By using the GUP-corrected de Broglie wavelength, the squeezing of the fundamental momentum cell, and consequently a GUP-corrected energy, we find nonthermal effects which lead to a nonzero statistical correlation function between the tunneling probabilities of two massive particles with different energies. The recovery of part of the information from the black hole radiation therefore becomes feasible. From another point of view, the inclusion of quantum-gravity effects through the GUP expression can halt the evaporation process, so that a stable black hole remnant is left behind, containing the remaining part of the black hole information content. These features of the Planck-scale corrections may therefore solve the information problem in black hole evaporation.
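For readers unfamiliar with the GUP, a commonly used quadratic form (the paper's exact parametrization may differ) modifies the Heisenberg relation, and with it the de Broglie wavelength:

```latex
% Quadratic GUP: \beta is a dimensionless parameter, \ell_P the Planck length
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
  \left( 1 + \beta \, \ell_P^{2} \, \frac{\Delta p^{2}}{\hbar^{2}} \right)

% GUP-corrected de Broglie wavelength, to first order in \beta
\lambda \;\simeq\; \frac{2\pi\hbar}{p}
  \left( 1 + \beta \, \ell_P^{2} \, \frac{p^{2}}{\hbar^{2}} \right)
```

The momentum-dependent correction is what makes the emission spectrum nonthermal, allowing correlations between successively emitted quanta to carry information out of the black hole.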
Citation Sentiment Analysis in Clinical Trial Papers
Xu, Jun; Zhang, Yaoyun; Wu, Yonghui; Wang, Jingqi; Dong, Xiao; Xu, Hua
2015-01-01
In scientific writing, positive credit and negative criticism can often be seen in the text mentioning cited papers, providing useful information about whether a study can be reproduced. In this study, we focus on citation sentiment analysis, which aims to determine the sentiment polarity that the citation context carries towards the cited paper. A citation sentiment corpus was first annotated on clinical trial papers. The effectiveness of n-gram, sentiment lexicon, and problem-specific structure features for citation sentiment analysis was then examined using the annotated corpus. The combined features from word n-grams, sentiment lexicons, and structure information achieved the highest micro F-score of 0.860 and macro F-score of 0.719, indicating that it is feasible to use machine learning methods for citation sentiment analysis in biomedical publications. A comprehensive comparison between citation sentiment analysis of clinical trial papers and of other, general domains was conducted, which additionally highlights the unique challenges within this domain. PMID:26958274
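The two simplest feature families named above, word n-grams and sentiment lexicon counts, can be sketched as follows. The lexicons and the example sentence are hypothetical; the paper's actual lexicons and structure features are richer.

```python
from collections import Counter

def ngram_features(tokens, n_max=2):
    """Word n-gram counts (here unigrams and bigrams) for a citation context."""
    feats = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            feats[" ".join(tokens[i:i + n])] += 1
    return feats

# Hypothetical sentiment lexicons (illustrative only):
POSITIVE = {"robust", "effective", "confirmed"}
NEGATIVE = {"failed", "inconsistent", "flawed"}

def lexicon_features(tokens):
    """Counts of positive and negative lexicon hits in the context."""
    return {
        "pos_count": sum(t in POSITIVE for t in tokens),
        "neg_count": sum(t in NEGATIVE for t in tokens),
    }

citation = "their trial failed to replicate the reported effect".split()
feats = ngram_features(citation)
lex = lexicon_features(citation)
```

In practice these sparse feature dictionaries are vectorized and fed to a classifier such as an SVM, with the structure features appended as additional dimensions.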
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1980-01-01
A generalized three-dimensional perspective software capability was developed within the framework of a low-cost, computer-oriented, geographically based information system using the Earth Resources Laboratory Applications Software (ELAS) operating subsystem. This perspective software capability, developed primarily to support data display requirements at the NASA/NSTL Earth Resources Laboratory, provides a means of displaying three-dimensional feature-space object data in two-dimensional picture-plane coordinates and makes it possible to overlay different types of information on perspective drawings to better understand the relationships of physical features. An example topographic database is constructed and used as the basic input to the plotting module. Examples are shown that illustrate oblique viewing angles conveying the spatial concepts and relationships represented by the topographic data planes.
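The core of any such capability is projecting 3-D coordinates onto a 2-D picture plane. A minimal pinhole-camera sketch, not the ELAS code; the viewing geometry is simplified to an eye looking down the +z axis with the picture plane a fixed distance in front of it:

```python
def perspective_project(point, eye, screen_dist):
    """Project a 3-D point onto a 2-D picture plane.

    Simple pinhole model: the eye sits at `eye` looking down the +z axis,
    and the picture plane is `screen_dist` units in front of the eye.
    """
    x, y, z = (p - e for p, e in zip(point, eye))
    if z <= 0:
        raise ValueError("point is behind the viewer")
    scale = screen_dist / z  # farther points shrink toward the center
    return (x * scale, y * scale)

# A point twice as far away projects at half the offset:
near = perspective_project((10.0, 5.0, 100.0), (0.0, 0.0, 0.0), 50.0)
far = perspective_project((10.0, 5.0, 200.0), (0.0, 0.0, 0.0), 50.0)
```

Oblique views are obtained by rotating the terrain points into the eye's coordinate frame before applying this projection, which is where the topographic data planes enter.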
Rezlescu, Constantin; Duchaine, Brad; Olivola, Christopher Y; Chater, Nick
2012-01-01
Many human interactions are built on trust, so widespread confidence in first impressions generally favors individuals with trustworthy-looking appearances. However, few studies have explicitly examined: 1) the contribution of unfakeable facial features to trust-based decisions, and 2) how these cues are integrated with information about past behavior. Using highly controlled stimuli and an improved experimental procedure, we show that unfakeable facial features associated with the appearance of trustworthiness attract higher investments in trust games. The facial trustworthiness premium is large for decisions based solely on faces, with trustworthy identities attracting 42% more money (Study 1), and remains significant though reduced to 6% when reputational information is also available (Study 2). The face trustworthiness premium persists with real (rather than virtual) currency and when higher payoffs are at stake (Study 3). Our results demonstrate that cooperation may be affected not only by controllable appearance cues (e.g., clothing, facial expressions) as shown previously, but also by features that are impossible to mimic (e.g., individual facial structure). This unfakeable face trustworthiness effect is not limited to the rare situations where people lack any information about their partners, but survives in richer environments where relevant details about partner past behavior are available.
Fast multi-scale feature fusion for ECG heartbeat classification
NASA Astrophysics Data System (ADS)
Ai, Danni; Yang, Jian; Wang, Zeyu; Fan, Jingfan; Ai, Changbin; Wang, Yongtian
2015-12-01
Electrocardiogram (ECG) monitoring records the electrical activity of the heart as signals of small amplitude and short duration; as a result, hidden information present in ECG data is difficult to discern. However, this concealed information can be used to detect abnormalities. In our study, a fast feature-fusion method for ECG heartbeat classification based on multi-linear subspace learning is proposed. The method consists of four stages. First, baseline drift and high frequencies are removed to segment heartbeats. Second, wavelet-packet decomposition, an extension of wavelets, is conducted to extract features; it provides good time and frequency resolution simultaneously. Third, the decomposed coefficients are arranged as a two-way tensor, on which feature fusion is directly implemented with generalized N-dimensional ICA (GND-ICA). In this method, the co-relationship among different data sources is considered, the drawbacks of high dimensionality are avoided, and computation is reduced compared with linear subspace-learning methods such as PCA. Finally, a support vector machine (SVM) is used as the classifier for heartbeat classification. In this study, ECG records are obtained from the MIT-BIH arrhythmia database. Four main heartbeat classes are used to examine the proposed algorithm. Based on five measures (sensitivity, positive predictivity, accuracy, average accuracy, and a t-test), we conclude that a GND-ICA-based strategy provides enhanced ECG heartbeat classification. Furthermore, large numbers of redundant features are eliminated and classification time is reduced.
NASA Astrophysics Data System (ADS)
Takayama, Masaya
This author reports the discussion and resolutions from the annual meeting of the Special Libraries Association in New York City in the U.S. and the General Conference of the International Federation of Library Associations and Institutions in Paris, France, in 1989. The author also visited American libraries after the SLA meeting, and French and British libraries after the IFLA Conference. Based on these library tours, the author concludes that Japanese librarians and information specialists need to understand the requirements for applying new information network technologies to the routine work of information management in Japan.
Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition
Munoz-Organero, Mario; Ruiz-Blazquez, Ramona
2017-01-01
Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even in the case of using different people executing a different sequence of movements and using different hardware. PMID:28208736
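The time-elasticity idea, that the same movement executed at different speeds yields stretched or compressed versions of one underlying series, can be sketched as linear-interpolation resampling. The acceleration trace below is a made-up example, not data from the paper, and the paper's generative model is considerably richer:

```python
def time_stretch(series, factor):
    """Resample a time series to a new length by linear interpolation,
    simulating the same movement executed faster or slower."""
    n = len(series)
    m = max(2, round(n * factor))
    out = []
    for j in range(m):
        pos = j * (n - 1) / (m - 1)   # fractional index into the original
        i = int(pos)
        frac = pos - i
        if i + 1 < n:
            out.append(series[i] * (1 - frac) + series[i + 1] * frac)
        else:
            out.append(series[-1])
    return out

accel = [0.0, 1.0, 0.0, -1.0, 0.0]   # toy acceleration trace
slow = time_stretch(accel, 2.0)       # same movement, executed more slowly
```

Generating many such elastic variants of a seed recording is one way to enlarge the training set for the stacked auto-encoders without collecting more sensor data.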
Clinical Computing in General Dentistry
Schleyer, Titus K.L.; Thyvalikakath, Thankam P.; Spallek, Heiko; Torres-Urquidy, Miguel H.; Hernandez, Pedro; Yuhaniak, Jeannie
2006-01-01
Objective: Measure the adoption and utilization of, opinions about, and attitudes toward clinical computing among general dentists in the United States. Design: Telephone survey of a random sample of 256 general dentists in active practice in the United States. Measurements: A 39-item telephone interview measuring practice characteristics and information technology infrastructure; clinical information storage; data entry and access; attitudes toward and opinions about clinical computing (features of practice management systems, barriers, advantages, disadvantages, and potential improvements); clinical Internet use; and attitudes toward the National Health Information Infrastructure. Results: The authors successfully screened 1,039 of 1,159 randomly sampled U.S. general dentists in active practice (89.6% response rate). Two hundred fifty-six (24.6%) respondents had computers at chairside and thus were eligible for this study. The authors successfully interviewed 102 respondents (39.8%). Clinical information associated with administration and billing, such as appointments and treatment plans, was stored predominantly on the computer; other information, such as the medical history and progress notes, primarily resided on paper. Nineteen respondents, or 1.8% of all general dentists, were completely paperless. Auxiliary personnel, such as dental assistants and hygienists, entered most data. Respondents adopted clinical computing to improve office efficiency and operations, support diagnosis and treatment, and enhance patient communication and perception. Barriers included insufficient operational reliability, program limitations, a steep learning curve, cost, and infection control issues. Conclusion: Clinical computing is being increasingly adopted in general dentistry. However, future research must address usefulness and ease of use, workflow support, infection control, integration, and implementation issues. PMID:16501177
Protecting genomic sequence anonymity with generalization lattices.
Malin, B A
2005-01-01
Current genomic privacy technologies assume the identity of genomic sequence data is protected if personal information, such as demographics, is obscured, removed, or encrypted. While demographic features can directly compromise an individual's identity, recent research demonstrates such protections are insufficient because sequence data itself is susceptible to re-identification. To counteract this problem, we introduce an algorithm for anonymizing a collection of person-specific DNA sequences. The technique is termed DNA lattice anonymization (DNALA), and is based upon the formal privacy protection schema of k-anonymity. Under this model, it is impossible to observe or learn features that distinguish one genetic sequence from k-1 other entries in a collection. To maximize the information retained in protected sequences, we incorporate a concept generalization lattice to learn the distance between two residues in a single nucleotide region. The lattice provides the most similar generalized concept for two residues (e.g. adenine and guanine are both purines). The method is tested and evaluated with several publicly available human population datasets ranging in size from 30 to 400 sequences. Our findings imply the anonymization schema is feasible for the protection of sequence privacy. The DNALA method is the first computational disclosure control technique for general DNA sequences. Given the computational nature of the method, guarantees of anonymity can be formally proven. There is room for improvement and validation, though this research provides the groundwork from which future researchers can construct genomics anonymization schemas tailored to specific data-sharing scenarios.
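The concept generalization lattice can be sketched directly in code: each residue generalizes to purine (R) or pyrimidine (Y) and then to the fully general N, and two differing residues are replaced by their most specific common ancestor. A minimal sketch of the idea; DNALA's actual lattice and distance computation are richer:

```python
# IUPAC-style generalization hierarchy: A,G -> R (purine); C,T -> Y
# (pyrimidine); R,Y -> N (any nucleotide).
PARENT = {"A": "R", "G": "R", "C": "Y", "T": "Y", "R": "N", "Y": "N", "N": None}

def ancestors(residue):
    """The residue followed by its chain of generalizations up to N."""
    chain = [residue]
    while PARENT[chain[-1]] is not None:
        chain.append(PARENT[chain[-1]])
    return chain

def generalize(a, b):
    """Most specific lattice concept covering both residues."""
    anc_a = ancestors(a)
    for concept in ancestors(b):
        if concept in anc_a:
            return concept
    return "N"

pair = generalize("A", "G")  # both purines, so they generalize to R
```

The height of the join in the lattice gives a natural distance between residues: identical bases cost nothing, same-family substitutions (R or Y) cost less than cross-family ones (N), which is what lets the method minimize information loss while enforcing k-anonymity.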
Hovenga, Evelyn J S; Grain, Heather
2013-01-01
Health information provides the foundation for all decision making in healthcare, whether clinical at the bedside or at a national government level. This information is generally collected as part of systems which support administrative or clinical workflow and practice. This chapter describes the many and varied features of systems such as electronic health records (EHRs), how they fit with health information systems, and how they collectively manage information flow. Systems engineering methods and tools are described, together with their use, as suited to the health industry. The focus is on the need for suitable system architectures and semantic interoperability. These concepts and their relevance to the health industry are explained. The relationship with, and requirements for, appropriate data governance in these systems are also considered.
Grant, Lenny; Hausman, Bernice L; Cashion, Margaret; Lucchesi, Nicholas; Patel, Kelsey; Roberts, Jonathan
2015-05-29
Current concerns about vaccination resistance often cite the Internet as a source of vaccine controversy. Most academic studies of vaccine resistance online use quantitative methods to describe misinformation on vaccine-skeptical websites. Findings from these studies are useful for categorizing the generic features of these websites, but they do not provide insights into why these websites successfully persuade their viewers. To date, there have been few attempts to understand, qualitatively, the persuasive features of provaccine or vaccine-skeptical websites. The purpose of this research was to examine the persuasive features of provaccine and vaccine-skeptical websites. The qualitative analysis was conducted to generate hypotheses concerning what features of these websites are persuasive to people seeking information about vaccination and vaccine-related practices. This study employed a fully qualitative case study methodology that used the anthropological method of thick description to detail and carefully review the rhetorical features of 1 provaccine government website, 1 provaccine hospital website, 1 vaccine-skeptical information website focused on general vaccine safety, and 1 vaccine-skeptical website focused on a specific vaccine. The data gathered were organized into 5 domains: website ownership, visual and textual content, user experience, hyperlinking, and social interactivity. The study found that the 2 provaccine websites analyzed functioned as encyclopedias of vaccine information. Both of the websites had relatively small digital ecologies because they only linked to government websites or websites that endorsed vaccination and evidence-based medicine. Neither of these websites offered visitors interactive features or made extensive use of the affordances of Web 2.0. The study also found that the 2 vaccine-skeptical websites had larger digital ecologies because they linked to a variety of vaccine-related websites, including government websites. 
They leveraged the affordances of Web 2.0 with their interactive features and digital media. By employing a rhetorical framework, this study found that the provaccine websites analyzed concentrate on the accurate transmission of evidence-based scientific research about vaccines and government-endorsed vaccination-related practices, whereas the vaccine-skeptical websites focus on creating communities of people affected by vaccines and vaccine-related practices. From this personal framework, these websites then challenge the information presented in scientific literature and government documents. At the same time, the vaccine-skeptical websites in this study are repositories of vaccine information and vaccination-related resources. Future studies on vaccination and the Internet should take into consideration the rhetorical features of provaccine and vaccine-skeptical websites and further investigate the influence of Web 2.0 community-building features on people seeking information about vaccine-related practices.
NASA Technical Reports Server (NTRS)
Chamberlain, Jim; Latorella, Kara
2003-01-01
This viewgraph presentation provides information on an airborne experiment designed to test the decision making of pilots receiving different sources of meteorological data. The presentation covers the equipment used in the COnvective Weather Sources (CoWS) Experiment, including the information system and display devices available to some of the subjects. It also describes the experiment, which featured teams of general aviation pilots who were onboard but did not actually fly the aircraft used in the experiment. The presentation includes the results of a survey of the subjects' confidence.
Tools reference manual for a Requirements Specification Language (RSL), version 2.0
NASA Technical Reports Server (NTRS)
Fisher, Gene L.; Cohen, Gerald C.
1993-01-01
This report describes a general-purpose Requirements Specification Language, RSL. The purpose of RSL is to specify precisely the external structure of a mechanized system and to define requirements that the system must meet. A system can be comprised of a mixture of hardware, software, and human processing elements. RSL is a hybrid of features found in several popular requirements specification languages, such as SADT (Structured Analysis and Design Technique), PSL (Problem Statement Language), and RMF (Requirements Modeling Framework). While languages such as these have useful features for structuring a specification, they generally lack formality. To overcome the deficiencies of informal requirements languages, RSL has constructs for formal mathematical specification. These constructs are similar to those found in formal specification languages such as EHDM (Enhanced Hierarchical Development Methodology), Larch, and OBJ3.
Reddy, James E.; Kappel, William M.
2010-01-01
Existing hydrogeologic and geospatial data useful for the assessment of focused recharge to the carbonate-rock aquifer in the central part of Genesee County, NY, were compiled from numerous local, State, and Federal agency sources. Data sources utilized in this pilot study include available geospatial datasets from Federal and State agencies, interviews with local highway departments and the Genesee County Soil and Water Conservation District, and an initial assessment of karst features through the analysis of ortho-photographs, with minimal field verification. The compiled information is presented in a series of county-wide and quadrangle maps. The county-wide maps present generalized hydrogeologic conditions including distribution of geologic units, major faults, and karst features, and bedrock-surface and water-table configurations. Ten sets of quadrangle maps of the area that overlies the carbonate-rock aquifer present more detailed and additional information including distribution of bedrock outcrops, thin and (or) permeable soils, and karst features such as sinkholes and swallets. Water-resource managers can utilize the information summarized in this report as a guide to their assessment of focused recharge to, and the potential for surface contaminants to reach the carbonate-rock aquifer.
Enabling OpenID Authentication for VO-integrated Portals
NASA Astrophysics Data System (ADS)
Plante, R.; Yekkirala, V.; Baker, W.
2012-09-01
To support interoperating services that share proprietary data and other user-specific information, the VAO Project provides login services for browser-based portals built on the open standard, OpenID. To help portal developers take advantage of this service, we have developed a downloadable toolkit for integrating OpenID single sign-on support into any portal. This toolkit provides APIs in a few languages commonly used on the server-side as well as a command-line version for use in any language. In addition to describing how to use this toolkit, we also discuss the general VAO framework for single sign-on. While a portal may, if it wishes, support any OpenID provider, the VAO service provides a few extra features to support VO interoperability. This includes a portal's ability to retrieve (with the user's permission) an X.509 certificate representing the authenticated user so that the portal can access other restricted services on the user's behalf. Other standard features of OpenID allow portals to request other information about the user; this feature will be used in the future for sharing information about a user's group membership to enable sharing within a group of collaborating scientists.
Portraying Real Science in Science Communication
ERIC Educational Resources Information Center
van Dijk, Esther M.
2011-01-01
In both formal and informal settings, not only science but also views on the nature of science are communicated. Although there probably is no singular nature shared by all fields of science, in the field of science education it is commonly assumed that on a certain level of generality there is a consensus on many features of science. In this…
A Comparison of Keyboarding Software for the Elementary Grades. A Quarterly Report.
ERIC Educational Resources Information Center
Nolf, Kathleen; Weaver, Dave
This paper provides generalizations and ideas on what to look for when previewing software products designed for teaching or improving the keyboarding skills of elementary school students, a list of nine products that the MicroSIFT (Microcomputer Software and Information for Teachers) staff recommends for preview, and a table of features comparing…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-24
..., except federal holidays. FOR FURTHER INFORMATION CONTACT: Joe Jacobsen, FAA, Airplane and Flight Crew... protection features include limitations on angle-of-attack, normal load factor, bank angle, pitch angle, and... characteristics, and High angle-of-attack. Section 25.143, however, does not adequately ensure that the novel...
The Effects of Cancer and Cancer Treatment: What Teachers Should Know.
ERIC Educational Resources Information Center
Rich, Marc D.
High school biology textbooks feature little coverage of cancer, so that college students are not generally informed about the condition. At the same time, there has been a dramatic increase in the number of young people who survive cancer, which means that college instructors are likely to have students who have or have had cancer. Instructors…
A System for Drawing Synthetic Images of Forested Landscapes
Timothy P. McDonald
1997-01-01
A software package for drawing images of forested landscapes was developed. Programs included in the system convert topographic and stand polygon information output from a GIS into a form that can be read by a general-purpose ray-tracing renderer. Other programs generate definitions for surface features, mainly trees but ground surface textural properties as well. The...
ERIC Educational Resources Information Center
Sung, Y.-T.; Hou, H.-T.; Liu, C.-K.; Chang, K.-E.
2010-01-01
Mobile devices have been increasingly utilized in informal learning because of their high degree of portability; mobile guide systems (or electronic guidebooks) have also been adopted in museum learning, including those that combine learning strategies and the general audio-visual guide systems. To gain a deeper understanding of the features and…
Kinematic parameters of signed verbs.
Malaia, Evie; Wilbur, Ronnie B; Milkovic, Marina
2013-10-01
Sign language users recruit physical properties of visual motion to convey linguistic information. Research on American Sign Language (ASL) indicates that signers systematically use kinematic features (e.g., velocity, deceleration) of dominant hand motion for distinguishing specific semantic properties of verb classes in production (Malaia & Wilbur, 2012a) and process these distinctions as part of the phonological structure of these verb classes in comprehension (Malaia, Ranaweera, Wilbur, & Talavage, 2012). These studies are driven by the event visibility hypothesis of Wilbur (2003), who proposed that such use of kinematic features should be universal to sign languages (SLs) through the grammaticalization of physics and geometry for linguistic purposes. In a prior motion capture study, Malaia and Wilbur (2012a) lent support to the event visibility hypothesis in ASL, but there have been no quantitative data from other SLs to test its generalization to other languages. The authors investigated the kinematic parameters of predicates in Croatian Sign Language (Hrvatskom Znakovnom Jeziku [HZJ]). Kinematic features of verb signs were affected both by the event structure of the predicate (semantics) and by phrase position within the sentence (prosody). The data demonstrate that kinematic features of motion in HZJ verb signs are recruited to convey morphological and prosodic information. This is the first crosslinguistic motion capture confirmation that specific kinematic properties of articulator motion are grammaticalized in other SLs to express linguistic features.
Zhang, Jian; Gao, Bo; Chai, Haiting; Ma, Zhiqiang; Yang, Guifu
2016-08-26
DNA-binding proteins (DBPs) play fundamental roles in many biological processes, so developing effective computational tools for identifying DBPs is highly desirable. In this study, we propose an accurate method for the prediction of DBPs. First, we focused on the challenge of improving DBP prediction accuracy using information derived solely from the sequence. Second, we used multiple informative features to encode the protein, including the evolutionary conservation profile, secondary structure motifs, and physicochemical properties. Third, we introduced a novel improved Binary Firefly Algorithm (BFA) to remove redundant or noisy features and to select optimal parameters for the classifier. Our predictor outperformed many state-of-the-art predictors on two benchmark datasets, demonstrating the effectiveness of the method, and its promising performance on a newly compiled independent testing dataset from PDB and a large-scale dataset from UniProt confirmed its good generalization ability. In addition, the BFA developed in this research has great potential for practical applications in optimization, especially in feature selection problems. A user-friendly web server named iDbP (identification of DNA-binding Proteins) was constructed and made available for academic use.
Content and Functionality of Alcohol and Other Drug Websites: Results of an Online Survey
White, Angela; Kavanagh, David; Shandley, Kerrie; Kay-Lambkin, Frances; Proudfoot, Judith; Drennan, Judy; Connor, Jason; Baker, Amanda; Young, Ross
2010-01-01
Background: There is a growing trend for individuals to seek health information from online sources. Alcohol and other drug (AOD) use is a significant health problem worldwide, but access to and use of AOD websites is poorly understood. Objective: To investigate content and functionality preferences for AOD and other health websites. Methods: An anonymous online survey examined general Internet and AOD-specific usage and search behaviors, valued features of AOD and health-related websites (general and interactive website features), indicators of website trustworthiness, valued AOD website tools or functions, and treatment modality preferences. Results: Surveys were obtained from 1214 drug (n = 766) and alcohol website users (n = 448) (mean age 26.2 years, range 16-70). There were no significant differences between the alcohol and drug groups on demographic variables, Internet usage, indicators of website trustworthiness, or preferences for AOD website functionality. A robust website design/navigation, open access, and validated content provision were highly valued by both groups. While attractiveness and pictures or graphics were also valued, high-cost features (videos, animations, games) were minority preferences. Almost half of the respondents in both groups were unable to readily access the information they sought. Alcohol website users placed greater importance than other drug website users on several AOD website tools and functions: online screening tools (χ²(2) = 15.8, P < .001, n = 985); prevention programs (χ²(2) = 27.5, P < .001, n = 981); tracking functions (χ²(2) = 11.5, P = .003, n = 983); self-help treatment programs (χ²(2) = 8.3, P = .02, n = 984); and downloadable fact sheets for friends (χ²(2) = 11.6, P = .003, n = 981) or family (χ²(2) = 12.7, P = .002, n = 983). The most preferred online treatment option for both user groups was an Internet site with email therapist support. Explorations of demographic differences were also performed.
While gender did not affect survey responses, younger respondents were more likely to value interactive and social networking features, whereas downloading of credible information was most highly valued by older respondents. Conclusions: Significant deficiencies in the provision of accessible information on AOD websites were identified, an important problem since information seeking was the most common reason for accessing these websites and may therefore be a key avenue for engaging website users in behaviour change. The few differences between AOD website user groups suggest that both types of websites may share similar features, although alcohol website users may be more readily engaged in screening, prevention and self-help programs and in tracking change, and may value fact sheets more highly. While the sociodemographic differences require replication and clarification, they support the notion that the design and features of AOD websites should target specific audiences to have maximal impact. PMID:21169168
NASA Astrophysics Data System (ADS)
Oppikofer, Thierry; Nordahl, Bobo; Bunkholt, Halvor; Nicolaisen, Magnus; Jarna, Alexandra; Iversen, Sverre; Hermanns, Reginald L.; Böhme, Martina; Yugsi Molina, Freddy X.
2015-11-01
The unstable rock slope database is developed and maintained by the Geological Survey of Norway as part of the systematic mapping of unstable rock slopes in Norway. This mapping aims to detect catastrophic rock slope failures before they occur. More than 250 unstable slopes showing post-glacial deformation have been detected to date. The main aims of the unstable rock slope database are (1) to serve as a national archive for unstable rock slopes in Norway; (2) to support data collection and storage during field mapping; (3) to provide decision-makers with hazard zones and other necessary information on unstable rock slopes for land-use planning and mitigation; and (4) to inform the public through an online map service. The database is organized hierarchically, with a main point for each unstable rock slope to which several feature classes and tables are linked. This main point feature class includes several general attributes of the unstable rock slope, such as site name, general and geological descriptions, executed works, recommendations, technical parameters (volume, lithology, mechanism and others), displacement rates, possible consequences, as well as hazard and risk classification. Feature classes and tables linked to the main feature class include different scenarios of an unstable rock slope, field observation points, sampling points for dating, displacement measurement stations, lineaments, unstable areas, run-out areas, areas affected by secondary effects, along with tables for hazard and risk classification and URL links to further documentation and references. The database on unstable rock slopes in Norway will be publicly consultable through an online map service. Factsheets with key information on unstable rock slopes can be automatically generated and downloaded for each site.
Areas of possible rock avalanche run-out and their secondary effects displayed in the online map service, along with hazard and risk assessments, will become important tools for land-use planning. The present database will further evolve in the coming years as the systematic mapping progresses and as available techniques and tools evolve.
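The hierarchical layout described above (a main point feature class with linked scenario and observation tables) can be sketched as a toy relational schema. All table names, column names, and sample values below are illustrative assumptions, not the Geological Survey of Norway's actual schema:

```python
import sqlite3

# One main point per unstable slope; linked tables reference it by key.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE unstable_slope (
    slope_id     INTEGER PRIMARY KEY,
    site_name    TEXT,
    lithology    TEXT,
    volume_m3    REAL,
    hazard_class TEXT
);
CREATE TABLE scenario (
    scenario_id     INTEGER PRIMARY KEY,
    slope_id        INTEGER REFERENCES unstable_slope(slope_id),
    description     TEXT,
    runout_area_km2 REAL
);
""")
con.execute("INSERT INTO unstable_slope VALUES "
            "(1, 'Example fjord slope', 'gneiss', 15e6, 'high')")
con.execute("INSERT INTO scenario VALUES "
            "(1, 1, 'partial collapse, upper block', 0.8)")
# A factsheet-style query joins the main point with its linked scenarios.
row = con.execute("""
    SELECT s.site_name, sc.description
    FROM unstable_slope s JOIN scenario sc ON sc.slope_id = s.slope_id
""").fetchone()
print(row)
```

The one-to-many links (one slope, several scenarios and observation points) are what make per-site factsheet generation a simple join.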
Finger vein recognition based on the hyperinformation feature
NASA Astrophysics Data System (ADS)
Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Yang, Lu
2014-01-01
The finger vein is a promising biometric pattern for personal identification due to its advantages over other existing biometrics. In finger vein recognition, feature extraction is a critical step, and many feature extraction methods have been proposed to extract the gray, texture, or shape of the finger vein. We treat them as low-level features and present a high-level feature extraction framework. Under this framework, base attributes are first defined to represent the characteristics of a certain subcategory of a subject. Then, for an image, the correlation coefficient is used for constructing the high-level feature, which reflects the correlation between this image and all base attributes. Since the high-level feature can reveal characteristics of more subcategories and contain more discriminative information, we call it the hyperinformation feature (HIF). Compared with low-level features, which only represent the characteristics of one subcategory, HIF is more powerful and robust. In order to demonstrate the potential of the proposed framework, we provide a case study on extracting HIF. We conduct comprehensive experiments on our databases to show the generality of the proposed framework and the efficiency of HIF. Experimental results show that HIF significantly outperforms the low-level features.
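A minimal sketch of the high-level feature construction, assuming each base attribute can be summarized as a vector in the same space as an image's low-level feature (the toy vectors and names below are illustrative, not the authors' data):

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def hif(low_level_feature, base_attributes):
    # High-level (hyperinformation) feature: one correlation coefficient
    # per base attribute, so the HIF length equals the number of bases.
    return [pearson(low_level_feature, b) for b in base_attributes]

# Toy example: a 4-D low-level feature compared against three base attributes.
query = [0.2, 0.5, 0.1, 0.9]
bases = [[0.1, 0.4, 0.2, 0.8],   # similar to the query
         [0.9, 0.1, 0.8, 0.2],   # roughly inverted
         [0.3, 0.3, 0.2, 0.4]]
print(hif(query, bases))
```

The resulting vector of correlations, rather than the raw low-level feature, is then used for matching.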
Digital terrain model generalization incorporating scale, semantic and cognitive constraints
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Papadogiorgaki, Maria
2014-05-01
Cartographic generalization is a well-known process accommodating spatial data compression, visualization and comprehension under various scales. In the last few years, there have been several international attempts to construct tangible GIS systems, forming real 3D surfaces using a vast number of mechanical parts along a matrix formation (i.e., bars, pistons, vacuums). Usually, moving bars upon a structured grid push a stretching membrane, resulting in a smooth visualization of a given surface. Most of these attempts suffer in cost, accuracy, resolution and/or speed. Under this perspective, the present study proposes a surface generalization process that incorporates intrinsic constraints of tangible GIS systems, including robotic-motor movement and surface stretching limitations. The main objective is to provide optimized visualizations of 3D digital terrain models with minimum loss of information, that is, to minimize the number of pixels in a raster dataset used to define a DTM while preserving the surface information. This neighborhood-based treatment of pixel relations adheres to the basics of Self-Organizing Map (SOM) artificial neural networks, which are often used for information abstraction since they are indicative of intrinsic statistical features contained in the input patterns and provide concise and characteristic representations. Nevertheless, SOM remains a black-box procedure, incapable of coping with possible particularities and semantics of the application at hand. For example, in coastal monitoring applications the near-coast areas, surrounding mountains and lakes are more important than other features, and generalization should be "biased" (stratified) to fulfill this requirement. Moreover, according to the application objectives, we extend the SOM algorithm to incorporate special types of information generalization by differentiating the underlying strategy based on topologic information of the objects included in the application.
The final research scheme comprises the combination of SOM with variations of other widely used generalization algorithms. For instance, an adaptation of the Douglas-Peucker line simplification method to 3D data is used in order to reduce the initial nodes while maintaining their actual coordinates. Furthermore, additional methods are deployed to corroborate and verify the significance of each node, such as mathematical algorithms exploiting each pixel's nearest neighbors. Finally, besides the quantitative evaluation of error versus information preservation in a DTM, cognitive inputs from geoscience experts are incorporated in order to test, fine-tune and advance our algorithm. Under the described strategy, which incorporates mechanical, topological, semantic and cognitive constraints, results demonstrate the necessity of integrating these characteristics in describing raster DTM surfaces. Acknowledgements: This work is partially supported under the framework of the "Cooperation 2011" project ATLANTAS (11_SYN_6_1937) funded from the Operational Program "Competitiveness and Entrepreneurship" (co-funded by the European Regional Development Fund (ERDF)) and managed by the Greek General Secretariat for Research and Technology.
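The Douglas-Peucker adaptation mentioned above can be sketched in 3-D: keep the endpoints, find the point farthest from the chord, and recurse only if it exceeds a tolerance. The tolerance value and toy terrain profile below are illustrative assumptions:

```python
import math

def point_line_dist_3d(p, a, b):
    # Perpendicular distance from point p to the line through a and b (3-D):
    # |(b-a) x (p-a)| / |b-a|.
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    cx = ab[1] * ap[2] - ab[2] * ap[1]
    cy = ab[2] * ap[0] - ab[0] * ap[2]
    cz = ab[0] * ap[1] - ab[1] * ap[0]
    return math.sqrt(cx * cx + cy * cy + cz * cz) / math.sqrt(sum(v * v for v in ab))

def douglas_peucker_3d(points, tol):
    # Keep endpoints; recurse around the farthest point if it exceeds tol,
    # so retained nodes keep their actual coordinates.
    if len(points) < 3:
        return list(points)
    dists = [point_line_dist_3d(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker_3d(points[:i + 1], tol)
    right = douglas_peucker_3d(points[i:], tol)
    return left[:-1] + right  # drop the duplicated split point

# A small ridge: the two near-flat shoulder points are dropped, the peak kept.
profile = [(0, 0, 0), (1, 0, 0.02), (2, 0, 1.0), (3, 0, 0.02), (4, 0, 0)]
print(douglas_peucker_3d(profile, tol=0.5))
```

A larger tolerance trades surface fidelity for fewer nodes, which matches the tangible-display constraint of a limited number of moving bars.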
Classification by Using Multispectral Point Cloud Data
NASA Astrophysics Data System (ADS)
Liao, C. T.; Huang, H. H.
2012-07-01
Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. Because the semantic information is clearly visualized, ground features can be readily recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images are highly dependent on light conditions, and their classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data acquisition rate, independence from light conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, its disadvantage is a shortage of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near-infrared images via close-range photogrammetry, matching the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, thresholds on height and color information are applied for classification.
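The final thresholding step might look like the following sketch. The specific threshold values and the NDVI-style vegetation test are illustrative assumptions, not the authors' exact rules:

```python
def classify_point(x, y, z, red, nir, ground_z=0.0,
                   height_thresh=2.0, veg_index_thresh=0.3):
    # Label one multispectral point by a height threshold plus a simple
    # normalized-difference vegetation index from red/near-infrared bands.
    height = z - ground_z
    ndvi = (nir - red) / (nir + red) if (nir + red) else 0.0
    if ndvi > veg_index_thresh:
        return "high vegetation" if height > height_thresh else "low vegetation"
    return "building" if height > height_thresh else "ground"

points = [
    (1.0, 1.0, 0.1, 90, 95),    # near ground level, spectrally flat
    (2.0, 1.0, 5.2, 60, 200),   # tall and strongly NIR-reflective (tree crown)
    (3.0, 2.0, 6.0, 120, 110),  # tall but spectrally flat (roof)
]
for p in points:
    print(classify_point(*p))
```

Having both 3-D height and spectral bands on every point is what lets a single rule separate a roof from a tree crown at the same height.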
NASA Astrophysics Data System (ADS)
Baccetti, Valentina; Mann, Robert B.; Terno, Daniel R.
Event horizons are the defining feature of classical black holes. They are the key ingredient of the information loss paradox which, like paradoxes in quantum foundations, is built on a combination of predictions of quantum theory and counterfactual classical features: neither horizon formation nor its crossing by a test body can be detected by a distant observer. Furthermore, horizons are unnecessary for the production of Hawking-like radiation. We demonstrate that when this radiation is taken into account, it can prevent horizon crossing/formation in a large class of models. We conjecture that horizon avoidance is a general feature of collapse. The nonexistence of event horizons dispels the paradox, but opens up important questions about thermodynamic properties of the resulting objects and correlations between different degrees of freedom.
Eye guidance during real-world scene search: The role color plays in central and peripheral vision.
Nuthmann, Antje; Malcolm, George L
2016-01-01
The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitated search as a function of their location in the visual field. The current study investigated how features across the visual field--particularly color--facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in full color; with color in the periphery and gray in central vision; with gray in the periphery and color in central vision; or in grayscale. Color conditions were crossed with a search cue manipulation, with the target cued either with a word label or an exact picture. Search times increased as color information in the scene decreased. A gaze-data-based decomposition of search time revealed color-mediated effects on specific subprocesses of search. Color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors based on the location within the visual field.
Black Holes and the Information Paradox
NASA Astrophysics Data System (ADS)
't Hooft, Gerard
In electromagnetism, like charges repel, opposite charges attract. A remarkable feature of the gravitational force is that like masses attract. This gives rise to an instability: the more mass you have, the stronger the attractive force, until an inevitable implosion follows, leading to a "black hole". It is in the black hole where an apparent conflict between Einstein's General Relativity and the laws of Quantum Mechanics becomes manifest. Most physicists now agree that a black hole should be described by a Schrödinger equation, with a Hermitean Hamiltonian, but this requires a modification of general relativity. Both General Relativity and Quantum mechanics are shaking on their foundations.
NASA Astrophysics Data System (ADS)
Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Onoe, Hironori; Mok, Chin Man W.; Wen, Jet-Chau; Huang, Shao-Yang; Wang, Wenke
2017-04-01
Hydraulic tomography (HT) has become a mature aquifer test technology over the last two decades. It collects nonredundant information of aquifer heterogeneity by sequentially stressing the aquifer at different wells and collecting aquifer responses at other wells during each stress. The collected information is then interpreted by inverse models. Among these models, the geostatistical approaches, built upon the Bayesian framework, first conceptualize hydraulic properties to be estimated as random fields, which are characterized by means and covariance functions. They then use the spatial statistics as prior information with the aquifer response data to estimate the spatial distribution of the hydraulic properties at a site. Since the spatial statistics describe the generic spatial structures of the geologic media at the site rather than site-specific ones (e.g., known spatial distributions of facies, faults, or paleochannels), the estimates are often not optimal. To improve the estimates, we introduce a general statistical framework, which allows the inclusion of site-specific spatial patterns of geologic features. Subsequently, we test this approach with synthetic numerical experiments. Results show that this approach, using conditional mean and covariance that reflect site-specific large-scale geologic features, indeed improves the HT estimates. Afterward, this approach is applied to HT surveys at a kilometer-scale-fractured granite field site with a distinct fault zone. We find that by including fault information from outcrops and boreholes for HT analysis, the estimated hydraulic properties are improved. The improved estimates subsequently lead to better prediction of flow during a different pumping test at the site.
IMAGE 100: The interactive multispectral image processing system
NASA Technical Reports Server (NTRS)
Schaller, E. S.; Towles, R. W.
1975-01-01
The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.
Boykin, K.G.; Thompson, B.C.; Propeck-Gray, S.
2010-01-01
Despite widespread and long-standing efforts to model wildlife-habitat associations using remotely sensed and other spatially explicit data, there are relatively few evaluations of the performance of variables included in predictive models relative to actual features on the landscape. As part of the National Gap Analysis Program, we specifically examined physical site features at randomly selected sample locations in the Southwestern U.S. to assess their degree of concordance with predicted features used in modeling vertebrate habitat distribution. Our analysis considered hypotheses about relative accuracy with respect to 30 vertebrate species selected to represent the spectrum from habitat generalist to specialist, and the categorization of sites by the relative degree of conservation emphasis accorded to them. Overall comparison of 19 variables observed at 382 sample sites indicated ≥60% concordance for 12 variables. Directly measured or observed variables (slope, soil composition, rock outcrop) generally displayed high concordance, while variables that required judgments regarding descriptive categories (aspect, ecological system, landform) were less concordant. There were no differences detected in concordance among taxa groups, degree of specialization or generalization of selected taxa, or land conservation categorization of sample sites with respect to all sites. We found no support for the hypothesis that the accuracy of habitat models is inversely related to the degree of taxa specialization, even though model features for a habitat specialist could be more difficult to represent spatially. Likewise, we did not find support for the hypothesis that physical features will be predicted with higher accuracy on lands with greater dedication to biodiversity conservation than on other lands because of relative differences in available information. Accuracy generally was similar (>60%) to that observed for land cover mapping at the ecological system level.
These patterns demonstrate resilience of gap analysis deductive model processes to the type of remotely sensed or interpreted data used in habitat feature predictions. © 2010 Elsevier B.V.
A radiation scalar for numerical relativity.
Beetle, Christopher; Burko, Lior M
2002-12-30
This Letter describes a scalar curvature invariant for general relativity with a certain, distinctive feature. While many such invariants exist, this one vanishes in regions of space-time which can be said unambiguously to contain no gravitational radiation. In more general regions which incontrovertibly support nontrivial radiation fields, it can be used to extract local, coordinate-independent information partially characterizing that radiation. While a clear, physical interpretation is possible only in such radiation zones, a simple algorithm can be given to extend the definition smoothly to generic regions of space-time.
Wen, Ping-Ping; Shi, Shao-Ping; Xu, Hao-Dong; Wang, Li-Na; Qiu, Jian-Ding
2016-10-15
As one of the most important reversible types of post-translational modification, protein methylation catalyzed by methyltransferases carries many pivotal biological functions and underlies many essential biological processes. Identification of methylation sites is a prerequisite for decoding methylation regulatory networks in living cells and understanding their physiological roles. Experimental methods are labor-intensive and time-consuming, while in silico approaches offer a cost-effective, high-throughput way to predict potential methylation sites; however, previous predictors rely on a single mixed model and their prediction performance is not fully satisfactory. Recently, with the increasing availability of quantitative methylation datasets in diverse species (especially in eukaryotes), there is a growing need to develop species-specific predictors. Here, we designed a tool named PSSMe based on an information gain (IG) feature optimization method for species-specific methylation site prediction. The IG method was adopted to analyze the importance and contribution of each feature; the most valuable feature dimensions were then selected to reconstitute a new ordered feature vector, which was used to build the final prediction model. Our method improves prediction accuracy by about 15% compared with single features. Furthermore, our species-specific model significantly improves predictive performance compared with other general methylation prediction tools. Hence, our prediction results serve as useful resources to elucidate the mechanism of arginine or lysine methylation and to facilitate hypothesis-driven experimental design and validation. The online service is implemented in C# and freely available at http://bioinfo.ncu.edu.cn/PSSMe.aspx. Contact: jdqiu@ncu.edu.cn. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
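The information gain criterion that PSSMe uses to rank feature dimensions can be sketched for a discrete feature as IG(F) = H(Y) - H(Y|F); the toy data below are illustrative:

```python
import math

def entropy(labels):
    # Shannon entropy (bits) of a label sequence.
    n = len(labels)
    counts = {}
    for l in labels:
        counts[l] = counts.get(l, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def info_gain(feature_values, labels):
    # IG(F) = H(Y) - H(Y|F): how much knowing the feature value
    # reduces uncertainty about the class label.
    total = entropy(labels)
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return total - cond

# Feature A separates the classes perfectly; feature B is uninformative.
labels = [1, 1, 0, 0]
print(info_gain([1, 1, 0, 0], labels))  # maximal IG (1 bit)
print(info_gain([1, 0, 1, 0], labels))  # zero IG
```

Ranking dimensions by this score and keeping only the top-scoring ones is the feature-optimization step the abstract describes.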
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
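Since the parameters above are chosen by the area under the ROC curve, a minimal AUC computation (the Mann-Whitney rank formulation) can be sketched as follows; the scores and labels are toy values:

```python
def roc_auc(scores, labels):
    # AUC equals the probability that a randomly chosen positive scores
    # higher than a randomly chosen negative (ties count as 1/2): the
    # normalized Mann-Whitney U statistic.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy classifier scores for 3 demented (1) and 3 non-demented (0) subjects.
scores = [0.9, 0.8, 0.35, 0.7, 0.3, 0.1]
labels = [1,   1,   1,    0,   0,   0]
print(roc_auc(scores, labels))
```

In the bootstrapping loop of the abstract, each candidate parameter setting would be scored this way on held-out resamples and the best-scoring setting kept.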
Vilcu, Ileana; Probst, Lilli; Dorjsuren, Bayarsaikhan; Mathauer, Inke
2016-10-04
Many low- and middle-income countries with a social health insurance system face challenges on their road towards universal health coverage (UHC), especially for people in the informal sector and vulnerable population groups or the informally employed. One way to address this is to subsidize their contributions through general government revenue transfers to the health insurance fund. This paper provides an overview of such health financing arrangements in Asian low- and middle-income countries. The purpose is to assess the institutional design features of government subsidized health insurance type arrangements for vulnerable and informally employed population groups and to explore how these features contribute to UHC progress. This regional study is based on a literature search to collect country information on the specific institutional design features of such subsidization arrangements and data related to UHC progress indicators, i.e. population coverage, financial protection and access to care. The institutional design analysis focuses on eligibility rules, targeting and enrolment procedures; financing arrangements; the pooling architecture; and benefit entitlements. Such financing arrangements currently exist in 8 countries with a total of 14 subsidization schemes. The most frequent groups covered are the poor, older persons and children. Membership in these arrangements is mostly mandatory as is full subsidization. An integrated pool for both the subsidized and the contributors exists in half of the countries, which is one of the most decisive features for equitable access and financial protection. Nonetheless, in most schemes, utilization rates of the subsidized are higher compared to the uninsured, but still lower compared to insured formal sector employees. Total population coverage rates, as well as a higher share of the subsidized in the total insured population are related with broader eligibility criteria. 
Overall, government-subsidized health insurance type arrangements can be an effective mechanism to help countries progress towards UHC, yet there is potential to improve both institutional design features and implementation.
Percolation Model of Sensory Transmission and Loss of Consciousness Under General Anesthesia
NASA Astrophysics Data System (ADS)
Zhou, David W.; Mowrey, David D.; Tang, Pei; Xu, Yan
2015-09-01
Neurons communicate with each other dynamically; how such communications lead to consciousness remains unclear. Here, we present a theoretical model to understand the dynamic nature of sensory activity and information integration in a hierarchical network, in which edges are stochastically defined by a single parameter p representing the percolation probability of information transmission. We validate the model by comparing the transmitted and original signal distributions, and we show that a basic version of this model can reproduce key spectral features clinically observed in electroencephalographic recordings of transitions from conscious to unconscious brain activities during general anesthesia. As p decreases, a steep divergence of the transmitted signal from the original was observed, along with a loss of signal synchrony and a sharp increase in information entropy in a critical manner; this resembles the precipitous loss of consciousness during anesthesia. The model offers mechanistic insights into the emergence of information integration from a stochastic process, laying the foundation for understanding the origin of cognition.
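A minimal sketch of how a single percolation parameter p can drive a sharp entropy increase: assume a failed transmission is replaced by an uninformative random bit (an assumption of this sketch, not necessarily the authors' exact model), so the received distribution interpolates between the source and pure noise:

```python
import math

def entropy(prob_one):
    # Shannon entropy (bits) of a Bernoulli variable.
    if prob_one in (0.0, 1.0):
        return 0.0
    q = prob_one
    return -(q * math.log2(q) + (1 - q) * math.log2(1 - q))

def transmitted_entropy(source_p1, p):
    # Each edge passes the source symbol with percolation probability p;
    # otherwise the receiver sees a fair random bit.
    out_p1 = p * source_p1 + (1 - p) * 0.5
    return entropy(out_p1)

# A biased (information-rich) source degrades toward maximal entropy
# as the percolation probability p decreases.
for p in (1.0, 0.6, 0.2, 0.0):
    print(f"p={p:.1f}  H={transmitted_entropy(0.9, p):.3f} bits")
```

The monotone rise of H as p falls, reaching the maximal 1 bit at p = 0, is the toy analogue of the entropy increase the model associates with loss of consciousness.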
Image search engine with selective filtering and feature-element-based classification
NASA Astrophysics Data System (ADS)
Li, Qing; Zhang, Yujin; Dai, Shengyang
2001-12-01
With the growth of the Internet and of storage capability in recent years, images have become a widespread information format on the World Wide Web. However, it has become increasingly hard to search for images of interest, and an effective image search engine for the WWW needs to be developed. We propose in this paper a selective filtering process and a novel feature-element-based approach for image classification in the image search engine we developed for the WWW. First, a selective filtering process is embedded in a general web crawler to filter out meaningless images in GIF format; two easily obtained parameters are used in this filtering process. Our classification approach then extracts feature elements from images instead of feature vectors. Compared with feature vectors, feature elements better capture the visual meaning of an image according to human subjective perception. Unlike traditional image classification methods, our feature-element-based approach does not calculate distances between vectors in a feature space; instead, it finds associations between feature elements and the class attribute of the image. Experiments are presented to show the efficiency of the proposed approach.
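The association idea can be sketched in a few lines. The following is a hypothetical illustration (the element names and the add-one smoothing scheme are mine, not from the paper): each discrete feature element is scored by how often it co-occurs with each class, and an image is assigned the class its elements associate with most strongly.

```python
from collections import Counter, defaultdict
import math

def train_associations(samples):
    """samples: list of (set_of_feature_elements, class_label).
    Returns smoothed log-association scores for each (class, element) pair."""
    counts = defaultdict(Counter)   # class -> element occurrence counts
    totals = Counter()              # class -> number of training images
    vocab = set()
    for elements, label in samples:
        totals[label] += 1
        for e in elements:
            counts[label][e] += 1
            vocab.add(e)
    return {label: {e: math.log((counts[label][e] + 1) / (totals[label] + 2))
                    for e in vocab}
            for label in totals}

def classify(elements, scores):
    """Pick the class whose learned associations best match the detected elements."""
    return max(scores, key=lambda lbl: sum(scores[lbl].get(e, math.log(0.5))
                                           for e in elements))
```

On a toy training set where "sky" co-occurs with landscapes and "face" with portraits, the classifier recovers those associations without any vector-distance computation.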
MCNP Version 6.2 Release Notes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, Christopher John; Bull, Jeffrey S.; Solomon, C. J.
Monte Carlo N-Particle, or MCNP®, is a general-purpose Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. MCNP Version 6.2 follows the MCNP6.1.1 beta version and has been released to provide the radiation transport community with the latest feature developments and bug fixes for MCNP. Since the last release of MCNP, major work has been conducted to improve the code base, add features, and provide tools that facilitate both the use of MCNP Version 6.2 and the analysis of its results. These release notes serve as a general guide for the new/improved physics, source, data, tallies, unstructured mesh, code enhancements and tools. For more detailed information on each of the topics, please refer to the appropriate references or the user manual, which can be found at http://mcnp.lanl.gov. This release of MCNP Version 6.2 contains 39 new features in addition to 172 bug fixes and code enhancements. There are still some 33 known issues with which the user should become familiar (see Appendix).
Sweidan, Michelle; Williamson, Margaret; Reeve, James F; Harvey, Ken; O'Neill, Jennifer A; Schattner, Peter; Snowdon, Teri
2010-04-15
Electronic prescribing is increasingly being used in primary care and in hospitals. Studies on the effects of e-prescribing systems have found evidence for both benefit and harm. The aim of this study was to identify features of e-prescribing software systems that support patient safety and quality of care and that are useful to the clinician and the patient, with a focus on improving the quality use of medicines. Software features were identified by a literature review, key informants and an expert group. A modified Delphi process was used with a 12-member multidisciplinary expert group to reach consensus on the expected impact of the features in four domains: patient safety, quality of care, usefulness to the clinician and usefulness to the patient. The setting was electronic prescribing in general practice in Australia. A list of 114 software features was developed. Most of the features relate to the recording and use of patient data, the medication selection process, prescribing decision support, monitoring drug therapy and clinical reports. The expert group rated 78 of the features (68%) as likely to have a high positive impact in at least one domain, 36 features (32%) as medium impact, and none as low or negative impact. Twenty seven features were rated as high positive impact across 3 or 4 domains including patient safety and quality of care. Ten features were considered "aspirational" because of a lack of agreed standards and/or suitable knowledge bases. This study defines features of e-prescribing software systems that are expected to support safety and quality, especially in relation to prescribing and use of medicines in general practice. The features could be used to develop software standards, and could be adapted if necessary for use in other settings and countries.
Object-Based Arctic Sea Ice Feature Extraction through High Spatial Resolution Aerial photos
NASA Astrophysics Data System (ADS)
Miao, X.; Xie, H.
2015-12-01
High resolution aerial photographs used to detect and classify sea ice features can provide accurate physical parameters to refine, validate, and improve climate models. However, manually delineating sea ice features, such as melt ponds, submerged ice, water, ice/snow, and pressure ridges, is time-consuming and labor-intensive. An object-based classification algorithm is developed to automatically and efficiently extract sea ice features from aerial photographs taken during the Chinese National Arctic Research Expedition in summer 2010 (CHINARE 2010) in the marginal ice zone (MIZ) near the Alaska coast. The algorithm includes four steps: (1) image segmentation groups neighboring pixels into objects based on the similarity of spectral and textural information; (2) a random forest classifier distinguishes four general classes: water, general submerged ice (GSI, including melt ponds and submerged ice), shadow, and ice/snow; (3) polygon neighbor analysis separates melt ponds from submerged ice based on spatial relationships; and (4) pressure ridge features are extracted from shadow based on local illumination geometry. A producer's accuracy of 90.8% and a user's accuracy of 91.8% are achieved for melt pond detection, and shadow shows a user's accuracy of 88.9% and a producer's accuracy of 91.4%. Finally, pond density, pond fraction, ice floes, mean ice concentration, average ridge height, ridge profile, and ridge frequency are extracted from batch processing of aerial photos, and their uncertainties are estimated.
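A toy sketch of the segmentation and per-object classification steps of such a pipeline, with brightness thresholds chosen purely for illustration (the actual algorithm segments on spectral and textural similarity and classifies with a random forest):

```python
import numpy as np

def label_objects(mask):
    """4-connected component labelling of a boolean mask (step 1, simplified)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]                 # flood fill from the seed pixel
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

def classify_objects(image, labels, n):
    """Assign each object a class from its mean brightness (step 2, simplified;
    the thresholds below are illustrative, not the paper's)."""
    classes = {}
    for k in range(1, n + 1):
        mean = image[labels == k].mean()
        if mean < 0.2:
            classes[k] = "water"
        elif mean < 0.6:
            classes[k] = "submerged ice"   # GSI: includes melt ponds
        else:
            classes[k] = "ice/snow"
    return classes
```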
ERIC Educational Resources Information Center
Erickson, Frederick
The limits and boundaries of anthropology are briefly discussed, along with a general description of lay attitudes towards the field. A research case is given to illustrate the way in which anthropological study methods can contribute to educational research. Noted among these contributions is an informed distrust that anthropologists exhibit…
ERIC Educational Resources Information Center
Brownlee, Jamie
2015-01-01
In Canada, universities are undergoing a process of corporatization where business interests, values and practices are assuming a more prominent place in higher education. A key feature of this process has been the changing composition of academic labor. While it is generally accepted that universities are relying more heavily on contract faculty,…
Alexander H. Smith
1965-01-01
Generally speaking the false truffles (Hymenogastrales) are so rarely found in abundance that ecological observations on them are necessarily fragmentary, and studies of the variability of species within, as well as between, populations are fraught with difficulties caused on the one hand by inadequate information on basic features of the supposedly well known species...
Mammographic mass classification based on possibility theory
NASA Astrophysics Data System (ADS)
Hmida, Marwa; Hamrouni, Kamel; Solaiman, Basel; Boussetta, Sana
2017-03-01
Shape and margin features are very important for differentiating between benign and malignant masses in mammographic images. Benign masses are usually round or oval and have smooth contours, whereas malignant tumors generally have irregular shapes and appear lobulated or spiculated at the margins. This knowledge suffers from imprecision and ambiguity. This paper therefore deals with the problem of mass classification using shape and margin features while taking into account the uncertainty linked to the degree of truth of the available information and the imprecision related to its content. In this work, we propose a novel mass classification approach which provides a possibility-based representation of the extracted shape features and builds a possibility knowledge base in order to evaluate the possibility degree of malignancy and benignity for each mass. For experimentation, the MIAS database was used, and the classification results show the strong performance of our approach despite its use of simple features.
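A minimal sketch of the possibility-based idea, with hypothetical trapezoidal distributions over two normalized features (the paper builds its knowledge base from the data; the feature names and the numbers below are purely illustrative):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal possibility distribution: 0 outside [a, d], 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Illustrative possibility distributions over features normalized to [0, 1].
POSSIBILITY = {
    "malignant": {"irregularity": (0.3, 0.6, 1.0, 1.01),
                  "spiculation":  (0.2, 0.5, 1.0, 1.01)},
    "benign":    {"irregularity": (-0.01, 0.0, 0.4, 0.7),
                  "spiculation":  (-0.01, 0.0, 0.3, 0.6)},
}

def possibility_degree(features, label):
    """Conjunctive (min) combination of the per-feature possibility degrees."""
    return min(trapezoid(features[f], *POSSIBILITY[label][f]) for f in features)

def classify_mass(features):
    """Return the label with the highest possibility degree, plus all degrees."""
    degrees = {lbl: possibility_degree(features, lbl) for lbl in POSSIBILITY}
    return max(degrees, key=degrees.get), degrees
```

A strongly irregular, spiculated mass then receives a high possibility of malignancy and a low possibility of benignity, and vice versa.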
Double-u double-u double-u dot APIC dot org: a review of the APIC World Wide Web site.
Harr, J
1996-12-01
The widespread use of the Internet and the development of the World Wide Web have led to a revolution in electronic communication and information access. The Association for Professionals in Infection Control and Epidemiology (APIC) has developed a site on the World Wide Web to provide mechanisms for international on-line information access and exchange on issues related to the practice of infection control and the application of epidemiology. From the home page of the APIC Web site, users can access information on professional resources, publications, educational offerings, governmental affairs, the APIC organization, and the infection control profession. Among the chief features of the site is a discussion forum for posing questions and sharing information about infection control and epidemiology. The site also contains a searchable database of practice-related abstracts as well as descriptions and order forms for APIC publications. Users will find continuing education course descriptions and registration forms, legislative and regulatory action alerts and a congressional mailer, chapter and committee information, and infection control information of interest to the general public. APIC is considering several potential future enhancements to its Web site and will continue to review the site's content and features to provide current and useful information to infection control professionals.
Dynamic adaptive learning for decision-making supporting systems
NASA Astrophysics Data System (ADS)
He, Haibo; Cao, Yuan; Chen, Sheng; Desai, Sachi; Hohil, Myron E.
2008-03-01
This paper proposes a novel adaptive learning method for data mining in support of decision-making systems. Due to the inherent ambiguity/uncertainty, high dimensionality and noise of the information in many homeland security and defense applications, such as surveillance, monitoring, and the net-centric battlefield, it is critical to develop autonomous learning methods that efficiently extract useful information from raw data to help the decision-making process. The proposed method is based on a dynamic learning principle in the feature spaces. Generally speaking, conventional approaches to learning from high dimensional data sets include various feature extraction (principal component analysis, wavelet transform, and others) and feature selection (embedded approach, wrapper approach, filter approach, and others) methods. However, adaptive learning from different feature spaces remains poorly understood. We propose an integrative approach that takes advantage of feature selection and hypothesis ensemble techniques to achieve our goal. Based on the training data distributions, a feature score function provides a measurement of the importance of different features for learning purposes. Multiple hypotheses are then iteratively developed in different feature spaces according to their learning capabilities. Unlike the pre-set iteration steps in many existing ensemble learning approaches, such as the adaptive boosting (AdaBoost) method, the iterative learning process stops automatically when the intelligent system cannot provide a better understanding than a random guess in that particular subset of feature spaces. Finally, a voting algorithm combines the decisions from the different hypotheses to provide the final prediction results. Simulation analyses of the proposed method on the classification of different US military aircraft databases show its effectiveness.
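A simplified sketch of the scheme, with several stand-ins for the paper's components: decision stumps play the role of the hypotheses, plain correlation serves as the feature score function, and the stop rule is the better-than-chance test described above.

```python
import numpy as np

def stump_fit(x, y):
    """Best threshold/polarity decision stump on a single feature (labels are ±1)."""
    best = (0.0, 1, 0.5)                       # threshold, polarity, accuracy
    for t in np.unique(x):
        for pol in (1, -1):
            acc = (np.where(x > t, pol, -pol) == y).mean()
            if acc > best[2]:
                best = (t, pol, acc)
    return best

def adaptive_ensemble(X, y):
    """Train stumps on features in order of |correlation| with y, stopping as
    soon as a feature's best stump is no better than a random guess."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    hypotheses = []
    for j in np.argsort(scores)[::-1]:
        t, pol, acc = stump_fit(X[:, j], y)
        if acc <= 0.5:                         # no better than chance: stop
            break
        hypotheses.append((j, t, pol))
    return hypotheses

def vote(hypotheses, X):
    """Majority vote over all retained hypotheses."""
    votes = sum(np.where(X[:, j] > t, pol, -pol) for j, t, pol in hypotheses)
    return np.where(votes >= 0, 1, -1)
```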
Attention in the processing of complex visual displays: detecting features and their combinations.
Farell, B
1984-02-01
The distinction between operations in visual processing that are parallel and preattentive and those that are serial and attentional receives both theoretical and empirical support. According to Treisman's feature-integration theory, independent features are available preattentively, but attention is required to veridically combine features into objects. Certain evidence supporting this theory is consistent with a different interpretation, which was tested in four experiments. The first experiment compared the detection of features and feature combinations while eliminating a factor that confounded earlier comparisons. The resulting priority of access to combinatorial information suggests that features and nonlocal combinations of features are not connected solely by a bottom-up hierarchical convergence. Causes of the disparity between the results of Experiment 1 and the results of previous research were investigated in three subsequent experiments. The results showed that of the two confounded factors, it was the difference in the mapping of alternatives onto responses, not the differing attentional demands of features and objects, that underlay the results of the previous research. The present results are thus counterexamples to the feature-integration theory. Aspects of this theory are shown to be subsumed by more general principles, which are discussed in terms of attentional processes in the detection of features, objects, and stimulus alternatives.
Brown, T A
1997-10-01
To examine the nature and conceptualization of generalized anxiety disorder (GAD) and chronic worry as well as data bearing on the validity of GAD as a distinct diagnosis. Narrative literature review. Although a wealth of data have been obtained on the epidemiology, genetics, and nature of GAD, many important questions remain regarding the validity of current conceptual models of pathological worry and the discriminability of GAD from certain emotional disorders (for instance, mood disorders) and higher-order trait vulnerability dimensions (for example, negative affect). Because the constituent features of GAD are salient to current conceptual models of emotional disorders (for example, models that implicate negative affect or worry/anxious apprehension as vulnerability factors), research on the nature of GAD and its associated features should provide important information on the pathogenesis, course, and co-occurrence of the entire range of anxiety and mood disorders.
NASA Astrophysics Data System (ADS)
Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha
2014-03-01
Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications for decisions concerning treatment strategy. To this end, we combine a predictive multi-task machine learning method [1] with a novel MR-based multivariate morphometric surface map of the hippocampus [2] to predict future cognitive scores of patients. Previous work by Zhou et al. [1] has shown that a multi-task learning framework that predicts all future time points (or tasks) simultaneously can be used to encode both sparsity and temporal smoothness. They showed that this can be used to predict cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information and ApoE status. While volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied the multivariate tensor-based morphometry (mTBM) parametric surface analysis method recently developed by Shi et al. [2] to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampal surface, we are able to significantly improve the predictive performance of ADAS cognitive scores 6, 12, 24, 36 and 48 months from baseline.
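The multi-task formulation can be sketched as a least-squares problem with a ridge term and a penalty coupling the weight vectors of consecutive time points. This is a generic stand-in for the method of Zhou et al., not their actual algorithm; the regularization weights and the plain gradient-descent solver are illustrative choices.

```python
import numpy as np

def multitask_smooth(X, Y, lam1=0.1, lam2=1.0, lr=0.01, steps=2000):
    """Minimize ||XW - Y||^2/n + lam1*||W||^2 + lam2*sum_t ||w_{t+1} - w_t||^2,
    where column t of W predicts the outcome at time point t."""
    n, d = X.shape
    T = Y.shape[1]
    W = np.zeros((d, T))
    # D is the (T-1) x T first-difference operator: W @ D.T stacks w_{t+1} - w_t.
    D = np.diff(np.eye(T), axis=0)
    for _ in range(steps):
        grad = (2 * X.T @ (X @ W - Y) / n
                + 2 * lam1 * W
                + 2 * lam2 * W @ D.T @ D)
        W -= lr * grad
    return W
```

The smoothness term encodes the prior that cognitive decline between adjacent visits changes gradually, which is the temporal-smoothness idea described above.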
Memory Loss, Alzheimer's Disease and General Anesthesia: A Preoperative Concern.
Thaler, Adam; Siry, Read; Cai, Lufan; García, Paul S; Chen, Linda; Liu, Renyu
2012-02-20
The long-term cognitive effects of general anesthesia are under intense scrutiny. Here we present 5 cases from 2 academic institutions to analyze common features of cases in which the patient or a family member requested a preoperative consultation to address concerns about memory loss, Alzheimer's disease and general anesthesia. Records of anesthesia consultations separate from standard preoperative evaluations were retrieved to identify consultations related to memory loss and Alzheimer's disease requested by patients and/or family members. The identified cases were extensively reviewed for features in common. We used Google® (http://www.google.com/) to identify available online information using "anesthesia memory loss" as a search phrase. Five cases were collected as specific preoperative consultations related to memory loss, Alzheimer's disease and general anesthesia from the two institutions. All of the individuals either had perceived memory impairment after a prior surgical procedure with general anesthesia or had a family member with Alzheimer's disease. They all had accessed public media sources to find articles related to anesthesia and memory loss. On May 2nd, 2011, searching "anesthesia memory loss" in Google yielded 764,000 hits. Only 3 of the 50 top Google hits were from peer-reviewed journals. Some of the lay media postings made a causal association between general anesthesia and memory loss and/or Alzheimer's disease without conclusive support from the scientific literature. The potential link of memory loss and Alzheimer's disease with general anesthesia is an important preoperative concern for patients and their family members. This concern arises from individuals who have a history of cognitive impairment or a family member with Alzheimer's disease and who have tried to obtain information from public media.
Proper preoperative consultation, with awareness of the lay literature, can be useful in reducing the preoperative anxiety of patients and their family members related to this concern.
SU-E-QI-17: Dependence of 3D/4D PET Quantitative Image Features On Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, J; Budzevich, M; Zhang, G
2014-06-15
Purpose: Quantitative imaging is a fast-evolving discipline in which a large number of features are extracted from images, i.e., radiomics. Some features have been shown to have diagnostic, prognostic and predictive value. However, they are sensitive to acquisition and processing factors, e.g., noise. In this study, noise was added to positron emission tomography (PET) images to determine how features were affected. Methods: Three levels of Gaussian noise were added to the PET images of 8 lung cancer patients acquired in 3D mode (static) and using respiratory tracking (4D); for the latter, images from one of 10 phases were used. A total of 62 features were extracted from segmented tumors: 14 shape, 19 intensity (1stO), 18 GLCM texture (2ndO; from grey level co-occurrence matrices) and 11 RLM texture (2ndO; from run-length matrices) features. Dimensions of the GLCM were 256×256, calculated using 3D images with a step size of 1 voxel in 13 directions. Grey levels were binned into 256 levels for the RLM, and features were calculated in all 13 directions. Results: Feature variation generally increased with noise. Shape features were the most stable, while RLM features were the most unstable. Intensity and GLCM features performed well, the latter being more robust. The most stable 1stO features were compactness, maximum and minimum length, standard deviation, root-mean-squared, I30, V10-V90, and entropy. The most stable 2ndO features were entropy, sum-average, sum-entropy, difference-average, difference-variance, difference-entropy, information-correlation-2, short-run-emphasis, long-run-emphasis, and run-percentage. In general, features computed from images from one of the phases of 4D scans were more stable than those from 3D scans. Conclusion: This study shows the need to characterize image features carefully before they are used in research and medical applications. It also shows that the performance of features, and thereby feature selection, may be assessed in part by noise analysis.
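The sensitivity of a second-order (GLCM) feature to added noise can be illustrated with a minimal NumPy sketch. This toy version uses a single offset and 8 grey levels; real radiomics pipelines, including the one above, use many offsets and far finer binning.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for a single pixel offset."""
    q = np.clip((image * levels).astype(int), 0, levels - 1)
    rows, cols = q.shape
    a = q[:rows - dy, :cols - dx]      # reference pixels
    b = q[dy:, dx:]                    # neighbour pixels at offset (dx, dy)
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

def glcm_entropy(m):
    """Second-order entropy feature; higher means less structured co-occurrences."""
    p = m[m > 0]
    return float(-(p * np.log2(p)).sum())

# A smooth ramp image and a noisy copy: the noise spreads mass off the
# GLCM diagonal and raises the entropy feature.
img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
rng = np.random.default_rng(0)
noisy = np.clip(img + rng.normal(0.0, 0.2, img.shape), 0.0, 1.0)
```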
Surface and airborne evidence for plumes and winds on Triton
Hansen, C.J.; McEwen, A.S.; Ingersoll, A.P.; Terrile, R.J.
1990-01-01
Aeolian features on Triton that were imaged during the Voyager Mission have been grouped. The term "aeolian feature" is broadly defined as features produced by or blown by the wind, including surface and airborne materials. Observations of the latitudinal distributions of the features probably associated with current activity (known plumes, crescent streaks, fixed terminator clouds, and limb haze with overshoot) all occur from latitude -37° to latitude -62°. Likely indicators of previous activity (dark surface streaks) occur from latitude -5° to -70°, but are most abundant from -15° to -45°, generally north of currently active features. Those indicators which give information on wind direction and speed have been measured. Wind direction is a function of altitude. The predominant direction of the surface wind streaks is found to be between 40° and 80° measured clockwise from north. The average orientation of streaks in the northeast quadrant is 59°. Winds at 1- to 3-kilometer altitude are eastward, while those at >8 kilometers blow west.
Less is More: How manipulative features affect children’s learning from picture books
Tare, Medha; Chiong, Cynthia; Ganea, Patricia; DeLoache, Judy
2010-01-01
Picture books are ubiquitous in young children’s lives and are assumed to support children’s acquisition of information about the world. Given their importance, relatively little research has directly examined children’s learning from picture books. We report two studies examining children’s acquisition of labels and facts from picture books that vary on two dimensions: iconicity of the pictures and presence of manipulative features (or “pop-ups”). In Study 1, 20-month-old children generalized novel labels less well when taught from a book with manipulative features than from standard picture books without such elements. In Study 2, 30- and 36-month-old children learned fewer facts when taught from a manipulative picture book with drawings than from a standard picture book with realistic images and no manipulative features. The results of the two studies indicate that children’s learning from picture books is facilitated by realistic illustrations, but impeded by manipulative features. PMID:20948970
Information on black-footed ferret biology collected within the framework of ferret conservation
Biggins, Dean E.
2012-01-01
Once feared to be extinct, black-footed ferrets (Mustela nigripes) were rediscovered near Meeteetse, Wyoming, in 1981, resulting in renewed conservation and research efforts for this highly endangered species. A need for information directly useful to recovery has motivated much monitoring of ferrets since that time, but field activities have enabled collection of data relevant to broader biological themes. This special feature is placed in a context of similar books and proceedings devoted to ferret biology and conservation. Articles include general observations on ferrets, modeling of potential impacts of ferrets on prairie dogs (Cynomys spp.), discussions on relationships of ferrets to prairie dog habitats at several spatial scales (from individual burrows to patches of burrow systems) and a general treatise on the status of black-footed ferret recovery.
Theoretical information measurement in nonrelativistic time-dependent approach
NASA Astrophysics Data System (ADS)
Najafizade, S. A.; Hassanabadi, H.; Zarrinkamar, S.
2018-02-01
The information-theoretic measures of the time-dependent Schrödinger equation are investigated via the Shannon information entropy, the variance and local Fisher quantities. In our calculations, we consider the first two states n = 0,1 and obtain the position Sx(t) and momentum Sp(t) Shannon entropies as well as the Fisher information in position, Ix(t), and momentum, Ip(t), spaces. Using the Fourier-transformed wave function, we obtain the results in momentum space. Some interesting features of the information entropy densities ρs(x,t) and γs(p,t), as well as the probability densities ρ(x,t) and γ(p,t) for time-dependent states, are demonstrated. We establish a general relation between the variance and the Fisher information. The Bialynicki-Birula-Mycielski inequality is tested and verified for the states n = 0,1.
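As a concrete check of the quantities involved, the n = 0 oscillator state (with hbar = m = omega = 1) is Gaussian, so the Bialynicki-Birula-Mycielski bound S_x + S_p >= 1 + ln(pi) should hold with equality; the grid and units in the numerical sketch below are chosen for illustration.

```python
import numpy as np

# |psi_0|^2 of the harmonic-oscillator ground state (hbar = m = omega = 1);
# its Fourier transform gives the same Gaussian in momentum space.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
rho = np.exp(-x ** 2) / np.sqrt(np.pi)            # position density
gamma = np.exp(-x ** 2) / np.sqrt(np.pi)          # momentum density (self-dual)

S_x = float(-(rho * np.log(rho)).sum() * dx)      # position Shannon entropy
S_p = float(-(gamma * np.log(gamma)).sum() * dx)  # momentum Shannon entropy
```

The Gaussian saturates the inequality: S_x = S_p = (1 + ln π)/2, so S_x + S_p = 1 + ln π ≈ 2.1447.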
Neural net target-tracking system using structured laser patterns
NASA Astrophysics Data System (ADS)
Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun
1996-06-01
In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. A neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling and rotation. Features for the neural network to detect the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match unknown input features, recognizing the position of the robot end-effector. Since a minimal number of samples are used for the different directions of the robot end-effector, an artificial neural network with generalization capability can be utilized for unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the robot end-effector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control movements of the robot end-effector. Combining the two neural networks for recognizing the robot end-effector and estimating the motion with the preprocessing stage, the whole system keeps track of the robot end-effector effectively.
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail
2015-04-01
The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can be also applied for features selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into 13 dimensional space constructed by geographical coordinates and geo-features calculated from digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training were carried out applying leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN with their ability to select features and efficient modelling of complex high dimensional data can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press. 
With a CD: data, software, guides. (2009). 2. Kanevski M. Spatial Predictions of Soil Contamination Using General Regression Neural Networks. Systems Research and Information Systems, Volume 8, number 4, 1999. 3. Robert S., Foresti L., Kanevski M. Spatial prediction of monthly wind speeds in complex terrain with adaptive general regression neural networks. International Journal of Climatology, 33 pp. 1793-1804, 2013.
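The GRNN described above is essentially a Nadaraya-Watson kernel regressor. A minimal sketch of prediction with an anisotropic Gaussian kernel, plus the leave-one-out error used to rank feature subsets, might look as follows (function names and the Gaussian kernel choice are illustrative, not the authors' code):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, bandwidths):
    """Nadaraya-Watson / GRNN prediction with an anisotropic Gaussian kernel.

    `bandwidths` holds one kernel width per input dimension; driving a
    dimension's width toward infinity effectively removes that feature,
    which is the basis of GRNN-driven feature selection.
    """
    preds = []
    for q in X_query:
        # Squared distances scaled per dimension (anisotropic kernel)
        d2 = np.sum(((X_train - q) / bandwidths) ** 2, axis=1)
        w = np.exp(-0.5 * d2)
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

def loo_error(X, y, bandwidths):
    """Leave-one-out cross-validation error, as used to rank candidate models."""
    errs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        pred = grnn_predict(X[mask], y[mask], X[i:i + 1], bandwidths)[0]
        errs.append((pred - y[i]) ** 2)
    return np.mean(errs)
```

Ranking all 8191 wind-field feature subsets would amount to calling `loo_error` once per subset, with the excluded features' bandwidths effectively removed.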
Hausman, Bernice L; Cashion, Margaret; Lucchesi, Nicholas; Patel, Kelsey; Roberts, Jonathan
2015-01-01
Background Current concerns about vaccination resistance often cite the Internet as a source of vaccine controversy. Most academic studies of vaccine resistance online use quantitative methods to describe misinformation on vaccine-skeptical websites. Findings from these studies are useful for categorizing the generic features of these websites, but they do not provide insights into why these websites successfully persuade their viewers. To date, there have been few attempts to understand, qualitatively, the persuasive features of provaccine or vaccine-skeptical websites. Objective The purpose of this research was to examine the persuasive features of provaccine and vaccine-skeptical websites. The qualitative analysis was conducted to generate hypotheses concerning what features of these websites are persuasive to people seeking information about vaccination and vaccine-related practices. Methods This study employed a fully qualitative case study methodology that used the anthropological method of thick description to detail and carefully review the rhetorical features of 1 provaccine government website, 1 provaccine hospital website, 1 vaccine-skeptical information website focused on general vaccine safety, and 1 vaccine-skeptical website focused on a specific vaccine. The data gathered were organized into 5 domains: website ownership, visual and textual content, user experience, hyperlinking, and social interactivity. Results The study found that the 2 provaccine websites analyzed functioned as encyclopedias of vaccine information. Both of the websites had relatively small digital ecologies because they only linked to government websites or websites that endorsed vaccination and evidence-based medicine. Neither of these websites offered visitors interactive features or made extensive use of the affordances of Web 2.0. 
The study also found that the 2 vaccine-skeptical websites had larger digital ecologies because they linked to a variety of vaccine-related websites, including government websites. They leveraged the affordances of Web 2.0 with their interactive features and digital media. Conclusions By employing a rhetorical framework, this study found that the provaccine websites analyzed concentrate on the accurate transmission of evidence-based scientific research about vaccines and government-endorsed vaccination-related practices, whereas the vaccine-skeptical websites focus on creating communities of people affected by vaccines and vaccine-related practices. From this personal framework, these websites then challenge the information presented in scientific literature and government documents. At the same time, the vaccine-skeptical websites in this study are repositories of vaccine information and vaccination-related resources. Future studies on vaccination and the Internet should take into consideration the rhetorical features of provaccine and vaccine-skeptical websites and further investigate the influence of Web 2.0 community-building features on people seeking information about vaccine-related practices. PMID:26024907
NASA Astrophysics Data System (ADS)
Caesarendra, W.; Kosasih, B.; Tjahjowidodo, T.; Ariyanto, M.; Daryl, LWQ; Pamungkas, D.
2018-04-01
Rapid and reliable information in slew bearing maintenance is not a trivial issue. This paper presents an online monitoring system to assist maintenance engineers in monitoring the condition of a low-speed slew bearing in a sheet-metal company. The system passes vibration information from the location where the bearing and accelerometer sensors are installed to the data center; from the data center, it can be accessed by opening the online monitoring website from any place and by any person. The online monitoring system is built using several programming languages, including C, MATLAB, PHP, HTML, and CSS. In general, the process flow starts with automatic vibration data acquisition; features are then calculated from the acquired vibration data. These features are sent to the data center, from which they can be viewed through the online monitoring website. This online monitoring system has been successfully applied in the School of Mechanical, Materials and Mechatronic Engineering, University of Wollongong.
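The feature-calculation step in the flow above could, for example, compute standard time-domain condition indicators from each vibration record. This hypothetical feature set is illustrative only; the paper does not list the exact features used:

```python
import math

def vibration_features(signal):
    """Common time-domain condition-monitoring features for a vibration
    record (an illustrative subset: RMS, standard deviation, kurtosis,
    and crest factor)."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    rms = math.sqrt(sum(x * x for x in signal) / n)
    var = sum(x * x for x in centered) / n
    std = math.sqrt(var)
    # Kurtosis rises sharply when impulsive bearing defects appear
    kurtosis = (sum(x ** 4 for x in centered) / n) / (var ** 2) if var else 0.0
    crest = max(abs(x) for x in signal) / rms if rms else 0.0
    return {"rms": rms, "std": std, "kurtosis": kurtosis, "crest_factor": crest}
```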
NASA Astrophysics Data System (ADS)
Chowdhury, Aritra; Sevinsky, Christopher J.; Santamaria-Pang, Alberto; Yener, Bülent
2017-03-01
The cancer diagnostic workflow is typically performed by highly specialized and trained pathologists, whose analysis is expensive in terms of both time and money. This work focuses on grade classification in colon cancer. The analysis is performed over 3 protein markers, namely E-cadherin, beta actin, and collagen IV. In addition, we also use a virtual Hematoxylin and Eosin (HE) stain. This study compares various ways in which the information from the 4 different images of a tissue sample can be combined into a coherent and unified response based on the data at our disposal. Pre-trained convolutional neural networks (CNNs) are the method of choice for feature extraction. The AlexNet architecture trained on the ImageNet database is used for this purpose. We extract a 4096-dimensional feature vector corresponding to the 6th layer in the network. A linear SVM is used to classify the data. The information from the 4 different images pertaining to a particular tissue sample is combined using the following techniques: soft voting, hard voting, multiplication, addition, linear combination, concatenation, and multi-channel feature extraction. We use 5-fold cross-validation to perform the experiments. The best results are obtained when the various features are linearly combined, resulting in a mean accuracy of 91.27%.
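Some of the fusion schemes compared above (linear combination, concatenation, multiplication) can be sketched as follows. The equal weighting in the linear case is an assumption, since the weights used in the paper are not given here:

```python
import numpy as np

def fuse_features(feature_sets, method="linear", weights=None):
    """Fuse per-stain CNN feature vectors for one tissue sample into a
    single vector, using three of the schemes compared in the study."""
    F = np.stack(feature_sets)          # shape: (n_stains, feat_dim)
    if method == "concatenate":
        return F.reshape(-1)            # one long vector
    if method == "linear":
        # Equal weights assumed when none are supplied
        w = np.ones(len(F)) / len(F) if weights is None else np.asarray(weights)
        return w @ F                    # weighted sum of the stain vectors
    if method == "multiply":
        return np.prod(F, axis=0)       # element-wise product
    raise ValueError(f"unknown fusion method: {method}")
```

The fused vector would then be fed to the linear SVM in place of a single stain's features.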
Bennett, Marc P.; Meulders, Ann; Baeyens, Frank; Vlaeyen, Johan W. S.
2015-01-01
Patients with chronic pain are often fearful of movements that never featured in painful episodes. This study examined whether a neutral movement’s conceptual relationship with pain-relevant stimuli could precipitate pain-related fear; a process known as symbolic generalization. As a secondary objective, we also compared experiential and verbal fear learning in the generalization of pain-related fear. We conducted an experimental study with 80 healthy participants who were recruited through an online experimental management system (Mage = 23.04 years, SD = 6.80 years). First, two artificial categories were established wherein nonsense words and joystick arm movements were equivalent. Using a between-groups design, nonsense words from one category were paired with either an electrocutaneous stimulus (pain-US) or threatening information, while nonsense words from the other category were paired with no pain-US or safety information. During a final testing phase, participants were prompted to perform specific joystick arm movements that were never followed by a pain-US, although they were informed that it could occur. The results showed that movements equivalent to the pain-relevant nonsense words evoked heightened pain-related fear as measured by pain-US expectancy, fear of pain, and unpleasantness ratings. Also, experience with the pain-US evinced stronger acquisition and generalization compared to experience with threatening information. The clinical importance and theoretical implications of these findings are discussed. PMID:25983704
Automatic movie skimming with general tempo analysis
NASA Astrophysics Data System (ADS)
Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.
2003-11-01
Story units are extracted by general tempo analysis, including tempos of audio and visual information, in this research. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, how to group shots into meaningful units called story units is still a challenging problem. By focusing on a certain type of video, such as sports or news, we can explore models with specific application domain knowledge. For movie content, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.
Mobile Apps for Bipolar Disorder: A Systematic Review of Features and Content Quality.
Nicholas, Jennifer; Larsen, Mark Erik; Proudfoot, Judith; Christensen, Helen
2015-08-17
With continued increases in smartphone ownership, researchers and clinicians are investigating the use of this technology to enhance the management of chronic illnesses such as bipolar disorder (BD). Smartphones can be used to deliver interventions and psychoeducation, supplement treatment, and enhance therapeutic reach in BD, as apps are cost-effective, accessible, anonymous, and convenient. While the evidence-based development of BD apps is in its infancy, there has been an explosion of publicly available apps. However, the opportunity for mHealth to assist in the self-management of BD is only feasible if apps are of appropriate quality. Our aim was to identify the types of apps currently available for BD in the Google Play and iOS stores and to assess their features and the quality of their content. A systematic review framework was applied to the search, screening, and assessment of apps. We searched the Australian Google Play and iOS stores for English-language apps developed for people with BD. The comprehensiveness and quality of information was assessed against core psychoeducation principles and current BD treatment guidelines. Management tools were evaluated with reference to the best-practice resources for the specific area. General app features, and privacy and security were also assessed. Of the 571 apps identified, 82 were included in the review. Of these, 32 apps provided information and the remaining 50 were management tools including screening and assessment (n=10), symptom monitoring (n=35), community support (n=4), and treatment (n=1). Not even a quarter of apps (18/82, 22%) addressed privacy and security by providing a privacy policy. Overall, apps providing information covered a third (4/11, 36%) of the core psychoeducation principles and even fewer (2/13, 15%) best-practice guidelines. Only a third (10/32, 31%) cited their information source. 
Neither comprehensiveness of psychoeducation information (r=-.11, P=.80) nor adherence to best-practice guidelines (r=-.02, P=.96) was significantly correlated with average user ratings. Symptom monitoring apps generally failed to monitor critical information such as medication (20/35, 57%) and sleep (18/35, 51%), and the majority of self-assessment apps did not use validated screening measures (6/10, 60%). In general, the content of currently available apps for BD is not in line with practice guidelines or established self-management principles. Apps also fail to provide important information to help users assess their quality, with most lacking source citation and a privacy policy. Therefore, both consumers and clinicians should exercise caution with app selection. While mHealth offers great opportunities for the development of quality evidence-based mobile interventions, new frameworks for mobile mental health research are needed to ensure the timely availability of evidence-based apps to the public.
NASA Astrophysics Data System (ADS)
Jiang, Jie; Zhang, Shumei; Cao, Shixiang
2015-01-01
Multitemporal remote sensing images generally suffer from background variations, which significantly disrupt traditional region-feature and descriptor extraction, especially between pre- and postdisaster images, making registration by local features unreliable. Because shapes hold relatively stable information, a rotation- and scale-invariant shape context based on multiscale edge features is proposed. A multiscale morphological operator is adapted to detect the edges of shapes, and an equivalent difference-of-Gaussian scale space is built to detect local scale-invariant feature points along the detected edges. Then, a rotation-invariant shape context with improved distance discrimination serves as a feature descriptor. For the distance shape context, a self-adaptive threshold (SAT) distance division coordinate system is proposed, which improves the discriminative property of the feature descriptor at mid-to-long pixel distances from the central point while maintaining it at shorter ones. To achieve rotation invariance, the magnitude of the one-dimensional Fourier transform is applied to calculate the angle shape context. Finally, the residual error is evaluated after obtaining the thin-plate spline transformation between reference and sensed images. Experimental results demonstrate the robustness, efficiency, and accuracy of this automatic algorithm.
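The morphological edge operator can be illustrated at a single scale as a morphological gradient, i.e. dilation minus erosion over a local window. This brute-force sketch is for exposition only and is not the authors' multiscale implementation:

```python
import numpy as np

def morphological_gradient(img, size=3):
    """Edge strength as dilation minus erosion over a square window.

    A single-scale version of the morphological edge operator; the
    multiscale variant would combine gradients at several window sizes.
    """
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    dil = np.empty_like(img, dtype=float)
    ero = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + size, j:j + size]
            dil[i, j] = win.max()   # grayscale dilation
            ero[i, j] = win.min()   # grayscale erosion
    return dil - ero                # nonzero only near intensity edges
```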
NASA Astrophysics Data System (ADS)
Wang, Ke; Guo, Ping; Luo, A.-Li
2017-03-01
Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior regarding its comprehensive performance, and the computational cost is significantly lower than that for other methods. The proposed method can be regarded as a new valid alternative general-purpose feature extraction method for various tasks in spectral data analysis.
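One way to train a deep network layer-wise and analytically, without any iterative optimization, is an extreme-learning-machine-style autoencoder: each layer's output weights are obtained from a single least-squares solve. The sketch below illustrates that idea under that assumption; it is not the authors' exact learning rule:

```python
import numpy as np

def train_layer(X, n_hidden, rng):
    """One analytically trained autoencoder layer (ELM-AE-style sketch).

    A random projection defines the hidden activations; the output weights
    solving H W ~= X in least squares are reused (transposed) as the
    encoder for this layer -- no gradient descent involved.
    """
    A = rng.standard_normal((X.shape[1], n_hidden))
    H = np.tanh(X @ A)
    W, *_ = np.linalg.lstsq(H, X, rcond=None)  # analytic solve
    return np.tanh(X @ W.T)                    # features for the next layer

def extract_features(X, layer_sizes, seed=0):
    """Stack layers so that deeper layers yield more abstract features."""
    rng = np.random.default_rng(seed)
    for n in layer_sizes:
        X = train_layer(X, n, rng)
    return X
```

For spectra, `X` would hold one flux vector per row, and the final layer's output would feed the downstream classifier or recovery step.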
Wang, Jiaxin; Liang, Yanchun; Wang, Yan; Cui, Juan; Liu, Ming; Du, Wei; Xu, Ying
2013-01-01
Proteins can move from blood circulation into salivary glands through active transportation, passive diffusion or ultrafiltration, some of which are then released into saliva and hence can potentially serve as biomarkers for diseases if accurately identified. We present a novel computational method for predicting salivary proteins that come from circulation. The basis for the prediction is a set of physiochemical and sequence features we found to be discerning between human proteins known to be movable from circulation to saliva and proteins deemed to be not in saliva. A classifier was trained based on these features using a support-vector machine to predict protein secretion into saliva. The classifier achieved 88.56% average recall and 90.76% average precision in 10-fold cross-validation on the training data, indicating that the selected features are informative. Considering the possibility that our negative training data may not be highly reliable (i.e., proteins predicted to be not in saliva), we have also trained a ranking method, aiming to rank the known salivary proteins from circulation as the highest among the proteins in the general background, based on the same features. This prediction capability can be used to predict potential biomarker proteins for specific human diseases when coupled with the information of differentially expressed proteins in diseased versus healthy control tissues and a prediction capability for blood-secretory proteins. Using such integrated information, we predicted 31 candidate biomarker proteins in saliva for breast cancer. PMID:24324552
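Physicochemical sequence features of the kind used as classifier inputs can be computed directly from a protein sequence. The small feature set below is hypothetical and purely illustrative; the paper's actual feature set is larger and more refined:

```python
def sequence_features(seq):
    """Illustrative physicochemical features of a protein sequence:
    length, hydrophobic fraction, charged fraction, and amino acid
    composition (a tiny stand-in for the paper's feature set)."""
    hydrophobic = set("AILMFWVY")
    charged = set("DEKRH")
    n = len(seq)
    composition = {aa: seq.count(aa) / n for aa in sorted(set(seq))}
    return {
        "length": n,
        "frac_hydrophobic": sum(seq.count(a) for a in hydrophobic) / n,
        "frac_charged": sum(seq.count(a) for a in charged) / n,
        "composition": composition,
    }
```

Vectors like these, computed for known saliva-movable and non-saliva proteins, are what the support-vector machine would be trained on.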
Featureless classification of light curves
NASA Astrophysics Data System (ADS)
Kügler, S. D.; Gianniotis, N.; Polsterer, K. L.
2015-08-01
In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a vector which can be directly fed into a classifier. In the literature, various statistical features serve as vector representations. In this work, we represent time series by a density model. The density model captures all the information available, including measurement errors. Hence, we view this model as a generalization of the static features, which can be derived directly from the density, e.g. as moments. Similarity between each pair of time series is quantified by the distance between their respective models. Classification is performed on the obtained distance matrix. In the numerical experiments, we use data from the OGLE (Optical Gravitational Lensing Experiment) and ASAS (All Sky Automated Survey) surveys and demonstrate that the proposed representation performs on par with the best currently used feature-based approaches. The density representation preserves all static information present in the observational data, in contrast to a less complete description by features. The density representation is an upper boundary in terms of information made available to the classifier. Consequently, the predictive power of the proposed classification depends only on the choice of similarity measure and classifier. Due to its principled nature, we advocate that this new approach of representing time series has potential in tasks beyond classification, e.g. unsupervised learning.
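The density representation and model-to-model distance can be illustrated with a Gaussian kernel density estimate of each series' value distribution and an L2 distance between the estimates. This is a simplified stand-in for the density model and similarity measure actually used in the paper:

```python
import math

def kde(samples, grid, bandwidth):
    """Gaussian kernel density estimate of a series' value distribution."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((g - s) / bandwidth) ** 2)
                       for s in samples) for g in grid]

def density_distance(series_a, series_b, bandwidth=0.5, n_grid=64):
    """L2 distance between the density models of two (possibly irregularly
    sampled) series, evaluated on a shared grid."""
    lo = min(min(series_a), min(series_b)) - 3 * bandwidth
    hi = max(max(series_a), max(series_b)) + 3 * bandwidth
    step = (hi - lo) / (n_grid - 1)
    grid = [lo + i * step for i in range(n_grid)]
    pa, pb = kde(series_a, grid, bandwidth), kde(series_b, grid, bandwidth)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)) * step)
```

Computing `density_distance` for every pair of light curves yields the distance matrix on which the classifier operates.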
Beutel, Bryan G; Cardone, Dennis A
2014-10-01
Due to limited regulation of websites, the quality and content of online health-related information has been questioned as prior studies have shown that websites often misrepresent orthopaedic conditions and treatments. Kinesio tape has gained popularity among athletes and the general public despite limited evidence supporting its efficacy. The primary objective of this study was to assess the quality and content of Internet-based information on Kinesio taping. An Internet search using the terms "Kinesio tape" and "kinesiology tape" was performed using the Google search engine. Websites returned within the first two pages of results, as well as hyperlinks embedded within these sites, were included in the study. These sites were subsequently classified by type. The quality of the website was determined by the Health On the Net (HON) score, an objective metric based upon recommendations from the United Nations for the ethical representation of health information. A content analysis was performed by noting specific misleading versus balanced features in each website. A total of 31 unique websites were identified. The majority of the websites (71%) were commercial. Out of a total possible 16 points, the mean HON score among the websites was 8.9 points (SD 2.2 points). The number of misleading features was significantly higher than the balanced features (p < 0.001). Fifty-eight percent of sites used anecdotal testimonials to promote the product. Only small percentages of websites discussed complications, alternatives, or provided accurate medical outcomes. Overall, commercial sites had a greater number of misleading features compared to non-commercial sites (p = 0.01). Websites discussing Kinesio tape are predominantly of poor quality and present misleading, imbalanced information. It is of ever-increasing importance that healthcare providers work to ensure that reliable, balanced, and accurate information be available to Internet users. Level of Evidence: IV.
Botsis, T.; Woo, E. J.; Ball, R.
2013-01-01
Background We previously demonstrated that a general purpose text mining system, the Vaccine adverse event Text Mining (VaeTM) system, could be used to automatically classify reports of anaphylaxis for post-marketing safety surveillance of vaccines. Objective To evaluate the ability of VaeTM to classify reports to the Vaccine Adverse Event Reporting System (VAERS) of possible Guillain-Barré Syndrome (GBS). Methods We used VaeTM to extract the key diagnostic features from the text of reports in VAERS. Then, we applied the Brighton Collaboration (BC) case definition for GBS, and an information retrieval strategy (i.e. the vector space model) to quantify the specific information that is included in the key features extracted by VaeTM and compared it with the encoded information that is already stored in VAERS as Medical Dictionary for Regulatory Activities (MedDRA) Preferred Terms (PTs). We also evaluated the contribution of the primary (diagnosis and cause of death) and secondary (second level diagnosis and symptoms) diagnostic VaeTM-based features to the total VaeTM-based information. Results MedDRA captured more information and better supported the classification of reports for GBS than VaeTM (AUC: 0.904 vs. 0.777); the lower performance of VaeTM is likely due to the lack of extraction by VaeTM of specific laboratory results that are included in the BC criteria for GBS. On the other hand, the VaeTM-based classification exhibited greater specificity than the MedDRA-based approach (94.96% vs. 87.65%). Most of the VaeTM-based information was contained in the secondary diagnostic features. Conclusion For GBS, clinical signs and symptoms alone are not sufficient to match MedDRA coding for purposes of case classification, but are preferred if specificity is the priority. PMID:23650490
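The vector space model mentioned above quantifies textual information by turning each report into a term vector and comparing vectors by the cosine of the angle between them. A minimal term-frequency sketch follows (without whatever term weighting the authors may have applied):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Vector space model: each text becomes a term-frequency vector and
    similarity is the cosine of the angle between the two vectors."""
    va = Counter(doc_a.lower().split())
    vb = Counter(doc_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```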
Li, Jingchao; Cao, Yunpeng; Ying, Yulong; Li, Shuying
2016-01-01
Bearing failure is one of the dominant causes of failure and breakdowns in rotating machinery, leading to huge economic losses. Aiming at the nonstationary and nonlinear characteristics of bearing vibration signals, as well as the complexity of the condition-indicating information distribution in the signals, a novel rolling element bearing fault diagnosis method based on multifractal theory and gray relation theory is proposed in this paper. Firstly, a generalized multifractal dimension algorithm was developed to extract the characteristic vectors of fault features from the bearing vibration signals, which can offer more meaningful and distinguishing information reflecting different bearing health statuses in comparison with the conventional single fractal dimension. After feature extraction by multifractal dimensions, an adaptive gray relation algorithm was applied to implement automated bearing fault pattern recognition. The experimental results show that the proposed method can identify various bearing fault types as well as severities effectively and accurately. PMID:28036329
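The generalized (Rényi) multifractal dimension D_q can be estimated for a 1-D signal or point set by box counting over several box sizes and fitting the scaling slope. The sketch below handles q != 1 and is illustrative rather than the authors' algorithm:

```python
import math

def generalized_dimension(points, q, epsilons):
    """Estimate the generalized (Renyi) dimension D_q of a 1-D point set
    by box counting: D_q = (1/(q-1)) * lim log(sum_i p_i^q) / log(eps).
    The limit is approximated by a least-squares slope over the given box
    sizes (requires q != 1)."""
    xs, ys = [], []
    for eps in epsilons:
        counts = {}
        for p in points:
            box = int(p / eps)
            counts[box] = counts.get(box, 0) + 1
        probs = [c / len(points) for c in counts.values()]
        xs.append(math.log(eps))
        ys.append(math.log(sum(p ** q for p in probs)) / (q - 1))
    # Least-squares slope of ys vs xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

Evaluating D_q over a range of q values yields the multifractal spectrum used as the characteristic fault-feature vector.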
Dual Low-Rank Pursuit: Learning Salient Features for Saliency Detection.
Lang, Congyan; Feng, Jiashi; Feng, Songhe; Wang, Jingdong; Yan, Shuicheng
2016-06-01
Saliency detection is an important procedure for machines to understand the visual world as humans do. In this paper, we consider a specific saliency detection problem of predicting human eye fixations when they freely view natural images, and propose a novel dual low-rank pursuit (DLRP) method. DLRP learns saliency-aware feature transformations by utilizing available supervision information and constructs discriminative bases for effectively detecting human fixation points under the popular low-rank and sparsity-pursuit framework. Benefiting from the embedded high-level information in the supervised learning process, DLRP is able to predict fixations accurately without performing the expensive object segmentation used in previous works. Comprehensive experiments clearly show the superiority of the proposed DLRP method over the established state-of-the-art methods. We also empirically demonstrate that DLRP provides stronger generalization performance across different data sets and inherits the advantages of both the bottom-up- and top-down-based saliency detection methods.
General Anesthetics and Molecular Mechanisms of Unconsciousness
Forman, Stuart A.; Chin, Victor A.
2013-01-01
General anesthetic agents are unique in clinical medicine, because they are the only drugs used to produce unconsciousness as a therapeutic goal. In contrast to older hypotheses that assumed all general anesthetics produce their central nervous system effects through a common mechanism, we outline evidence that general anesthesia represents a number of distinct pharmacological effects that are likely mediated by different neural circuits, and perhaps via different molecular targets. Within the context of this neurobiological framework, we review recent molecular pharmacological and transgenic animal studies. These studies reveal that different groups of general anesthetics, which can be discerned based on their clinical features, produce unconsciousness via distinct molecular targets and therefore via distinct mechanisms. We further postulate that different types of general anesthetics selectively disrupt different critical steps (perhaps in different neuronal circuits) in the processing of sensory information and memory that results in consciousness. PMID:18617817
Crone, Anthony J.; Wheeler, Russell L.
2000-01-01
The USGS is currently leading an effort to compile published geological information on Quaternary faults, folds, and earthquake-induced liquefaction in order to develop an internally consistent database on the locations, ages, and activity rates of major earthquake-related features throughout the United States. This report is the compilation for such features in the Central and Eastern United States (CEUS), which for the purposes of the compilation, is defined as the region extending from the Rocky Mountain Front eastward to the Atlantic seaboard. A key objective of this national compilation is to provide a comprehensive database of Quaternary features that might generate strong ground motion and therefore, should be considered in assessing the seismic hazard throughout the country. In addition to printed versions of regional and individual state compilations, the database will be available on the World-Wide Web, where it will be readily available to everyone. The primary purpose of these compilations and the derivative database is to provide a comprehensive, uniform source of geological information that can be used to complement the other types of data that are used in seismic-hazard assessments. Within our CEUS study area, which encompasses more than 60 percent of the conterminous U.S., we summarize the geological information on 69 features that are categorized into four classes (Class A, B, C, and D) based on what is known about the feature's Quaternary activity. The CEUS contains only 13 features of tectonic origin for which there is convincing evidence of Quaternary activity (Class A features). 
Of the remaining 56 features, 11 require further study in order to confidently define their potential as possible sources of earthquake-induced ground motion (Class B), whereas the remaining features either lack convincing geologic evidence of Quaternary tectonic faulting or have been studied carefully enough to determine that they do not pose a significant seismic hazard (Classes C and D). The correlation between historical seismicity and Quaternary faults and liquefaction features in the CEUS is generally poor, which probably reflects the long return times between successive movements on individual structures. Some Quaternary faults and liquefaction features are located in aseismic areas or where historical seismicity is sparse. These relations indicate that the record of historical seismicity does not identify all potential seismic sources in the CEUS. Furthermore, geological studies of some currently aseismic faults have shown that the faults have generated strong earthquakes in the geologically recent past. Thus, the combination of geological information and seismological data can provide better insight into potential earthquake sources and thereby, contribute to better, more comprehensive seismic-hazard assessments.
Dynamical information encoding in neural adaptation.
Luozheng Li; Wenhao Zhang; Yuanyuan Mi; Dahui Wang; Xiaohan Lin; Si Wu
2016-08-01
Adaptation refers to the general phenomenon that a neural system dynamically adjusts its response property according to the statistics of external inputs. In response to a prolonged constant stimulation, neuronal firing rates always first increase dramatically at the onset of the stimulation; and afterwards, they decrease rapidly to a low level close to background activity. This attenuation of neural activity seems to be contradictory to our experience that we can still sense the stimulus after the neural system is adapted. Thus, it prompts a question: where is the stimulus information encoded during the adaptation? Here, we investigate a computational model in which the neural system employs a dynamical encoding strategy during the neural adaptation: at the early stage of the adaptation, the stimulus information is mainly encoded in the strong independent firings; and as time goes on, the information is shifted into the weak but concerted responses of neurons. We find that short-term plasticity, a general feature of synapses, provides a natural mechanism to achieve this goal. Furthermore, we demonstrate that with balanced excitatory and inhibitory inputs, this correlation-based information can be read out efficiently. The implications of this study on our understanding of neural information encoding are discussed.
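The onset transient and subsequent decay of the firing rate described above can be reproduced by a minimal rate model with a slow adaptation variable. This is a sketch only; the paper's model additionally includes short-term synaptic plasticity and balanced excitatory/inhibitory inputs:

```python
def simulate_adaptation(stim, dt=0.001, tau_a=0.3, g_a=2.0):
    """Minimal firing-rate model with an adaptation current.

    The rate jumps at stimulus onset and then decays toward a low steady
    state as the adaptation variable `a` slowly tracks the rate.
    Parameter values are arbitrary illustrative choices.
    """
    a, rates = 0.0, []
    for s in stim:
        r = max(s - g_a * a, 0.0)        # rate = input minus adaptation drive
        a += dt * (r - a) / tau_a        # adaptation builds up slowly
        rates.append(r)
    return rates
```

Running it on a prolonged constant stimulus shows the characteristic profile: an initial peak followed by a decay toward a low but nonzero plateau, so the stimulus remains implicitly encoded even after the rate has attenuated.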
The hormesis database: the occurrence of hormetic dose responses in the toxicological literature.
Calabrese, Edward J; Blain, Robyn B
2011-10-01
In 2005 we published an assessment of dose responses that satisfied a priori evaluative criteria for inclusion within the relational retrieval hormesis database (Calabrese and Blain, 2005). The database included information on study characteristics (e.g., biological model, gender, age and other relevant aspects, number of doses, dose distribution/range, quantitative features of the dose response, temporal features/repeat measures, and physical/chemical properties of the agents). The 2005 article covered information for about 5000 dose responses; the present article has been expanded to cover approximately 9000 dose responses. This assessment extends and strengthens the conclusion of the 2005 paper that the hormesis concept is broadly generalizable, being independent of biological model, endpoint measured and chemical class/physical agent. It also confirmed the definable quantitative features of hormetic dose responses in which the strong majority of dose responses display maximum stimulation less than twice that of the control group and a stimulatory width that is within approximately 10-20-fold of the estimated toxicological or pharmacological threshold. The remarkable consistency of the quantitative features of the hormetic dose response suggests that hormesis may provide an estimate of biological plasticity that is broadly generalized across plant, microbial and animal (invertebrate and vertebrate) models. Copyright © 2011 Elsevier Inc. All rights reserved.
Cheng, Wei; Ji, Xiaoxi; Zhang, Jie; Feng, Jianfeng
2012-01-01
Accurate classification or prediction of brain state across individual subjects, i.e., healthy or with a brain disorder, is generally more difficult than merely finding group differences. The former must be approached with highly informative and sensitive biomarkers as well as effective pattern classification and feature selection approaches. In this paper, we propose a systematic methodology to discriminate attention deficit hyperactivity disorder (ADHD) patients from healthy controls at the individual level. Multiple neuroimaging markers that have proved to be sensitive features are identified, including multiscale characteristics extracted from blood oxygenation level dependent (BOLD) signals, such as regional homogeneity (ReHo) and the amplitude of low-frequency fluctuations. Functional connectivity derived from Pearson, partial, and spatial correlation is also utilized to reflect abnormal patterns of functional integration, or dysconnectivity syndromes, in the brain. These neuroimaging markers are calculated at either the voxel or the regional level. An advanced feature selection approach is then designed, including a brain-wise association study (BWAS). Using the identified features and proper feature integration, a support vector machine (SVM) classifier achieves a cross-validated classification accuracy of 76.15% across individuals from a large dataset consisting of 141 healthy controls and 98 ADHD patients, with a sensitivity of 63.27% and a specificity of 85.11%. Our results show that the most discriminative features for classification are primarily associated with the frontal and cerebellar regions. The proposed methodology is expected to improve clinical diagnosis and evaluation of treatment for ADHD patients, and to have wider applications in the diagnosis of general neuropsychiatric disorders. PMID:22888314
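The accuracy, sensitivity, and specificity figures reported above follow directly from confusion-matrix counts. A minimal sketch of those metric definitions is given below; the labels are invented toy data, not the study's subjects.

```python
def diagnostic_metrics(y_true, y_pred, positive=1):
    """Accuracy, sensitivity (recall on patients) and specificity (recall
    on controls) computed from binary predictions."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    return {
        "accuracy": (tp + tn) / len(pairs),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# invented labels: 1 = ADHD patient, 0 = healthy control
m = diagnostic_metrics([1, 1, 1, 1, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 1])
print(m)  # {'accuracy': 0.75, 'sensitivity': 0.75, 'specificity': 0.75}
```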
NASA Astrophysics Data System (ADS)
Liu, Zhi-Hao; Chen, Han-Wu
2018-02-01
As we know, the information leakage problem should be avoided in a secure quantum communication protocol. Unfortunately, it is found that this problem does exist in the large-payload bidirectional quantum secure direct communication (BQSDC) protocol (Ye Int. J. Quantum. Inf. 11(5), 1350051 2013), which is based on entanglement swapping between any two Greenberger-Horne-Zeilinger (GHZ) states. To be specific, one half of the information interchanged in this protocol is leaked out unconsciously, without any active attack from an eavesdropper. Afterward, this BQSDC protocol is revised into one without information leakage. It is shown that the improved BQSDC protocol is secure against the general individual attack and has some obvious advantages compared with the original one.
Using enterprise architecture artefacts in an organisation
NASA Astrophysics Data System (ADS)
Niemi, Eetu; Pekkola, Samuli
2017-03-01
As a tool for management and planning, Enterprise Architecture (EA) can potentially align an organisation's business processes, information, information systems and technology towards a common goal, and supply the information required along this journey. However, an explicit view of why, how, when and by whom EA artefacts are used in order to realise this potential has not been established. Utilising features of information systems use studies and data from a case study with 14 EA stakeholder interviews, we identify and describe 15 EA artefact use situations, which are then reflected against the related literature. Their analysis enriches understanding of what EA artefacts are, how, why and when they are used, and results in a theoretical framework for understanding their use in general.
Principal polynomial analysis.
Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus
2014-11-01
This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves instead of straight lines. Contrary to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA has a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, this inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it permits understanding the identified features in the input domain, where the data have physical meaning. It also allows evaluation of the performance of dimensionality reduction in sensible (input-domain) units. Volume preservation also allows easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. And fourth, the analytical Jacobian allows computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, on both synthetic and real datasets from the UCI repository.
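A single PPA deflation step can be sketched in a few lines: project onto the leading principal direction, then fit a univariate polynomial predicting the orthogonal residual from that projection. This is a simplified reading of the method on toy data, not the authors' implementation; the degree-1 case recovers the straight-line (PCA-like) fit.

```python
import numpy as np

def ppa_step(X, degree=2):
    """One deflation step of Principal Polynomial Analysis (sketch):
    project onto the leading principal direction, then fit a univariate
    polynomial predicting the orthogonal residual from that projection.
    degree=1 recovers the straight-line (PCA-like) special case."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v = Vt[0]                        # leading principal direction
    t = Xc @ v                       # 1-D latent coordinate
    R = Xc - np.outer(t, v)          # orthogonal residual
    B = np.vander(t, degree + 1)     # polynomial basis [t^d, ..., t, 1]
    coef, *_ = np.linalg.lstsq(B, R, rcond=None)
    return t, R - B @ coef           # latent coordinate, deflated residual

# toy data on a parabola: a curve explains it, a straight line does not
t0 = np.linspace(-1.0, 1.0, 201)
X = np.column_stack([t0, t0 ** 2])
_, resid_pca = ppa_step(X, degree=1)
_, resid_ppa = ppa_step(X, degree=2)
print(np.abs(resid_pca).max(), np.abs(resid_ppa).max())
```

On this curved dataset the degree-2 residual is essentially zero while the straight-line residual is not, which is the sense in which curves "capture the possibly nonlinear nature of the data."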
Integrated Computational System for Aerodynamic Steering and Visualization
NASA Technical Reports Server (NTRS)
Hesselink, Lambertus
1999-01-01
In February of 1994, an effort from the Fluid Dynamics and Information Sciences Divisions at NASA Ames Research Center with McDonnell Douglas Aerospace Company and Stanford University was initiated to develop, demonstrate, validate and disseminate automated software for numerical aerodynamic simulation. The goal of the initiative was to develop a tri-discipline approach encompassing CFD, Intelligent Systems, and Automated Flow Feature Recognition to improve the utility of CFD in the design cycle. This approach would then be represented through an intelligent computational system which could accept an engineer's definition of a problem and construct an optimal and reliable CFD solution. Stanford University's role focused on developing technologies that advance visualization capabilities for analysis of CFD data, extract specific flow features useful for the design process, and compare CFD data with experimental data. During the years 1995-1997, Stanford University focused on developing techniques in the area of tensor visualization and flow feature extraction. Software libraries were created enabling feature extraction and exploration of tensor fields. As a proof of concept, a prototype system called the Integrated Computational System (ICS) was developed to demonstrate the CFD design cycle. The current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison; such requirements are often a problem for other data-comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched.
This report will (1) briefly review the technologies developed during 1995-1997, (2) describe current technologies in the area of comparison techniques, (3) describe the theory of our new method researched during the grant year, (4) summarize a few of the results, and finally (5) discuss work within the last 6 months that is a direct extension of the grant.
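As an illustration of the topological building blocks such a field comparison rests on, the sketch below classifies a 2-D critical point from the eigenvalues of its Jacobian. This is a generic toy, not the project's code; the classification names and tolerance are standard textbook conventions.

```python
import numpy as np

def classify_critical_point(J, tol=1e-9):
    """Classify a 2-D vector-field critical point from the eigenvalues of
    its Jacobian J: the saddle/node/focus inventory on which topology-based
    field comparison rests."""
    ev = np.linalg.eigvals(J)
    if np.abs(ev.imag).max() > tol:                  # complex pair
        if np.abs(ev.real).max() <= tol:
            return "center"
        return "spiral source" if ev.real.max() > 0 else "spiral sink"
    if ev.real.min() < 0 < ev.real.max():            # real, opposite signs
        return "saddle"
    return "source" if ev.real.max() > 0 else "sink"

print(classify_critical_point(np.array([[1.0, 0.0], [0.0, -1.0]])))   # saddle
print(classify_critical_point(np.array([[0.0, -1.0], [1.0, 0.0]])))   # center
print(classify_critical_point(np.array([[-1.0, 0.0], [0.0, -2.0]])))  # sink
```

A topology-based signature of a field would then be the set of such critical points (and their connections), which is far more compact than the field itself.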
Martin-Duque, J. F.; Godfrey, A.; Diez, A.; Cleaves, E.; Pedraza, J.; Sanz, M.A.; Carrasco, R.M.; Bodoque, J.; Brebbia, C.A.; Martin-Duque, J.F.; Wadhwa, L.C.
2002-01-01
Geo-indicators can help to assess environmental conditions in urban and suburban areas. Such indicators should be meaningful for understanding environmental changes. Drawing on examples from Spanish and American cities, geo-indicators for assessing environmental conditions and changes in urban and suburban areas are proposed. The paper explores two types of geo-indicators. The first type presents general information that can be used to indicate the presence of a broad array of geologic conditions, either favouring or limiting various kinds of land use. The second type of geo-indicator is the one most commonly used, and as a group most easily understood; these are site- and problem-specific, and they are generally used after a problem is identified. Among them, watershed processes, seismicity and physiographic diversity are explained in more detail. A second dimension considered when discussing geo-indicators is the issue of scale. Broad-scale investigations, covering extensive areas, are efficient only at cataloguing general conditions common to much of the area or some outstanding feature within it. This type of information is best used for policy-type decisions. Detailed-scale investigations can provide information about local conditions, but are not efficient at cataloguing vast areas. Information gathered at the detailed level is necessary for project design and construction.
Automatic morphological classification of galaxy images
Shamir, Lior
2009-01-01
We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbor rule in which the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically into spiral, elliptical and edge-on galaxies with an accuracy of ~90% compared with classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
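The two ingredients named above, Fisher scoring and a Fisher-score-weighted nearest-neighbor rule, can be sketched as follows. The toy data are invented two-class points, not galaxy features, and the real system extracts a far larger feature set.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher score per feature: squared separation of the class means
    divided by the summed within-class variances (two-class sketch)."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    var_w = sum(X[y == c].var(axis=0) for c in classes)
    return (means.max(axis=0) - means.min(axis=0)) ** 2 / (var_w + 1e-12)

def wnn_classify(x, X_train, y_train, w):
    """Weighted Nearest Neighbor rule: Fisher scores weight each feature's
    contribution to the distance, then the closest sample's label wins."""
    d2 = ((X_train - x) ** 2 * w).sum(axis=1)
    return y_train[np.argmin(d2)]

# toy set: feature 0 separates the classes, feature 1 is noise
X_train = np.array([[0.0, 5.0], [0.1, -3.0], [1.0, 4.0], [0.9, -5.0]])
y_train = np.array([0, 0, 1, 1])
w = fisher_scores(X_train, y_train)
print(wnn_classify(np.array([0.2, 4.5]), X_train, y_train, w))  # 0
```

Because the noisy feature gets a tiny Fisher score, it barely contributes to the weighted distance, and the query is classified by the informative feature alone.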
DDGui, a new and fast way to analyse DRAGON and DONJON code results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambon, R.; Marleau, G.
2012-07-01
With the greatly increased performance of computers, the results from DRAGON and DONJON have grown in size and complexity. The scroll, copy and paste technique for retrieving results is no longer appropriate. Many in-house scripts, programs and macros have been developed to make data gathering easier. However, the limitation of these solutions is their specificity and the difficulty of exporting them from one place to another. A general tool usable and accessible by everyone was needed. The first bricks of a very fast and intuitive way to analyse DRAGON and DONJON results have been put together in the graphic user interface DDGUI. Based on the extensive ROOT C++ package, the possible features are numerous. For this first version of the software, we have programmed the fundamental tools likely to be the most useful on an everyday basis: viewing the contents of data structures, drawing the geometry, and drawing the flux or power from a DONJON computation. The tests show how quickly the user can get the information needed for a general overview or for more precise analyses. Several other features will be implemented in the near future. (authors)
Fuzzy set methods for object recognition in space applications
NASA Technical Reports Server (NTRS)
Keller, James M.
1991-01-01
Progress on the following tasks is reported: (1) fuzzy set-based decision making methodologies; (2) feature calculation; (3) clustering for curve and surface fitting; and (4) acquisition of images. The general structure for networks based on fuzzy set connectives, which are being used for information fusion and decision making in space applications, is described, along with the structure and training techniques for such networks consisting of generalized means and gamma-operators. The use of other hybrid operators in multicriteria decision making is currently being examined. Numerous classical features of image regions, such as gray-level statistics, edge and curve primitives, texture measures from the co-occurrence matrix, and size and shape parameters, were implemented. Several fractal geometric features, which may have a considerable impact on characterizing cluttered backgrounds such as clouds, dense star patterns, or some planetary surfaces, were used. A new approach to a fuzzy C-shell algorithm is addressed. NASA personnel are in the process of acquiring suitable simulation data and, it is hoped, videotaped actual shuttle imagery. Photographs have been digitized for use in the algorithms. Also, a model of the shuttle was assembled, and a mechanism to orient this model in 3-D and digitize it for experiments on pose estimation is being constructed.
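The generalized-mean connective named above can be sketched directly. The confidences and exponents below are invented for illustration; the project's networks learn the weights and exponents rather than fixing them.

```python
def generalized_mean(values, weights=None, p=1.0):
    """Generalized (power) mean used as a fuzzy aggregation connective:
    large negative p behaves like an intersection (near min), large
    positive p like a union (near max), and p = 1 is the weighted
    arithmetic mean."""
    n = len(values)
    w = weights if weights is not None else [1.0 / n] * n
    return sum(wi * v ** p for wi, v in zip(w, values)) ** (1.0 / p)

conf = [0.9, 0.6, 0.7]                  # detector confidences (invented)
print(generalized_mean(conf, p=-10))    # pessimistic, pulled toward min
print(generalized_mean(conf, p=1))      # plain arithmetic mean
print(generalized_mean(conf, p=10))     # optimistic, pulled toward max
```

Tuning p thus sweeps the operator continuously between conjunctive and disjunctive fusion of evidence, which is what makes the family useful for information fusion.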
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2017-01-01
Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports through expert-driven feature selection with supervised automatic text classification decision models. Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports belong to nine different accident-related causes of death. A master feature vector was prepared by extracting features from the collected autopsy reports using unigrams with lexical categorization. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases, version 10 (ICD-10)] through five automated feature selection schemes, the proposed expert-driven approach, five feature subset sizes, and five machine learning classifiers. Model performance was evaluated using precisionM, recallM, F-measureM, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest evaluation measures (approaching 85% to 90% for most metrics) with a feature subset size of 30. The proposed system also showed an approximately 14% to 16% improvement in overall accuracy compared with the existing techniques and the four baselines. The proposed system is feasible and practical to use for automatic classification of ICD-10-related causes of death from autopsy reports. The proposed system assists pathologists to accurately and rapidly determine the underlying cause of death based on autopsy findings.
Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports.
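The unigram "master feature vector" idea can be sketched as simple term counting over a fixed vocabulary. The vocabulary and report text below are invented for illustration; the actual system uses a much larger lexicon with lexical categorization.

```python
import re
from collections import Counter

def unigram_features(text, vocabulary):
    """Unigram term counts over a fixed vocabulary, the basic idea behind
    a 'master feature vector' for text classification."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    return [counts[term] for term in vocabulary]

vocab = ["fracture", "hemorrhage", "burn", "drowning"]   # invented lexicon
report = "Skull fracture with subdural hemorrhage; fracture of left femur."
print(unigram_features(report, vocab))  # [2, 1, 0, 0]
```

Each report becomes one such fixed-length vector; feature selection then keeps the vocabulary positions most predictive of the cause-of-death label.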
Optimizing surveillance for livestock disease spreading through animal movements
Bajardi, Paolo; Barrat, Alain; Savini, Lara; Colizza, Vittoria
2012-01-01
The spatial propagation of many livestock infectious diseases critically depends on animal movements among premises, so knowledge of movement data may help to detect, manage and control an outbreak. The identification of robust spreading features of the system is, however, hampered by the temporal dimension characterizing population interactions through movements. Traditional centrality measures do not provide relevant information, as results fluctuate strongly in time and outbreak properties depend heavily on geotemporal initial conditions. Focusing on the case study of cattle displacements in Italy, we aim at characterizing livestock epidemics in terms of robust features useful for planning and control, to deal with temporal fluctuations, sensitivity to initial conditions and missing information during an outbreak. Through spatial disease simulations, we detect spreading paths that are stable across different initial conditions, allowing the clustering of the seeds and reducing the epidemic variability. Paths also allow us to identify premises, called sentinels, that have a large probability of being infected and provide critical information on the outbreak origin, as encoded in the clusters. This novel procedure provides a general framework that can be applied to specific diseases, aiding risk assessment analysis and informing the design of optimal surveillance systems. PMID:22728387
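The sentinel idea can be sketched with a toy susceptible-infected simulation over a time-ordered movement list. The network, farm names, and transmission probability below are invented, not the Italian cattle data, and the real study uses far richer spatial simulations.

```python
import random

def si_outbreak(moves, seed_farm, p, rng):
    """One susceptible-infected outbreak over a time-ordered list of
    animal movements (time, src, dst): each movement leaving an infected
    premise infects the destination with probability p."""
    infected = {seed_farm}
    for _, src, dst in sorted(moves):
        if src in infected and rng.random() < p:
            infected.add(dst)
    return infected

def sentinel_scores(moves, farms, seeds, runs=200, p=0.5):
    """Fraction of simulated outbreaks in which each premise ends up
    infected; consistently high-scoring premises are sentinel candidates."""
    rng = random.Random(1)
    hits = {f: 0 for f in farms}
    for _ in range(runs):
        for f in si_outbreak(moves, rng.choice(seeds), p, rng):
            hits[f] += 1
    return {f: hits[f] / runs for f in farms}

# toy network: two possible seed farms both ship through a hub
moves = [(1, "A", "HUB"), (2, "B", "HUB"), (3, "HUB", "C")]
scores = sentinel_scores(moves, ["A", "B", "C", "HUB"], seeds=["A", "B"])
print(scores)  # HUB is hit from either seed; C only downstream of HUB
```

Premises like the hub, which are reached with high probability regardless of where the outbreak started, are exactly the sentinel candidates the abstract describes.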
The geometrical structure of quantum theory as a natural generalization of information geometry
NASA Astrophysics Data System (ADS)
Reginatto, Marcel
2015-01-01
Quantum mechanics has a rich geometrical structure which allows for a geometrical formulation of the theory. This formalism was introduced by Kibble and later developed by a number of other authors. The usual approach has been to start from the standard description of quantum mechanics and identify the relevant geometrical features that can be used for the reformulation of the theory. Here this procedure is inverted: the geometrical structure of quantum theory is derived from information geometry, a geometrical structure that may be considered more fundamental, and the Hilbert space of the standard formulation of quantum mechanics is constructed using geometrical quantities. This suggests that quantum theory has its roots in information geometry.
Improving semantic scene understanding using prior information
NASA Astrophysics Data System (ADS)
Laddha, Ankit; Hebert, Martial
2016-05-01
Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state of the art scene understanding approaches.
Search strategies on the Internet: general and specific.
Bottrill, Krys
2004-06-01
Some of the most up-to-date information on scientific activity is to be found on the Internet; for example, on the websites of academic and other research institutions and in databases of currently funded research studies provided on the websites of funding bodies. Such information can be valuable in suggesting new approaches and techniques that could be applicable in a Three Rs context. However, the Internet is a chaotic medium, not subject to the meticulous classification and organisation of classical information resources. At the same time, Internet search engines do not match the sophistication of search systems used by database hosts. Also, although some offer relatively advanced features, user awareness of these tends to be low. Furthermore, much of the information on the Internet is not accessible to conventional search engines, giving rise to the concept of the "Invisible Web". General strategies and techniques for Internet searching are presented, together with a comparative survey of selected search engines. The question of how the Invisible Web can be accessed is discussed, as well as how to keep up-to-date with Internet content and improve searching skills.
NASA Astrophysics Data System (ADS)
Kim, H. O.; Yeom, J. M.
2014-12-01
Space-based remote sensing in agriculture is particularly relevant to issues such as global climate change, food security, and precision agriculture. Recent satellite missions have opened up new perspectives by offering high spatial resolution, various spectral properties, and fast revisit rates to the same regions. Here, we examine the utility of broadband red-edge spectral information in multispectral satellite image data for classifying paddy rice crops in South Korea. Additionally, we examine how object-based spectral features affect the classification of paddy rice growth stages. For the analysis, two seasons of RapidEye satellite image data were used. The results showed that the broadband red-edge information slightly improved the classification accuracy of the crop condition in heterogeneous paddy rice crop environments, particularly when single-season image data were used. This positive effect appeared to be offset by the multi-temporal image data. Additional texture information brought only a minor improvement or a slight decline, although it is well known to be advantageous for object-based classification in general. We conclude that broadband red-edge information derived from conventional multispectral satellite data has the potential to improve space-based crop monitoring. Because the positive or negative effects of texture features for object-based crop classification could barely be interpreted, the relationships between the textual properties and paddy rice crop parameters at the field scale should be further examined in depth.
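One standard way to exploit the broadband red-edge band on RapidEye-class sensors is a normalized-difference index built from the NIR and red-edge reflectances. Whether the study used exactly this index is not stated, and the reflectance values below are illustrative only.

```python
def red_edge_ndvi(nir, red_edge):
    """Normalized-difference index built from the NIR and red-edge bands,
    a common way to use broadband red-edge reflectance for crops."""
    return (nir - red_edge) / (nir + red_edge)

# reflectance values are illustrative, not from the study
print(red_edge_ndvi(nir=0.45, red_edge=0.25))  # ~0.286
```

Because red-edge reflectance shifts with chlorophyll content and canopy structure, such an index can separate crop conditions that a conventional red/NIR index blurs together.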
Horváth, János; Sussman, Elyse; Winkler, István; Schröger, Erich
2011-01-01
Rare irregular sounds (deviants) embedded into a regular sound sequence have large potential to draw attention to themselves (distraction). It has been previously shown that distraction, as manifested by behavioral response delay, and the P3a and reorienting negativity (RON) event-related potentials, could be reduced when the forthcoming deviant was signaled by visual cues preceding the sounds. In the present study, we investigated the type of information used in the prevention of distraction by manipulating the information content of the visual cues preceding the sounds. Cues could signal the specific variant of the forthcoming deviant, or they could just signal that the next tone was a deviant. We found that stimulus-specific cue information was used in reducing distraction. The results also suggest that early P3a and RON index processes related to the specific deviating stimulus feature, whereas late P3a reflects a general distraction-related process. PMID:21310210
DOE Office of Scientific and Technical Information (OSTI.GOV)
Symons, Christopher T; Arel, Itamar
2011-01-01
Budgeted learning under constraints on both the amount of labeled information and the availability of features at test time pertains to a large number of real-world problems. Ideas from multi-view learning, semi-supervised learning, and even active learning have applicability, but a common framework whose assumptions fit these problem spaces is non-trivial to construct. We leverage ideas from these fields, based on graph regularizers, to construct a robust framework for learning from labeled and unlabeled samples in multiple views that are non-independent and include features that are inaccessible at the time the model would need to be applied. We describe examples of applications that fit this scenario, and we provide experimental results to demonstrate the effectiveness of knowledge carryover from training-only views. As learning algorithms are applied to more complex applications, relevant information can be found in a wider variety of forms, and the relationships between these information sources are often quite complex. The assumptions that underlie most learning algorithms do not readily or realistically permit the incorporation of many of the data sources that are available, despite an implicit understanding that useful information exists in these sources. When multiple information sources are available, they are often partially redundant, highly interdependent, and contain noise as well as other information that is irrelevant to the problem under study. In this paper, we are focused on a framework whose assumptions match this reality, as well as the reality that labeled information is usually sparse. Most significantly, we are interested in a framework that can also leverage information in scenarios where many features that would be useful for learning a model are not available when the resulting model will be applied. As with constraints on labels, there are many practical limitations on the acquisition of potentially useful features.
A key difference in the case of feature acquisition is that the same constraints often do not pertain to the training samples. This difference provides an opportunity to allow features that are impractical in an applied setting to nevertheless add value during the model-building process. Unfortunately, there are few machine learning frameworks built on assumptions that allow effective utilization of features that are only available at training time. In this paper we formulate a knowledge carryover framework for the budgeted learning scenario with constraints on features and labels. The approach is based on multi-view and semi-supervised learning methods that use graph-encoded regularization. Our main contributions are the following: (1) we propose and provide justification for a methodology for ensuring that changes in the graph regularizer using alternate views are performed in a manner that is target-concept specific, allowing value to be obtained from noisy views; and (2) we demonstrate how this general set-up can be used to effectively improve models by leveraging features unavailable at test time. The rest of the paper is structured as follows. In Section 2, we outline real-world problems to motivate the approach and describe relevant prior work. Section 3 describes the graph construction process and the learning methodologies that are employed. Section 4 provides preliminary discussion regarding theoretical motivation for the method. In Section 5, the effectiveness of the approach is demonstrated in a series of experiments employing modified versions of two well-known semi-supervised learning algorithms. Section 6 concludes the paper.
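A minimal concrete instance of graph-encoded regularization is harmonic-function label propagation, where unlabeled nodes repeatedly take the weighted average of their neighbours. This is a generic semi-supervised sketch, not the authors' multi-view framework; the graph and labels below are invented.

```python
import numpy as np

def label_propagation(W, labels, iters=200):
    """Harmonic-function label propagation: labeled nodes are clamped and
    each unlabeled node repeatedly takes the weighted average of its
    neighbours, minimising the graph-Laplacian penalty
    sum_ij W_ij * (f_i - f_j)^2."""
    f = np.full(W.shape[0], 0.5)
    for i, y in labels.items():
        f[i] = y
    deg = W.sum(axis=1)
    for _ in range(iters):
        f = (W @ f) / np.maximum(deg, 1e-12)
        for i, y in labels.items():      # clamp the labeled nodes
            f[i] = y
    return f

# chain graph 0-1-2-3-4 with node 0 labeled 0.0 and node 4 labeled 1.0
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
f = label_propagation(W, {0: 0.0, 4: 1.0})
print(f)  # interior nodes interpolate toward [0, 0.25, 0.5, 0.75, 1]
```

Training-only views can then enter by reshaping the edge weights W before propagation, which is the spirit of the knowledge-carryover idea, though the paper's construction is considerably more involved.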
NASA Technical Reports Server (NTRS)
1972-01-01
Information backing up the key features of the manipulator system concept and detailed technical information on the subsystems are presented. Space station assembly and shuttle cargo handling tasks are emphasized in the concept analysis because they involve shuttle berthing, transferring the manipulator boom between shuttle and station, station assembly, and cargo handling. Emphasis is also placed on maximizing commonality in the system areas of manipulator booms, general purpose end effectors, control and display, data processing, telemetry, dedicated computers, and control station design.
The Interaction of Spatial and Object Pathways: Evidence from Balint's Syndrome.
Robertson, L; Treisman, A; Friedman-Hill, S; Grabowecky, M
1997-05-01
An earlier report described a patient (RM) with bilateral parietal damage who showed severe binding problems between shape and color and shape and size (Friedman-Hill, Robertson, & Treisman, 1995). When shown two different-colored letters, RM reported a large number of illusory conjunctions (ICs) combining the shape of one letter with the color of the other, even when he was looking directly at one of them and had as long as 10 sec to respond. The lesions also produced severe deficits in locating and reaching for objects, and difficulty in seeing more than one object at a time, resulting in a neuropsychological diagnosis of Balint's syndrome or dorsal simultanagnosia. The pattern of deficits supported predictions of Treisman's Feature Integration Theory (FIT) that the loss of spatial information would lead to binding errors. They further suggested that the spatial information used in binding depends on intact parietal function. In the present paper we extend these findings and examine other deficits in RM that would be predicted by FIT. We show that: (1) Object individuation is impaired, making it impossible for him correctly to count more than one or two objects, even when he is aware that more are present. (2) Visual search for a target defined by a conjunction of features (requiring binding) is impaired, while the detection of a target defined by a unique feature is not. Search for the absence of a feature (O among Qs) is also severely impaired, while search for the presence (Q among Os) is not. Feature absence can only be detected when all the present features are bound to the nontarget items. (3) RM's deficits cannot be attributed to a general binding problem: binding errors were far more likely with simultaneous presentation where spatial information was required than with sequential presentation where time could be used as the medium for binding. 
(4) Selection for attention was severely impaired, whether it was based on the position of a marker or on some other feature (color). (5) Spatial information seems to exist that RM cannot access, suggesting that feature binding relies on a relatively late stage where implicit spatial information is made explicitly accessible. The data converge to support our conclusions that explicit spatial knowledge is necessary for the perception of accurately bound features, for accurate attentional selection, and for accurate and rapid search for a conjunction of features in a multi-item display. It is obviously necessary for directing attention to spatial locations, but the consequences of impairments in this ability seem also to affect object selection, object individuation, and feature integration. Thus, the functional effects of parietal damage are not limited to the spatial and attentional problems that have long been described in patients with Balint's syndrome. Damage to parietal areas also affects object perception through damage to spatial representations that are fundamental for spatial awareness.
Detection of genomic rearrangements in cucumber using genomecmp software
NASA Astrophysics Data System (ADS)
Kulawik, Maciej; Pawełkowicz, Magdalena Ewa; Wojcieszek, Michał; Pląder, Wojciech; Nowak, Robert M.
2017-08-01
Comparative genomics, driven by the growing amount of genome sequence information available in databases, is a rapidly evolving science. A simple comparison of the general features of genomes, such as genome size, number of genes, and chromosome number, presents an entry point into comparative genomic analysis. Here we present the utility of the new tool genomecmp for finding rearrangements across the compared sequences, and its applications in plant comparative genomics.
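A minimal sketch of one way rearrangements can be detected between two sequences sharing common markers: counting adjacency breakpoints in gene order. This is an illustrative toy, not the genomecmp algorithm itself, and the marker names are hypothetical.

```python
# Sketch: detect rearrangement breakpoints between two genomes represented
# as ordered lists of shared marker (gene) IDs. Illustrative only.

def breakpoints(genome_a, genome_b):
    """Count adjacencies in genome_a that are absent from genome_b.

    An adjacency (x, y) is preserved if x and y are also neighbours in
    genome_b, in either orientation.
    """
    adj_b = set()
    for x, y in zip(genome_b, genome_b[1:]):
        adj_b.add((x, y))
        adj_b.add((y, x))
    return sum(1 for x, y in zip(genome_a, genome_a[1:]) if (x, y) not in adj_b)

# A single inversion of the segment [C, D] creates two breakpoints.
ref = ["A", "B", "C", "D", "E"]
alt = ["A", "B", "D", "C", "E"]
print(breakpoints(ref, alt))  # 2
```

Identical gene orders give zero breakpoints; each inversion, transposition, or translocation boundary adds one.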
NASA Astrophysics Data System (ADS)
Wang, Xiao; Burghardt, Dirk
2018-05-01
This paper presents a new strategy for the generalization of discrete area features that uses a stroke-based grouping method and polarization transportation selection. The strokes are constructed from a refined proximity graph of the area features; the refinement is controlled by four constraints to meet different grouping requirements. Area features that belong to the same stroke are assigned to the same group. The stroke-based strategy decomposes the generalization process into two sub-processes according to whether or not the area features are related to strokes. Area features that belong to the same stroke normally present a linear-like pattern, and to preserve this kind of pattern, typification is chosen as the operator for the generalization work. The remaining area features, which are not related by strokes, are distributed randomly and discretely, and selection is chosen as the generalization operation. To retain their original distribution characteristics, a Polarization Transportation (PT) method is introduced to implement the selection operation. Buildings and lakes are selected as representatives of artificial and natural area features, respectively, for the experiments. The generalized results indicate that by adopting the proposed strategy, the original distribution characteristics of the building and lake data can be preserved, and the visual perception is preserved as before.
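The stroke construction above builds on a proximity graph of area features. A minimal sketch of the grouping idea, assuming centroid points and a single distance threshold in place of the paper's four refinement constraints:

```python
# Sketch of grouping discrete area features by proximity: centroids closer
# than a threshold are linked, and connected components (via union-find)
# form candidate groups. The threshold and centroid representation are
# illustrative assumptions, not the paper's stroke constraints.
import math

def group_by_proximity(centroids, threshold):
    n = len(centroids)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(centroids[i], centroids[j]) <= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Three buildings in a row form one group; an isolated building stands alone.
pts = [(0, 0), (10, 0), (20, 0), (100, 100)]
print(group_by_proximity(pts, 15))  # [[0, 1, 2], [3]]
```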
Clenbuterol toxicity: a NSW poisons information centre experience.
Brett, Jonathan; Dawson, Andrew H; Brown, Jared A
2014-03-03
To describe the epidemiology and toxicity of clenbuterol in exposures reported to the NSW Poisons Information Centre (NSWPIC). Retrospective observational study analysing data from all calls about clenbuterol exposure recorded in the NSWPIC database from 1 January 2004 to 31 December 2012. The NSWPIC covers the Australian jurisdictions New South Wales, Tasmania and the Australian Capital Territory 24 hours a day and provides after-hours cover for the rest of Australia for 7 nights each fortnight. Total number of exposures, source of call (hospital, health care worker, member of the public), time from exposure to call, reasons for drug use, clinical features and advice given. Callers reported 63 exposures to clenbuterol, with a dramatic increase from three in 2008 to 27 in 2012. Of the 63 calls, 35 were from hospital, two from paramedics, one from general practice and 21 direct from the public. At least 53 patients (84%) required hospitalisation. The commonest reasons for use were bodybuilding and slimming. The most common features were tachycardia (24 patients), gastrointestinal disturbance (16) and tremor (11). Exposure was also associated with cardiotoxicity including one cardiac arrest in a 21-year-old man. Although a well-recognised doping issue among elite athletes, clenbuterol use has spread to the general public, especially during 2012, and should be considered in patients using bodybuilding or slimming products who present with protracted sympathomimetic features. The potential for misuse of this substance requires reconsideration of its current poison schedule registration and its availability.
How we categorize objects is related to how we remember them: The shape bias as a memory bias
Vlach, Haley A.
2016-01-01
The “shape bias” describes the phenomenon that, after a certain point in development, children and adults generalize object categories based upon shape to a greater degree than other perceptual features. The focus of research on the shape bias has been to examine the types of information that learners attend to in one moment in time. The current work takes a different approach by examining whether learners' categorical biases are related to their retention of information across time. In three experiments, children's (N = 72) and adults' (N = 240) memory performance for features of objects was examined in relation to their categorical biases. The results of these experiments demonstrated that the number of shape matches chosen during the shape bias task significantly predicted shape memory. Moreover, children and adults with a shape bias were more likely to remember the shape of objects than they were the color and size of objects. Taken together, this work suggests the development of a shape bias may engender better memory for shape information. PMID:27454236
Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team 1998
NASA Technical Reports Server (NTRS)
Hesselink, Lambertus
1999-01-01
The intent of Stanford University's SciVis group is to develop technologies that enable comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available under the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature-based comparisons such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison. This is often a problem with many data comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) describe the theory of our new method, and finally (3) summarize a few of the results.
Layered Deposits and Pitted Terrain in the Circum Hellas Region
NASA Technical Reports Server (NTRS)
Moore, J. M.; Howard, A. D.
2005-01-01
Much of the southern highlands has been mantled since the Noachian, including a general blanket of possibly airfall-derived sediment that softens the landscape, the Electris mantle with its knobby chaos in several basins, and a variety of deposits, the subject of this study, that are generally confined to basins and crater floors and that manifest irregular interior depressions. Many of these features occur in a zone surrounding Hellas. These deposits share the general characteristics of having fairly smooth, nearly planar surfaces and abrupt scarps bordering interior and marginal depressions. Despite these common characteristics, a wide range of morphologies occurs. Several end-members are discussed below. Additional information is included in the original extended abstract.
Decoding Articulatory Features from fMRI Responses in Dorsal Speech Regions.
Correia, Joao M; Jansma, Bernadette M B; Bonte, Milene
2015-11-11
The brain's circuitry for perceiving and producing speech may show a notable level of overlap that is crucial for normal development and behavior. The extent to which sensorimotor integration plays a role in speech perception remains highly controversial, however. Methodological constraints related to experimental designs and analysis methods have so far prevented the disentanglement of neural responses to acoustic versus articulatory speech features. Using a passive listening paradigm and multivariate decoding of single-trial fMRI responses to spoken syllables, we investigated brain-based generalization of articulatory features (place and manner of articulation, and voicing) beyond their acoustic (surface) form in adult human listeners. For example, we trained a classifier to discriminate place of articulation within stop syllables (e.g., /pa/ vs /ta/) and tested whether this training generalizes to fricatives (e.g., /fa/ vs /sa/). This novel approach revealed generalization of place and manner of articulation at multiple cortical levels within the dorsal auditory pathway, including auditory, sensorimotor, motor, and somatosensory regions, suggesting the representation of sensorimotor information. Additionally, generalization of voicing included the right anterior superior temporal sulcus associated with the perception of human voices as well as somatosensory regions bilaterally. Our findings highlight the close connection between brain systems for speech perception and production, and in particular, indicate the availability of articulatory codes during passive speech perception. Sensorimotor integration is central to verbal communication and provides a link between auditory signals of speech perception and motor programs of speech production. It remains highly controversial, however, to what extent the brain's speech perception system actively uses articulatory (motor), in addition to acoustic/phonetic, representations. 
In this study, we examine the role of articulatory representations during passive listening using carefully controlled stimuli (spoken syllables) in combination with multivariate fMRI decoding. Our approach enabled us to disentangle brain responses to acoustic and articulatory speech properties. In particular, it revealed articulatory-specific brain responses of speech at multiple cortical levels, including auditory, sensorimotor, and motor regions, suggesting the representation of sensorimotor information during passive speech perception. Copyright © 2015 the authors.
Gary, Robin H.; Wilson, Zachary D.; Archuleta, Christy-Ann M.; Thompson, Florence E.; Vrabel, Joseph
2009-01-01
During 2006-09, the U.S. Geological Survey, in cooperation with the National Atlas of the United States, produced a 1:1,000,000-scale (1:1M) hydrography dataset comprising streams and waterbodies for the entire United States, including Puerto Rico and the U.S. Virgin Islands, for inclusion in the recompiled National Atlas. This report documents the methods used to select, simplify, and refine features in the 1:100,000-scale (1:100K) (1:63,360-scale in Alaska) National Hydrography Dataset to create the national 1:1M hydrography dataset. Custom tools and semi-automated processes were created to facilitate generalization of the 1:100K National Hydrography Dataset (1:63,360-scale in Alaska) to 1:1M on the basis of existing small-scale hydrography datasets. The first step in creating the new 1:1M dataset was to address feature selection and optimal data density in the streams network. Several existing methods were evaluated. The production method that was established for selecting features for inclusion in the 1:1M dataset uses a combination of the existing attributes and network in the National Hydrography Dataset and several of the concepts from the methods evaluated. The process for creating the 1:1M waterbodies dataset required a similar approach to that used for the streams dataset. Geometric simplification of features was the next step. Stream reaches and waterbodies indicated in the feature selection process were exported as new feature classes and then simplified using a geographic information system tool. The final step was refinement of the 1:1M streams and waterbodies. Refinement was done through the use of additional geographic information system tools.
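The geometric simplification step can be sketched with the classic Douglas-Peucker algorithm, a standard choice in GIS tools; the report does not name the specific algorithm its tool uses, so this is an illustrative assumption.

```python
# Douglas-Peucker line simplification: keep a vertex only if it deviates
# from the chord between the endpoints by more than a tolerance.
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    if a == b:
        return math.dist(p, a)
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    return num / math.dist(a, b)

def simplify(points, tol):
    if len(points) < 3:
        return list(points)
    dmax, idx = max(
        (point_line_dist(p, points[0], points[-1]), i)
        for i, p in enumerate(points[1:-1], start=1)
    )
    if dmax <= tol:
        return [points[0], points[-1]]          # drop all interior vertices
    left = simplify(points[:idx + 1], tol)      # recurse on both halves
    return left[:-1] + simplify(points[idx:], tol)

line = [(0, 0), (1, 0.05), (2, -0.05), (3, 0)]
print(simplify(line, 0.2))  # [(0, 0), (3, 0)]
```

Endpoints are always retained, so network connectivity at stream confluences is preserved if reaches are simplified one at a time.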
Prediction of hot spots in protein interfaces using a random forest model with hybrid features.
Wang, Lin; Liu, Zhi-Ping; Zhang, Xiang-Sun; Chen, Luonan
2012-03-01
Prediction of hot spots in protein interfaces provides crucial information for research on protein-protein interaction and drug design. Existing machine learning methods generally judge whether a given residue is likely to be a hot spot by extracting features only from the target residue. However, hot spots usually form a small cluster of residues which are tightly packed together at the center of the protein interface. With this in mind, we present a novel method to extract hybrid features which incorporate a wide range of information about the target residue and its spatially neighboring residues, i.e. the nearest contact residue in the other face (mirror-contact residue) and the nearest contact residue in the same face (intra-contact residue). We provide a novel random forest (RF) model to effectively integrate these hybrid features for predicting hot spots in protein interfaces. Our method achieves an accuracy (ACC) of 82.4% and a Matthews correlation coefficient (MCC) of 0.482 on the Alanine Scanning Energetics Database, and an ACC of 77.6% and an MCC of 0.429 on the Binding Interface Database. In a comparison study, the performance of our RF model exceeds that of other existing methods, such as Robetta, FOLDEF, KFC, KFC2, MINERVA and HotPoint. Of our hybrid features, three physicochemical features of target residues (mass, polarizability and isoelectric point), the relative side-chain accessible surface area and the average depth index of mirror-contact residues are found to be the main discriminative features in hot spots prediction. We also confirm that hot spots tend to form large contact surface areas between two interacting proteins. Source data and code are available at: http://www.aporc.org/doc/wiki/HotSpot.
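The hybrid-feature construction described above can be sketched as follows; the residue descriptors, coordinates, and property values are hypothetical, and the random forest itself is omitted.

```python
# Sketch: the feature vector for a target residue concatenates its own
# descriptors with those of its nearest contact residue on the partner face
# (mirror-contact) and on the same face (intra-contact). All numbers are
# hypothetical placeholders for properties like mass and polarizability.
import math

def make_res(xyz, props):
    return {"xyz": xyz, "props": props}

def nearest(target, candidates):
    return min(candidates, key=lambda r: math.dist(target["xyz"], r["xyz"]))

def hybrid_features(target, same_face, other_face):
    mirror = nearest(target, other_face)
    intra = nearest(target, [r for r in same_face if r is not target])
    return target["props"] + mirror["props"] + intra["props"]

face_a = [make_res((0, 0, 0), [75.1, 0.14]), make_res((3, 0, 0), [89.1, 0.05])]
face_b = [make_res((0, 1, 0), [132.1, 0.21])]
print(hybrid_features(face_a[0], face_a, face_b))
# [75.1, 0.14, 132.1, 0.21, 89.1, 0.05]
```

The resulting vectors would then be fed to any classifier; the study uses a random forest.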
NASA Astrophysics Data System (ADS)
Ressel, Rudolf; Singha, Suman; Lehner, Susanne
2016-08-01
Arctic sea ice monitoring has attracted increasing attention over the last few decades. Besides the scientific interest in sea ice, the operational aspect of ice charting is becoming more important due to growing navigational possibilities in an increasingly ice-free Arctic. For this purpose, satellite-borne SAR imagery has become an invaluable tool. In the past, mostly single-polarimetric datasets were investigated with supervised or unsupervised classification schemes for sea ice investigation. Despite proven sea ice classification achievements on single-polarimetric data, a fully automatic, general-purpose classifier for single-pol data has not been established, due to the large variation of sea ice manifestations and the impact of incidence angle. Recently, through the advent of polarimetric SAR sensors, polarimetric features have moved into the focus of ice classification research. The higher information content of four polarimetric channels promises to offer greater insight into sea ice scattering mechanisms and to overcome some of the shortcomings of single-polarimetric classifiers. Two spatially and temporally coincident pairs of fully polarimetric acquisitions from the TerraSAR-X/TanDEM-X and RADARSAT-2 satellites are investigated. The proposed supervised classification algorithm consists of two steps: the first step comprises feature extraction, the results of which are ingested into a neural network classifier in the second step. Based on the common coherency and covariance matrix, we extract a number of features and analyze their relevance and redundancy by means of mutual information for the purpose of sea ice classification. Coherency-matrix-based features which require an eigendecomposition are found to be either of low relevance or redundant to other covariance-matrix-based features. Among the most useful features for classification are matrix-invariant-based features (Geometric Intensity, Scattering Diversity, Surface Scattering Fraction).
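The relevance and redundancy analysis rests on mutual information between features and class labels. A minimal sketch for discrete (binned) values; the binning scheme and estimator details of the actual study may differ.

```python
# Mutual information I(X;Y) in bits between two discrete sequences.
# Continuous feature values would be discretized into bins first.
import math
from collections import Counter

def mutual_information(xs, ys):
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# A feature identical to the class labels is maximally relevant;
# a constant feature carries no information.
labels = [0, 0, 1, 1]
print(mutual_information(labels, labels))        # 1.0
print(mutual_information([5, 5, 5, 5], labels))  # 0.0
```

Relevance is the mutual information between a feature and the ice-class label; redundancy is the mutual information between two features.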
Cascaded K-means convolutional feature learner and its application to face recognition
NASA Astrophysics Data System (ADS)
Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu
2017-09-01
Currently, considerable efforts have been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, and conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightweight feature learner is presented to solve these problems, with application to face recognition, which shares a similar topology architecture with a convolutional neural network. Our model is divided into three components: a cascaded convolution filters bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filters learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear features. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on Extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and Labeled Faces in the Wild datasets among the comparative methods.
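The filters-bank learning layer can be sketched with K-means on randomly sampled image patches; the patch size, K, iteration count, and the omission of whitening are illustrative simplifications, not the paper's settings.

```python
# K-means cluster centers learned from mean-centred image patches serve as
# convolution filters. Illustrative parameters only.
import numpy as np

def learn_filters(image, patch=5, k=4, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys = rng.integers(0, h - patch, 200)
    xs = rng.integers(0, w - patch, 200)
    patches = np.stack(
        [image[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)]
    )
    patches -= patches.mean(axis=1, keepdims=True)  # per-patch mean removal
    centers = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        # assign each patch to its nearest center, then re-estimate centers
        d = ((patches[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = patches[assign == j].mean(0)
    return centers.reshape(k, patch, patch)

image = np.random.default_rng(1).random((32, 32))
filters = learn_filters(image)
print(filters.shape)  # (4, 5, 5)
```

Each learned filter is then convolved with the input image to produce one feature map, which feeds the tanh nonlinearity and pooling layers.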
Stimulus information contaminates summation tests of independent neural representations of features
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2002-01-01
Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli were chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.
Chan, Louis K H; Hayward, William G
2009-02-01
In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed results are difficult to explain in its absence. The present study measured dimension-specific performance during detection and localization, tasks that require operation of dimensional modules and the master map, respectively. Results showed a dissociation between tasks in terms of both dimension-switching costs and cross-dimension attentional capture, reflecting a dimension-specific nature for detection tasks and a dimension-general nature for localization tasks. In a feature-discrimination task, results precluded an explanation based on response mode. These results are interpreted to support FIT's postulation that different mechanisms are involved in parallel and focal attention searches. This indicates that the FIT architecture should be adopted to explain the current results and that a variety of visual attention findings can be addressed within this framework. Copyright 2009 APA, all rights reserved.
Citing geospatial feature inventories with XML manifests
NASA Astrophysics Data System (ADS)
Bose, R.; McGarva, G.
2006-12-01
Today published scientific papers include a growing number of citations for online information sources that either complement or replace printed journals and books. We anticipate this same trend for cartographic citations used in the geosciences, following advances in web mapping and geographic feature-based services. Instead of using traditional libraries to resolve citations for print material, the geospatial citation life cycle will include requesting inventories of objects or geographic features from distributed geospatial data repositories. Using a case study from the UK Ordnance Survey MasterMap database, which is illustrative of geographic object-based products in general, we propose citing inventories of geographic objects using XML feature manifests. These manifests: (1) serve as a portable listing of sets of versioned features; (2) could be used as citations within the identification portion of an international geospatial metadata standard; (3) could be incorporated into geospatial data transfer formats such as GML; but (4) can be resolved only with comprehensive, curated repositories of current and historic data. This work has implications for any researcher who foresees the need to make or resolve references to online geospatial databases.
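A hypothetical feature manifest of the kind proposed, built with Python's ElementTree; the element and attribute names (manifest, feature, toid, version) are illustrative assumptions modelled on versioned feature identifiers, not a published schema.

```python
# Build a small XML manifest listing versioned geographic features,
# of the kind that could be cited and later resolved against a curated
# repository. Identifiers below are made up for illustration.
import xml.etree.ElementTree as ET

def build_manifest(source, features):
    root = ET.Element("manifest", source=source)
    for toid, version in features:
        ET.SubElement(root, "feature", toid=toid, version=str(version))
    return ET.tostring(root, encoding="unicode")

xml = build_manifest(
    "OS MasterMap",
    [("osgb1000000000000001", 3), ("osgb1000000000000002", 1)],
)
print(xml)
```

Because each feature carries a version, the manifest pins the cited inventory to a specific historical state of the database, which is why resolution requires a curated repository of current and historic data.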
Unsupervised feature relevance analysis applied to improve ECG heartbeat clustering.
Rodríguez-Sotelo, J L; Peluffo-Ordoñez, D; Cuesta-Frau, D; Castellanos-Domínguez, G
2012-10-01
The computer-assisted analysis of biomedical records has become an essential tool in clinical settings. However, current devices provide a growing amount of data that often exceeds the processing capacity of normal computers. As this amount of information rises, new demands for more efficient data extracting methods appear. This paper addresses the task of data mining in physiological records using a feature selection scheme. An unsupervised method based on relevance analysis is described. This scheme uses a least-squares optimization of the input feature matrix in a single iteration. The output of the algorithm is a feature weighting vector. The performance of the method was assessed using a heartbeat clustering test on real ECG records. The quantitative cluster validity measures yielded a correctly classified heartbeat rate of 98.69% (specificity), 85.88% (sensitivity) and 95.04% (general clustering performance), which is even higher than the performance achieved by other similar ECG clustering studies. The number of features was reduced on average from 100 to 18, and the temporal cost was a 43% lower than in previous ECG clustering schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
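The paper's least-squares relevance analysis yields a feature-weighting vector. As an illustrative stand-in (not the paper's exact formulation), this sketch weights each feature by the magnitude of its loading on the leading principal component of the feature matrix, a common unsupervised relevance heuristic.

```python
# Unsupervised feature-relevance weighting: score each feature by its
# loading on the leading eigenvector of the covariance matrix.
# This is a hedged stand-in for the paper's least-squares scheme.
import numpy as np

def relevance_weights(X):
    """X: (n_samples, n_features). Returns non-negative weights summing to 1."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    w = np.abs(vecs[:, -1])            # loading on the dominant component
    return w / w.sum()

rng = np.random.default_rng(0)
informative = rng.normal(0, 5.0, 300)            # high-variance feature
noise = rng.normal(0, 0.1, 300)                  # near-constant feature
X = np.column_stack([informative, noise])
w = relevance_weights(X)
print(w[0] > w[1])  # True
```

Features with low weight would then be discarded before clustering, mirroring the reduction from 100 to about 18 features reported above.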
NASA Astrophysics Data System (ADS)
Hannel, Mark D.; Abdulali, Aidan; O'Brien, Michael; Grier, David G.
2018-06-01
Holograms of colloidal particles can be analyzed with the Lorenz-Mie theory of light scattering to measure individual particles' three-dimensional positions with nanometer precision while simultaneously estimating their sizes and refractive indexes. Extracting this wealth of information begins by detecting and localizing features of interest within individual holograms. Conventionally approached with heuristic algorithms, this image analysis problem can be solved faster and more generally with machine-learning techniques. We demonstrate that two popular machine-learning algorithms, cascade classifiers and deep convolutional neural networks (CNN), can solve the feature-localization problem orders of magnitude faster than current state-of-the-art techniques. Our CNN implementation localizes holographic features precisely enough to bootstrap more detailed analyses based on the Lorenz-Mie theory of light scattering. The wavelet-based Haar cascade proves to be less precise, but is so computationally efficient that it creates new opportunities for applications that emphasize speed and low cost. We demonstrate its use as a real-time targeting system for holographic optical trapping.
Wood, Benjamin A; LeBoit, Philip E
2013-08-01
To study the clinical and pathological features of cases of apparent solar purpura, with attention to the recently described phenomenon of inflammatory changes within otherwise typical lesions. We studied 95 cases diagnosed as solar purpura and identified 10 cases (10.5%) in which significant neutrophilic inflammation was present, potentially simulating a leukocytoclastic vasculitis or neutrophilic dermatosis. An additional three cases were identified in subsequent routine practice. The clinical features, including follow-up for subsequent development of vasculitis and histological features were studied. In all cases the histological features were typical of solar purpura, with the exception of inflammatory changes, typically associated with clefting of elastotic stroma. Clinical follow-up information was available for all patients and none developed subsequent evidence of a cutaneous or systemic vasculitis or neutrophilic dermatosis. Inflammatory changes appear to be more frequent in solar purpura than is generally recognised. Awareness of this histological variation and correlation with the clinical findings and evolution is important in avoiding misdiagnosis.
Image retrieval for identifying house plants
NASA Astrophysics Data System (ADS)
Kebapci, Hanife; Yanikoglu, Berrin; Unal, Gozde
2010-02-01
We present a content-based image retrieval system for plant identification which is intended for providing users with a simple method to locate information about their house plants. A plant image consists of a collection of overlapping leaves and possibly flowers, which makes the problem challenging. We studied the suitability of various well-known color, texture and shape features for this problem, as well as introducing some new ones. The features are extracted from the general plant region that is segmented from the background using the max-flow min-cut technique. Results on a database of 132 different plant images show promise (in about 72% of the queries, the correct plant image is retrieved among the top-15 results).
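The retrieval step can be sketched with a simple color-histogram comparison; histogram intersection and the 4-bin quantization here are illustrative choices, not the paper's exact feature set.

```python
# Rank database images by similarity of their normalized intensity
# histograms to a query, using histogram intersection. Pixel lists
# and database entries are toy placeholders.
def histogram(pixels, bins=4):
    h = [0] * bins
    for v in pixels:               # v: intensity in [0, 256)
        h[min(v * bins // 256, bins - 1)] += 1
    total = sum(h)
    return [c / total for c in h]

def intersection(h1, h2):
    return sum(min(a, b) for a, b in zip(h1, h2))

def rank(query, database):
    q = histogram(query)
    scores = {name: intersection(q, histogram(px)) for name, px in database.items()}
    return sorted(scores, key=scores.get, reverse=True)

db = {"fern": [30, 40, 50, 60], "cactus": [200, 210, 220, 230]}
print(rank([35, 45, 55, 65], db))  # ['fern', 'cactus']
```

In the actual system such color features are computed only over the segmented plant region and combined with texture and shape features before ranking.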
The economics of patient-centered care.
David, Guy; Saynisch, Philip A; Smith-McLallen, Aaron
2018-05-01
The Patient-Centered Medical Home (PCMH) is a widely-implemented model for improving primary care, emphasizing care coordination, information technology, and process improvements. However, its treatment as an undifferentiated intervention in policy evaluation obscures meaningful variation in implementation. This heterogeneity leads to contracting inefficiencies between insurers and practices and may account for mixed evidence on its success. Using a novel dataset we group practices into meaningful implementation clusters and then link these clusters with detailed patient claims data. We find implementation choice affects performance, suggesting that generally-unobserved features of primary care reorganization influence patient outcomes. Reporting these features may be valuable to insurers and their members. Copyright © 2018 Elsevier B.V. All rights reserved.
Network simulation using the simulation language for alternate modeling (SLAM 2)
NASA Technical Reports Server (NTRS)
Shen, S.; Morris, D. W.
1983-01-01
The simulation language for alternate modeling (SLAM 2) is a general purpose language that combines network, discrete event, and continuous modeling capabilities in a single language system. The efficacy of the system's network modeling is examined and discussed. Examples are given of the symbolism that is used, and an example problem and model are derived. The results are discussed in terms of the ease of programming, special features, and system limitations. The system offers many features which allow rapid model development and provides an informative standardized output. The system also has limitations which may cause undetected errors and misleading reports unless the user is aware of these programming characteristics.
Fast method for reactor and feature scale coupling in ALD and CVD
Yanguas-Gil, Angel; Elam, Jeffrey W.
2017-08-08
Transport and surface chemistry of certain deposition techniques are modeled. Methods provide a model of the transport inside nanostructures as a single-particle discrete Markov chain process. This approach decouples the complexity of the surface chemistry from the transport model, thus allowing its application under general surface chemistry conditions, including atomic layer deposition (ALD) and chemical vapor deposition (CVD). Methods provide for the determination of statistical information about the trajectories of individual molecules, such as the average interaction time or the number of wall collisions for molecules entering the nanostructures, as well as for tracking the relative contributions to thin-film growth of different independent reaction pathways at each point of the feature.
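The single-particle Markov-chain view of feature-scale transport can be sketched with an absorbing chain. The toy model below uses illustrative assumptions only (a 1-D trench of three segments, a sticking probability `beta`, equal inward/outward hops) and computes the expected number of wall collisions by solving (I - Q) t = 1 for the transient part Q of the chain.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting: solve A x = b."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Toy 1-D trench with three transient segments. At every wall collision the
# molecule either sticks (probability beta, absorbed) or moves on: interior
# segments hop inward/outward with equal probability, the mouth segment may
# hop out of the trench entirely (escape, also absorbed), and the bottom
# segment bounces back toward the interior if it does not stick.
beta = 0.1
hop = (1 - beta) / 2
Q = [[0.0, hop, 0.0],            # mouth: the outward hop leaves the trench
     [hop, 0.0, hop],            # interior segment
     [0.0, 1 - beta, 0.0]]       # bottom: reflects back unless it sticks
n = 3
A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] for i in range(n)]
# Expected wall collisions before absorption: t = 1 + Q t, i.e. (I - Q) t = 1
collisions = solve(A, [1.0] * n)
print(round(collisions[0], 3))   # expected collisions starting at the mouth
```

The same fundamental-matrix machinery gives interaction times and pathway statistics once Q encodes a realistic feature geometry, which is the decoupling the abstract describes.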
Automated detection of qualitative spatio-temporal features in electrocardiac activation maps.
Ironi, Liliana; Tentoni, Stefania
2007-02-01
This paper describes work aiming at the realization of a tool for the automated interpretation of electrocardiac maps. Such maps can capture a number of electrical conduction pathologies, such as arrhythmia, that can be missed by the analysis of traditional electrocardiograms. However, their introduction into clinical practice is still far away, as their interpretation requires skills that belong to very few experts. Thus, an automated interpretation tool would bridge the gap between the established research outcome and clinical practice, with a consequent great impact on health care. Qualitative spatial reasoning can play a crucial role in the identification of spatio-temporal patterns and salient features that characterize the heart's electrical activity. We adopted the spatial aggregation (SA) conceptual framework and an interplay of numerical and qualitative information to extract features from epicardial maps and to make them available for reasoning tasks. Our focus is on epicardial activation isochrone maps, as they are a synthetic representation of spatio-temporal aspects of the propagation of the electrical excitation. We provide a computational SA-based methodology to extract, from 3D epicardial data gathered over time, (1) the excitation wavefront structure, and (2) the salient features that characterize wavefront propagation and visually correspond to specific geometric objects. The proposed methodology provides a robust and efficient way to identify salient pieces of information in activation time maps. The hierarchical structure of the abstracted geometric objects, crucial in capturing the prominent information, facilitates the definition of general rules necessary to infer the correlation between pathophysiological patterns and wavefront structure and propagation.
Kelly, Debbie M; Bischof, Walter F
2008-10-01
We investigated how human adults orient in enclosed virtual environments, when discrete landmark information is not available and participants have to rely on geometric and featural information on the environmental surfaces. In contrast to earlier studies, where, for women, the featural information from discrete landmarks overshadowed the encoding of the geometric information, Experiment 1 showed that when featural information is conjoined with the environmental surfaces, men and women encoded both types of information. Experiment 2 showed that, although both types of information are encoded, performance in locating a goal position is better if it is close to a geometrically or featurally distinct location. Furthermore, although features are relied upon more strongly than geometry, initial experience with an environment influences the relative weighting of featural and geometric cues. Taken together, these results show that human adults use a flexible strategy for encoding spatial information.
Unraveling hadron structure with generalized parton distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrei Belitsky; Anatoly Radyushkin
2004-10-01
The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities and distribution amplitudes - the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics which encodes full information on a quantum-mechanical system. We give an extensive review of main achievements in the development of this formalism. We discuss physical interpretation and basic properties of generalized parton distributions, their modeling and QCD evolution in the leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photo-production of photons, lepton pairs, or mesons.
A general CPL-AdS methodology for fixing dynamic parameters in dual environments.
Huang, De-Shuang; Jiang, Wen
2012-10-01
The Continuous Point Location with Adaptive d-ary Search (CPL-AdS) strategy is efficient at solving stochastic point location (SPL) problems. However, it has one bottleneck: when the dimension of the feature, i.e., the number of subintervals d into which the interval is divided at each iteration, is large, the decision table for the elimination process becomes practically unavailable. On the other hand, a larger d generally helps the CPL-AdS strategy avoid oscillation and converge faster. This paper presents a generalized universal decision formula to solve this bottleneck. In fact, this decision formula has wider usage beyond SPL problems, such as solving deterministic point location problems and searching data on a Single Instruction Stream-Multiple Data Stream, Concurrent Read Exclusive Write parallel computer model. Meanwhile, we generalize the CPL-AdS strategy with an extended formula capable of tracking an unknown dynamic parameter λ in both informative and deceptive environments. Furthermore, we employed different learning automata in the generalized CPL-AdS method to determine whether a faster learning algorithm leads to a better realization of the method. All of these contributions are important both in theory and in practical applications. Finally, extensive experiments show that our proposed approaches are efficient and feasible.
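A minimal stand-in for the d-ary elimination idea behind CPL-AdS: split the search interval into d subintervals, poll the noisy environment several times per internal boundary, and keep the subinterval bracketed by the majority answers. This is a simplified sketch, not the published decision-table algorithm; `lam`, `p`, and the vote counts are illustrative assumptions.

```python
import random

def spl_dary_search(oracle, d=4, rounds=8, votes=41, lo=0.0, hi=1.0):
    """Estimate an unknown point lam in [lo, hi] from a noisy oracle.

    oracle(x) answers "is lam to the right of x?", but is correct only with
    probability p > 0.5. Each round splits [lo, hi] into d subintervals,
    polls each internal boundary several times, and keeps the subinterval
    bracketed by the majority answers (a majority-vote stand-in for the
    CPL-AdS elimination/decision table).
    """
    for _ in range(rounds):
        step = (hi - lo) / d
        bounds = [lo + step * i for i in range(1, d)]
        right = [sum(oracle(b) for _ in range(votes)) > votes / 2
                 for b in bounds]
        k = sum(right)                 # boundaries judged to be left of lam
        lo, hi = lo + step * k, lo + step * (k + 1)
    return (lo + hi) / 2

random.seed(1)
lam, p = 0.62, 0.8                     # hidden point; oracle accuracy
oracle = lambda x: (lam > x) if random.random() < p else (lam <= x)
est = spl_dary_search(oracle)
print(round(est, 4))
```

With 8 rounds of 4-ary splitting, the interval shrinks by a factor of 4^8, so the estimate lands within about 1.5e-5 of the hidden point as long as every majority vote is correct, which is overwhelmingly likely at 41 votes per boundary.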
12 CFR Appendix M1 to Part 226 - Generic Repayment Estimates
Code of Federal Regulations, 2010 CFR
2010-01-01
... general revolving feature that applies to balances existing before January 1, 2009; a minimum payment formula applicable to a general revolving feature that applies to balances incurred on or after January 1... general revolving feature that applies to balances incurred on or after January 1, 2009, and apply that...
NHDPlusHR: A national geospatial framework for surface-water information
Viger, Roland; Rea, Alan H.; Simley, Jeffrey D.; Hanson, Karen M.
2016-01-01
The U.S. Geological Survey is developing a new geospatial hydrographic framework for the United States, called the National Hydrography Dataset Plus High Resolution (NHDPlusHR), that integrates a diversity of the best-available information, robustly supports ongoing dataset improvements, enables hydrographic generalization to derive alternate representations of the network while maintaining feature identity, and supports modern scientific computing and Internet accessibility needs. This framework is based on the High Resolution National Hydrography Dataset, the Watershed Boundaries Dataset, and elevation from the 3-D Elevation Program, and will provide an authoritative, high precision, and attribute-rich geospatial framework for surface-water information for the United States. Using this common geospatial framework will provide a consistent basis for indexing water information in the United States, eliminate redundancy, and harmonize access to, and exchange of water information.
Who Are the Turkers? A Characterization of MTurk Workers Using the Personality Assessment Inventory.
McCredie, Morgan N; Morey, Leslie C
2018-02-01
As online data collection services such as Amazon's Mechanical Turk (MTurk) gain popularity, the quality and representativeness of such data sources have gained research attention. To date, the majority of existing studies have compared MTurk workers with undergraduate samples, localized community samples, or other Internet-based samples, and thus, there remains little known about the personality and mental health constructs of MTurk workers relative to a national representative sample. The present study addresses these limitations and broadens the scope of existing research through the use of the Personality Assessment Inventory, a multiscale, self-report questionnaire which provides information regarding data validity and personality and psychopathology features standardized against a national U.S. census-matched normative sample. Results indicate that MTurk workers generally provide high-quality data and are reasonably representative of the general population across most psychological dimensions assessed. However, several distinguishing features of MTurk workers emerged that were consistent with prior findings of such individuals, primarily involving somewhat higher negative affect and lower social engagement.
Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.
Goudar, Vishwa; Buonomano, Dean V
2018-03-14
Much of the information the brain processes and stores is temporal in nature: a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds. We show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al.
Road displacement model based on structural mechanics
NASA Astrophysics Data System (ADS)
Lu, Xiuqin; Guo, Qingsheng; Zhang, Yi
2006-10-01
Spatial conflict resolution is an important part of cartographic generalization: it deals with the problem of too much information competing for too little space. Feature displacement is a primary operator of map generalization, aimed at resolving spatial conflicts between neighboring objects, especially road features. Focusing on road objects, this paper presents a displacement approach based on structural mechanics. Spatial conflicts that arise after road symbolization are detected using buffer zones. Within each conflicting region, the finite element method is applied: each triangular element is analyzed, its stiffness matrix is listed, the system equations are assembled and solved with an iteration strategy, and a solution to the road symbol conflicts is obtained. This is repeated until all conflicts in the conflicting regions are resolved; the whole map is then reconsidered, and conflicts are detected again with the buffer zones and resolved by the displacement operator, until all of them are handled.
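The structural-mechanics displacement idea can be sketched in one dimension: treat a road as an elastic chain of vertices, apply a conflict force at one vertex, and solve the stiffness system K d = f so the displacement decays smoothly toward the fixed ends. This is a deliberate simplification of the paper's 2-D triangular finite elements; the chain, stiffness, and force values are illustrative.

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system K x = rhs (sub/diag/sup are K's diagonals)."""
    n = len(diag)
    diag, rhs = diag[:], rhs[:]
    # forward elimination
    for i in range(1, n):
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    # back substitution
    x = [0.0] * n
    x[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / diag[i]
    return x

# Road as an elastic chain: 5 vertices, both ends fixed, unit spring
# stiffness k between neighbours. A symbol conflict pushes the middle free
# vertex with unit force; solving K d = f spreads the displacement smoothly
# along the road instead of kinking it at one point.
k = 1.0
sub  = [0.0, -k, -k]           # sub[0] unused
diag = [2 * k, 2 * k, 2 * k]   # stiffness of the three free vertices
sup  = [-k, -k, 0.0]           # sup[-1] unused
f    = [0.0, 1.0, 0.0]         # conflict force at the middle vertex
d = thomas(sub, diag, sup, f)
print([round(v, 2) for v in d])  # [0.5, 1.0, 0.5]
```

The middle vertex moves the most and its neighbours move half as far, which is the smooth propagation of displacement that makes the mechanics-based approach attractive for roads.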
Electronic nicotine delivery systems: is there a need for regulation?
Trtchounian, Anna; Talbot, Prue
2011-01-01
Electronic nicotine delivery systems (ENDS) purport to deliver nicotine to the lungs of smokers. Five brands of ENDS were evaluated for design features, accuracy and clarity of labelling and quality of instruction manuals and associated print material supplied with products or on manufacturers' websites. ENDS were purchased from online vendors and analysed for various parameters. While the basic design of ENDS was similar across brands, specific design features varied significantly. Fluid contained in cartridge reservoirs readily leaked out of most brands, and it was difficult to assemble or disassemble ENDS without touching nicotine-containing fluid. Two brands had designs that helped lessen this problem. Labelling of cartridges was very poor; labelling of some cartridge wrappers was better than labelling of cartridges. In general, packs of replacement cartridges were better labelled than the wrappers or cartridges, but most packs lacked cartridge content and warning information, and sometimes packs had confusing information. Used cartridges contained fluid, and disposal of nicotine-containing cartridges was not adequately addressed on websites or in manuals. Orders were sometimes filled incorrectly, and safety features did not always function properly. Print and internet material often contained information or made claims for which there is currently no scientific support. Design flaws, lack of adequate labelling and concerns about quality control and health issues indicate that regulators should consider removing ENDS from the market until their safety can be adequately evaluated.
Zeier, Joshua D; Newman, Joseph P
2013-08-01
As predicted by the response modulation model, psychopathic offenders are insensitive to potentially important inhibitory information when it is peripheral to their primary focus of attention. To date, the clearest tests of this hypothesis have manipulated spatial attention to cue the location of goal-relevant versus inhibitory information. However, the theory predicts a more general abnormality in selective attention. In the current study, male prisoners performed a conflict-monitoring task, which included a feature-based manipulation (i.e., color) that biased selective attention toward goal-relevant stimuli and away from inhibitory distracters on some trials but not others. Paralleling results for spatial cuing, feature-based cuing resulted in less distracter interference, particularly for participants with primary psychopathy (i.e., low anxiety). This study also investigated the moderating effect of externalizing on psychopathy. Participants high in psychopathy but low in externalizing performed similarly to primary psychopathic individuals. These results demonstrate that the abnormal selective attention associated with primary psychopathy is not limited to spatial attention but, instead, applies to diverse methods for establishing attentional focus. Furthermore, they demonstrate a novel method of investigating psychopathic subtypes using continuous analyses. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Douglas, Pamela K.; Lau, Edward; Anderson, Ariana; Head, Austin; Kerr, Wesley; Wollner, Margalit; Moyer, Daniel; Li, Wei; Durnhofer, Mike; Bramen, Jennifer; Cohen, Mark S.
2013-01-01
The complex task of assessing the veracity of a statement is thought to activate uniquely distributed brain regions based on whether a subject believes or disbelieves a given assertion. In the current work, we present parallel machine learning methods for predicting a subject's decision response to a given propositional statement based on independent component (IC) features derived from EEG and fMRI data. Our results demonstrate that IC features outperformed features derived from event-related spectral perturbations in any single spectral band, yet achieved accuracy similar to that of all spectral bands combined. We compared our diagnostic IC spatial maps with our conventional general linear model (GLM) results and found that informative ICs had significant spatial overlap with our GLM results, yet also revealed unique regions, such as the amygdala, that were not statistically significant in GLM analyses. Overall, these results suggest that ICs may yield a parsimonious feature set that can be used along with a decision tree structure for interpretation of features used in classifying complex cognitive processes such as belief and disbelief across both fMRI and EEG neuroimaging modalities. PMID:23914164
Illumination invariant feature point matching for high-resolution planetary remote sensing images
NASA Astrophysics Data System (ADS)
Wu, Bo; Zeng, Hai; Hu, Han
2018-03-01
Despite its success with regular close-range and remote-sensing images, the scale-invariant feature transform (SIFT) algorithm is essentially not invariant to illumination differences due to the use of gradients for feature description. In planetary remote sensing imagery, which normally lacks sufficient textural information, salient regions are generally triggered by the shadow effects of keypoints, reducing the matching performance of classical SIFT. Based on the observation of dual peaks in a histogram of the dominant orientations of SIFT keypoints, this paper proposes an illumination-invariant SIFT matching method for high-resolution planetary remote sensing images. First, as the peaks in the orientation histogram are generally aligned closely with the sub-solar azimuth angle at the time of image collection, an adaptive suppression Gaussian function is tuned to level the histogram and thereby alleviate the differences in illumination caused by a changing solar angle. Next, the suppression function is incorporated into the original SIFT procedure for obtaining feature descriptors, which are used for initial image matching. Finally, as the distribution of feature descriptors changes after anisotropic suppression, and the ratio check used for matching and outlier removal in classical SIFT may produce inferior results, this paper proposes an improved matching procedure based on cross-checking and template image matching. The experimental results for several high-resolution remote sensing images from both the Moon and Mars, with illumination differences of 20°-180°, reveal that the proposed method retrieves about 40%-60% more matches than the classical SIFT method. The proposed method is of significance for matching or co-registration of planetary remote sensing images for their synergistic use in various applications. It also has the potential to be useful for flyby and rover images by integrating with the affine invariant feature detectors.
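The adaptive suppression of the dual orientation peaks can be illustrated on a synthetic histogram. The sketch below is an assumption-laden simplification of the paper's method: it attenuates a 36-bin orientation histogram around a given sub-solar azimuth bin (and its opposite, the second shadow peak) with an inverted, circularly wrapped Gaussian; bin counts, `sigma`, and `strength` are invented for illustration.

```python
import math

def suppress_orientations(hist, azimuth_bin, sigma=2.0, strength=0.7):
    """Damp an orientation histogram around the sub-solar azimuth.

    hist: bin counts over 0..360 degrees; azimuth_bin: index of the
    sub-solar direction. Bins near the azimuth and near its opposite
    (the dual peak) are attenuated by an inverted wrapped Gaussian.
    """
    n = len(hist)
    out = []
    for i, h in enumerate(hist):
        # circular distances to the azimuth peak and to the anti-peak
        d1 = min((i - azimuth_bin) % n, (azimuth_bin - i) % n)
        d2 = min((i - azimuth_bin - n // 2) % n,
                 (azimuth_bin + n // 2 - i) % n)
        g = max(math.exp(-d1 ** 2 / (2 * sigma ** 2)),
                math.exp(-d2 ** 2 / (2 * sigma ** 2)))
        out.append(h * (1.0 - strength * g))
    return out

# 36-bin histogram with shadow-induced dual peaks at bins 9 and 27
hist = [10.0] * 36
hist[9], hist[27] = 60.0, 55.0
levelled = suppress_orientations(hist, azimuth_bin=9)
print(round(levelled[9], 1), round(levelled[0], 1))  # 18.0 10.0
```

Both shadow peaks are knocked down toward the background level while bins far from the solar direction are essentially untouched, which is the levelling effect the abstract describes before descriptor computation.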
Reding, Michael E. J.; Chorpita, Bruce F.; Lau, Anna S.; Innes-Gomberg, Debbie
2014-01-01
Evidence-based practice (EBP) attitudes were measured in a sample of Los Angeles County mental health service providers. Three types of data were collected: provider demographic characteristics, attitudes toward EBP in general, and attitudes toward specific EBPs being implemented in the county. Providers could reliably rate characteristics of specific EBPs, and these ratings differed across interventions. Preliminary implementation data indicate that appealing features of an EBP relate to the degree to which providers use it. These findings suggest that assessing EBP-specific attitudes is feasible and may offer implementation-relevant information beyond that gained solely from providers' general attitudes toward EBP. PMID:24166077
Designing a process for executing projects under an international agreement
NASA Technical Reports Server (NTRS)
Mohan, S. N.
2003-01-01
Projects executed under an international agreement require special arrangements in order to operate within confines of regulations issued by the State Department and the Commerce Department. In order to communicate enterprise-level guidance and procedural information uniformly to projects based on interpretations that carry the weight of institutional authority, a process was developed. This paper provides a script for designing processes in general, using this particular process for context. While the context is incidental, the method described is applicable to any process in general. The paper will expound on novel features utilized for dissemination of the procedural details over the Internet following such process design.
What is "Object-Oriented Programming"?
NASA Astrophysics Data System (ADS)
Stroustrup, Bjarne
"Object-Oriented Programming" and "Data Abstraction" have become very common terms. Unfortunately, few people agree on what they mean. I will offer informal definitions that appear to make sense in the context of languages like Ada, C++, Modula-2, Simula67, and Smalltalk. The general idea is to equate "support for data abstraction" with the ability to define and use new types and equate "support for object-oriented programming" with the ability to express type hierarchies. Features necessary to support these programming styles in a general purpose programming language will be discussed. The presentation centers around C++ but is not limited to facilities provided by that language.
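Stroustrup's two definitions can be illustrated in a few lines (shown here in Python rather than the languages discussed in the talk): `Stack` supports data abstraction by defining a new type with a hidden representation, while the `Shape` hierarchy supports object-oriented programming by dispatching on the runtime type. The class names are illustrative, not from the source.

```python
from abc import ABC, abstractmethod

# Data abstraction: Stack is a new user-defined type whose representation
# (a list) is hidden behind a push/pop interface.
class Stack:
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

# Object-oriented programming: a type hierarchy with dispatch on the
# runtime type, so code written against Shape also works for subtypes
# defined later.
class Shape(ABC):
    @abstractmethod
    def area(self):
        ...

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

shapes = [Square(2), Circle(1)]
print([round(s.area(), 2) for s in shapes])  # [4, 3.14]
```

The loop over `shapes` never inspects concrete types, which is exactly the "express type hierarchies" capability the abstract equates with support for object-oriented programming.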
Experimental Measurement-Device-Independent Entanglement Detection
NASA Astrophysics Data System (ADS)
Nawareg, Mohamed; Muhammad, Sadiq; Amselem, Elias; Bourennane, Mohamed
2015-02-01
Entanglement is one of the most puzzling features of quantum theory and of great importance for the new field of quantum information. Determining whether a given state is entangled or not is one of the most challenging open problems of the field. Here we report on the experimental demonstration of measurement-device-independent (MDI) entanglement detection using the witness method for general two-qubit photon polarization systems. In the MDI setting, there is no need to assume perfect implementations or to trust the measurement devices. This experimental demonstration can be generalized for the investigation of properties of quantum systems and for the realization of cryptography and communication protocols.
Freyhult, Eva; Cui, Yuanyuan; Nilsson, Olle; Ardell, David H
2007-10-01
There are at least 21 subfunctional classes of tRNAs in most cells that, despite a very highly conserved and compact common structure, must interact specifically with different cliques of proteins or cause grave organismal consequences. Protein recognition of specific tRNA substrates is achieved in part through class-restricted tRNA features called tRNA identity determinants. In earlier work we used TFAM, a statistical classifier of tRNA function, to show evidence of unexpectedly large diversity among bacteria in tRNA identity determinants. We also created a data reduction technique called function logos to visualize identity determinants for a given taxon. Here we show evidence that determinants for lysylated isoleucine tRNAs are not the same in Proteobacteria as in other bacterial groups including the Cyanobacteria. Consistent with this, the lysylating biosynthetic enzyme TilS lacks a C-terminal domain in Cyanobacteria that is present in Proteobacteria. We present here, using function logos, a map estimating all potential identity determinants generally operational in Cyanobacteria and Proteobacteria. To further isolate the differences in potential tRNA identity determinants between Proteobacteria and Cyanobacteria, we created two new data reduction visualizations to contrast sequence and function logos between two taxa. One, called Information Difference logos (ID logos), shows the evolutionary gain or retention of functional information associated to features in one lineage. The other, Kullback-Leibler divergence Difference logos (KLD logos), shows recruitments or shifts in the functional associations of features, especially those informative in both lineages. We used these new logos to specifically isolate and visualize the differences in potential tRNA identity determinants between Proteobacteria and Cyanobacteria. Our graphical results point to numerous differences in potential tRNA identity determinants between these groups. 
Although more differences in general are explained by shifts in functional association rather than gains or losses, the apparent identity differences in lysylated isoleucine tRNAs appear to have evolved through both mechanisms.
Mobile Apps for Bipolar Disorder: A Systematic Review of Features and Content Quality
Larsen, Mark Erik; Proudfoot, Judith; Christensen, Helen
2015-01-01
Background With continued increases in smartphone ownership, researchers and clinicians are investigating the use of this technology to enhance the management of chronic illnesses such as bipolar disorder (BD). Smartphones can be used to deliver interventions and psychoeducation, supplement treatment, and enhance therapeutic reach in BD, as apps are cost-effective, accessible, anonymous, and convenient. While the evidence-based development of BD apps is in its infancy, there has been an explosion of publicly available apps. However, the opportunity for mHealth to assist in the self-management of BD is only feasible if apps are of appropriate quality. Objective Our aim was to identify the types of apps currently available for BD in the Google Play and iOS stores and to assess their features and the quality of their content. Methods A systematic review framework was applied to the search, screening, and assessment of apps. We searched the Australian Google Play and iOS stores for English-language apps developed for people with BD. The comprehensiveness and quality of information was assessed against core psychoeducation principles and current BD treatment guidelines. Management tools were evaluated with reference to the best-practice resources for the specific area. General app features, and privacy and security were also assessed. Results Of the 571 apps identified, 82 were included in the review. Of these, 32 apps provided information and the remaining 50 were management tools including screening and assessment (n=10), symptom monitoring (n=35), community support (n=4), and treatment (n=1). Not even a quarter of apps (18/82, 22%) addressed privacy and security by providing a privacy policy. Overall, apps providing information covered a third (4/11, 36%) of the core psychoeducation principles and even fewer (2/13, 15%) best-practice guidelines. Only a third (10/32, 31%) cited their information source. 
Neither comprehensiveness of psychoeducation information (r=-.11, P=.80) nor adherence to best-practice guidelines (r=-.02, P=.96) was significantly correlated with average user ratings. Symptom monitoring apps generally failed to monitor critical information such as medication (20/35, 57%) and sleep (18/35, 51%), and the majority of self-assessment apps did not use validated screening measures (6/10, 60%). Conclusions In general, the content of currently available apps for BD is not in line with practice guidelines or established self-management principles. Apps also fail to provide important information to help users assess their quality, with most lacking source citation and a privacy policy. Therefore, both consumers and clinicians should exercise caution with app selection. While mHealth offers great opportunities for the development of quality evidence-based mobile interventions, new frameworks for mobile mental health research are needed to ensure the timely availability of evidence-based apps to the public. PMID:26283290
NASA Astrophysics Data System (ADS)
Zhou, Peng; Peng, Zhike; Chen, Shiqian; Yang, Yang; Zhang, Wenming
2018-06-01
As large rotary machines develop toward higher speeds and more integrated performance, their condition monitoring and fault diagnosis become more challenging. Since the time-frequency (TF) pattern of a rotary machine's vibration signal often contains condition information and fault features, methods based on TF analysis have been widely used to address these two problems in industry. This article introduces an effective non-stationary signal analysis method based on the general parameterized time-frequency transform (GPTFT). The GPTFT is achieved by inserting a rotation operator and a shift operator into the short-time Fourier transform. This method can produce a highly concentrated TF pattern with a general kernel. A multi-component instantaneous frequency (IF) extraction method is proposed based on it. The IF of each component is estimated by defining a spectrum concentration index (SCI), and this estimation process is iterated until all components are extracted. Tests on three simulation examples and a real vibration signal demonstrate the effectiveness and superiority of our method.
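The role of a concentration measure in parameter estimation can be sketched with a plain DFT: demodulate a chirp with trial values of a kernel parameter and keep the value that maximizes a simple spectrum concentration index. This toy (single component, noise-free, brute-force search over a hypothetical candidate grid) only illustrates the idea; the GPTFT itself uses a parameterized short-time transform, not a global DFT.

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) discrete Fourier transform (adequate for a tiny demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def sci(spec):
    """Spectrum concentration index: fraction of energy in the peak bin."""
    e = [abs(c) ** 2 for c in spec]
    return max(e) / sum(e)

# Noise-free linear chirp with frequency rising from bin 5: a plain Fourier
# spectrum smears it, but demodulating with the right chirp-rate parameter
# (the rotation-operator idea) concentrates it in a single bin.
n, f0, c = 64, 5, 0.2
sig = [cmath.exp(2j * math.pi * (f0 * t + c * t * t) / n) for t in range(n)]

candidates = [i * 0.05 for i in range(9)]      # trial chirp rates 0 .. 0.4
best = max(candidates, key=lambda r: sci(dft(
    [sig[t] * cmath.exp(-2j * math.pi * r * t * t / n) for t in range(n)])))
print(best)  # 0.2 -- the true chirp rate maximizes the concentration
```

At the true rate the demodulated signal is a pure tone, so the SCI reaches its maximum of 1; every other candidate leaves residual chirp and a smeared spectrum, mirroring how an SCI can steer the choice of transform kernel parameters.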
The importance of internal facial features in learning new faces.
Longmore, Christopher A; Liu, Chang Hong; Young, Andrew W
2015-01-01
For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.
Hoare, Karen J; Mills, Jane; Francis, Karen
2013-01-01
Graduate nurses in general practice became a feature of New Zealand's health care system in 2008, following an expansion of the New Entrant to Practice Programme. General practice in New Zealand comprises general practitioner business owners who employ nursing and administration staff. Practice nurses are an ageing workforce in New Zealand; it is therefore imperative to attract younger nurses into general practice. This paper reports a section of the findings from a constructivist grounded theory study that examined the use of information by practice nurses in New Zealand. Initially, data were collected using the ethnographic techniques of observation and field notation in one general practice. Theoretical sensitivity to the value of role models was heightened by this first phase of data collection. A total of eleven practice nurses from six general practices were interviewed; one practice nurse agreed to a second interview. Five of the interviewees were new graduate nurses and the other six were experienced practice nurses. The grounded theory constructed from this research was reciprocal role modelling, which comprises three categories: becoming willing, realising potential, and becoming a better practitioner. Graduate nurses and experienced practice nurses enter into a relationship of reciprocal role modelling. Becoming willing, the first core category of this grounded theory, features three sub-categories: building respectful relationships, proving yourself, and discerning decision making, which are reported in this paper. Findings from this study may address the reported phenomenon of 'transition shock' among newly graduated nurses in the workplace.
Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong; Fan, Xiaoming
2015-01-01
Drug name recognition (DNR) is a critical step for drug information extraction. Machine learning-based methods have been widely used for DNR with various types of features, such as part-of-speech, word shape, and dictionary features. Current machine learning-based methods usually rely on singleton features, perhaps because combining singleton features into conjunction features produces an explosion of features, many of them noisy. However, singleton features, which capture only one linguistic characteristic of a word, are not sufficient to describe the information needed for DNR when multiple characteristics should be considered. In this study, we explore feature conjunction and feature selection for DNR, which have not previously been reported. We select 8 types of singleton features and combine them into conjunction features in two ways. Then, Chi-square, mutual information, and information gain are used to mine effective features. Experimental results show that feature conjunction and feature selection improve the performance of the DNR system with a moderate number of features, and that our DNR system significantly outperforms the best system in the DDIExtraction 2013 challenge.
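One of the filter criteria named above, mutual information, can be computed directly from feature/label co-occurrence counts. The sketch below scores two hypothetical discrete features against class labels; the toy data and the from-scratch implementation are illustrative, not the authors' pipeline.

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """MI (in bits) between a discrete feature and class labels."""
    n = len(labels)
    joint = Counter(zip(feature, labels))
    pf, pl = Counter(feature), Counter(labels)
    return sum((c / n) * math.log2((c / n) / ((pf[f] / n) * (pl[l] / n)))
               for (f, l), c in joint.items())

labels      = [1, 1, 1, 1, 0, 0, 0, 0]
informative = [1, 1, 1, 1, 0, 0, 0, 0]  # perfectly predicts the label
noisy       = [1, 0, 1, 0, 1, 0, 1, 0]  # independent of the label

print(mutual_information(informative, labels))  # 1.0 bit
print(mutual_information(noisy, labels))        # 0.0 bits
```

In a feature-selection step, features (singleton or conjunction) would be ranked by such a score and the lowest-scoring ones discarded.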
Sequential visibility-graph motifs
NASA Astrophysics Data System (ADS)
Iacovacci, Jacopo; Lacasa, Lucas
2016-04-01
Visibility algorithms transform time series into graphs and encode dynamical information in their topology, paving the way for graph-theoretical time series analysis as well as building a bridge between nonlinear dynamics and network science. In this work we introduce and study the concept of sequential visibility-graph motifs, smaller substructures of n consecutive nodes that appear with characteristic frequencies. We develop a theory to compute in an exact way the motif profiles associated with general classes of deterministic and stochastic dynamics. We find that this simple property is indeed a highly informative and computationally efficient feature capable of distinguishing among different dynamics and robust against noise contamination. We finally confirm that it can be used in practice to perform unsupervised learning, by extracting motif profiles from experimental heart-rate series and being able, accordingly, to disentangle meditative from other relaxation states. Applications of this general theory include the automatic classification and description of physical, biological, and financial time series.
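A minimal version of the visibility-graph idea can be sketched with the horizontal visibility variant, counting the two possible motifs over 3 consecutive nodes. This is an illustrative simplification (the paper treats general visibility graphs and larger motif sizes); the example series is invented.

```python
def hvg_adjacency(series):
    """Horizontal visibility graph: i and j (i < j) are linked iff every value
    strictly between them is lower than min(series[i], series[j])."""
    n = len(series)
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                adj[i][j] = adj[j][i] = True
    return adj

def motif_profile_n3(series):
    """Frequencies of the two sequential HVG motifs on 3 consecutive nodes:
    index 0 = chain (ends not linked), index 1 = triangle (ends linked)."""
    adj = hvg_adjacency(series)
    counts = [0, 0]
    for i in range(len(series) - 2):
        counts[1 if adj[i][i + 2] else 0] += 1
    total = sum(counts)
    return [c / total for c in counts]

print(motif_profile_n3([1, 3, 2, 4, 1, 3, 2, 4]))  # [0.5, 0.5]
```

Different dynamics leave different fingerprints in such motif frequencies, which is what makes the profile usable as a classification feature.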
Environmental statistics with S-Plus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millard, S.P.; Neerchal, N.K.
1999-12-01
The combination of easy-to-use software with easy access to a description of the statistical methods (definitions, concepts, etc.) makes this book an excellent resource. One of the major features of this book is the inclusion of general information on environmental statistical methods and examples of how to implement these methods using the statistical software package S-Plus and the add-in modules Environmental-Stats for S-Plus, S+SpatialStats, and S-Plus for ArcView.
Lessons learned in control center technologies and non-technologies
NASA Technical Reports Server (NTRS)
Hansen, Elaine R.
1991-01-01
Information is given in viewgraph form on the Solar Mesosphere Explorer (SME) Control Center and the Oculometer and Automated Space Interface System (OASIS). Topics covered include SME mission operations functions; technical and non-technical features of the SME control center; general tasks and objects within the Space Station Freedom (SSF) ground system nodes; OASIS-Real Time for the control and monitoring of space systems and subsystems; and OASIS planning, scheduling, and PC architecture.
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Div. of Human Resources.
This briefing report was developed to provide a Senate subcommittee with information concerning certain benefit features of the Federal Employees Health Benefits Program (FEHBP). It compares coverage for selected health benefits in the federal and private sectors for a 6-year period (1980-1985). A description of methodology states that information…
Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images
Pu, Shi; Vosselman, George
2009-01-01
Laser data and optical data have a complementary nature for three-dimensional feature extraction. Efficient integration of the two data sources leads to a more reliable and automated extraction of three-dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close-range images. A building facade's general structure is discovered and established using the planar features from laser data. Strong lines in images are then extracted using the Canny detector and the Hough transform, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenging problems throughout the collaborative reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations of this approach and remaining work are also discussed. PMID:22408539
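The Hough transform step mentioned above can be sketched in its simplest form: each edge pixel votes for all (rho, theta) line parameterizations passing through it, and peaks in the accumulator correspond to strong lines. This is an illustrative from-scratch version on a synthetic edge map, not the paper's pipeline (which would use a full edge detector on real imagery).

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Minimal Hough transform: each edge pixel (x, y) votes into (rho, theta)
    bins, where rho = x*cos(theta) + y*sin(theta)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        for t_idx, theta in enumerate(thetas):
            rho = int(round(x * np.cos(theta) + y * np.sin(theta)))
            acc[rho + diag, t_idx] += 1
    return acc, thetas, diag

# Synthetic edge map with a vertical line at x = 5
edges = np.zeros((20, 20), dtype=bool)
edges[:, 5] = True
acc, thetas, diag = hough_lines(edges)
print(acc.max(), acc[diag + 5, 0])  # all 20 votes land in the (rho=5, theta=0) bin
```

In the facade workflow, the peak lines found this way would then be matched against the model edges derived from the laser-point planes.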
Enhancing facial features by using clear facial features
NASA Astrophysics Data System (ADS)
Rofoo, Fanar Fareed Hanna
2017-09-01
The similarity of features between individuals of the same ethnicity motivated this project. The idea is to extract the features of a clear facial image and impose them on a blurred facial image of the same ethnic origin, as an approach to enhancing the blurred image. A database of clear images was assembled, containing 30 individuals divided equally among five ethnicities: Arab, African, Chinese, European and Indian. Software was built to pre-process the images so that the features of the clear and blurred images were aligned. Features were extracted from a clear facial image, or from a template built from several clear facial images, using the wavelet transform, and were imposed on the blurred image using the inverse wavelet transform. The results of this approach were not satisfactory, as the features rarely all aligned together; in most cases the eyes were aligned but the nose or mouth were not. In a second approach we dealt with the features separately, but in some cases a blocky effect appeared on features because no sufficiently close matching features were available. In general, the small database did not allow the goal results to be achieved, because of the limited number of available individuals. Colour information and feature similarity could be investigated further to achieve better results with a larger database, which would also improve the enhancement process through the availability of closer matches within each ethnicity.
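The wavelet/inverse-wavelet mechanism described above can be sketched at its smallest scale: decompose two signals with a one-level Haar transform, impose the detail (edge) coefficients of the "clear" signal on the approximation of the "blurred" one, and invert. The 1-D toy signals stand in for aligned image rows and are invented; the project itself worked on full 2-D images.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar transform of an even-length signal."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt: perfect reconstruction."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

clear   = np.array([4.0, 2.0, 5.0, 7.0])   # stand-in for a clear-image feature row
blurred = np.array([3.0, 3.0, 6.0, 6.0])   # stand-in for its blurred counterpart

# Impose the clear image's detail content on the blurred image's approximation.
a_blur, _ = haar_dwt(blurred)
_, d_clear = haar_dwt(clear)
enhanced = haar_idwt(a_blur, d_clear)
print(enhanced)
```

The blocky artifacts the abstract reports arise exactly here: when the imposed detail coefficients come from a poorly matching donor, the reconstruction shows seams at coefficient boundaries.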
The Value of Information for Populations in Varying Environments
NASA Astrophysics Data System (ADS)
Rivoire, Olivier; Leibler, Stanislas
2011-04-01
The notion of information pervades informal descriptions of biological systems, but formal treatments face the problem of defining a quantitative measure of information rooted in a concept of fitness, which is itself an elusive notion. Here, we present a model of population dynamics where this problem is amenable to a mathematical analysis. In the limit where any information about future environmental variations is common to the members of the population, our model is equivalent to known models of financial investment. In this case, the population can be interpreted as a portfolio of financial assets and previous analyses have shown that a key quantity of Shannon's communication theory, the mutual information, sets a fundamental limit on the value of information. We show that this bound can be violated when accounting for features that are irrelevant in finance but inherent to biological systems, such as the stochasticity present at the individual level. This leads us to generalize the measures of uncertainty and information usually encountered in information theory.
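The financial-investment equivalence mentioned above has a compact numerical form: for proportional (Kelly) betting, the gain in log-growth rate from using side information exactly equals the mutual information I(X;Y). The joint distribution below is an invented two-state example, not from the paper.

```python
import math
import numpy as np

# Joint distribution p(x, y) of environment state x and side-information signal y
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
px, py = p.sum(axis=1), p.sum(axis=0)

def growth_rate(bets_given_y):
    """Expected log-growth for proportional betting at fair odds o_x = 1/p(x)."""
    return sum(p[x, y] * math.log2(bets_given_y[y][x] / px[x])
               for x in range(2) for y in range(2))

g_blind = growth_rate([px, px])                                 # bet the prior p(x)
g_informed = growth_rate([p[:, y] / py[y] for y in range(2)])   # bet p(x|y)

mutual_info = sum(p[x, y] * math.log2(p[x, y] / (px[x] * py[y]))
                  for x in range(2) for y in range(2))
print(g_informed - g_blind)  # equals I(X;Y), about 0.278 bits
```

The paper's point is that this clean bound can be violated once individual-level stochasticity is included, which is invisible in this portfolio-style calculation.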
NASA Astrophysics Data System (ADS)
Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien
2017-09-01
Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
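The foreground-detection stage that the framework builds on can be sketched as simple background subtraction: pixels that differ from a background model by more than a threshold are flagged as foreground. This is an illustrative stand-in; the paper's framework is designed to plug in different, more robust detection technologies.

```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    """Flag pixels differing from the background model by more than `threshold`."""
    return np.abs(frame.astype(int) - background.astype(int)) > threshold

background = np.full((8, 8), 100, dtype=np.uint8)  # static scene model
frame = background.copy()
frame[2:4, 3:5] = 200                              # a "person" entering the scene

mask = foreground_mask(frame, background)
print(mask.sum())  # 4 foreground pixels
```

Ensemble features (size, shape, motion statistics of such masks over time) would then be extracted from the heterogeneous foreground data and fed to the behavior classifier.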
Photographic monitoring of soiling and decay of roadside walls in central Oxford, England
NASA Astrophysics Data System (ADS)
Thornbush, Mary J.; Viles, Heather A.
2008-12-01
As part of the Environmental Monitoring of Integrated Transport Strategies (EMITS) project, which examined the impact of the Oxford Transport Strategy (OTS) on the soiling and decay of buildings and structures in central Oxford, England, a simple photographic survey of a sample of roadside walls was carried out in 1997, with re-surveys in 1999 and 2003. Thirty photographs were taken each time, covering an area of stonework approximately 30 × 30 cm at 1-1.3 m above pavement level. The resulting images have been used to investigate, both qualitatively and quantitatively, the progression of soiling and decay. Comparison of images by eye reveals a number of minor changes in soiling and decay patterns, but generally indicates stability, except at one site where dramatic superficial damage occurred over 2 years. Quantitative analysis of decay features (concavities resulting from surface blistering, flaking, and scaling), using simple techniques in Adobe Photoshop, shows variable pixel-based size proportions of concavities across the 6 years of survey. Colour images (in Lab Color) generally show a reduced proportion of pixels representing decay features in comparison with black and white (Grayscale) images. The study shows that colour images provide more information, both for general observations of soiling and decay patterns and for segmentation of decay-produced concavities, and that simple repeat photography can reveal useful information about changing patterns of both soiling and decay, although unavoidable variation in external lighting conditions between re-surveys limits the accuracy of change detection.
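The pixel-based proportion measurement described above amounts to thresholding a grayscale image and computing the share of dark pixels. The sketch below reproduces that idea on a synthetic patch; the threshold value and the patch itself are invented for illustration (the study used manual segmentation tools in Photoshop rather than a fixed threshold).

```python
import numpy as np

def concavity_proportion(gray, threshold=60):
    """Proportion of pixels darker than `threshold`, as a stand-in for the
    pixel-based share of shadowed decay concavities in a survey photo."""
    return float((gray < threshold).mean())

# Synthetic 10x10 grayscale patch: light stone with a dark 2x5 flake scar
patch = np.full((10, 10), 180, dtype=np.uint8)
patch[4:6, 0:5] = 30
print(concavity_proportion(patch))  # 0.1
```

Comparing such proportions across the 1997, 1999 and 2003 surveys is what gives the quantitative trend, subject to the lighting-variation caveat the authors note.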
Synoptic evaluation of scale-dependent metrics for hydrographic line feature geometry
Stanislawski, Larry V.; Buttenfield, Barbara P.; Raposo, Paulo; Cameron, Madeline; Falgout, Jeff T.
2015-01-01
Methods of acquisition and feature simplification for vector feature data impact cartographic representations and scientific investigations of these data, and are therefore important considerations for geographic information science (Haunert and Sester 2008). After initial collection, linear features may be simplified to reduce excessive detail or to furnish a reduced-scale version of the features through cartographic generalization (Regnauld and McMaster 2008, Stanislawski et al. 2014). A variety of algorithms exist to simplify linear cartographic features, and all of the methods affect the positional accuracy of the features (Shahriari and Tao 2002, Regnauld and McMaster 2008, Stanislawski et al. 2012). In general, simplification operations are controlled by one or more tolerance parameters that limit the amount of positional change the operation can make to features. Using a single tolerance value can produce varying levels of positional change, depending on the local shape, texture, or geometric characteristics of the original features (McMaster and Shea 1992, Shahriari and Tao 2002, Buttenfield et al. 2010). Consequently, numerous researchers have advocated calibration of simplification parameters to control quantifiable properties of the resulting changes to the features (Li and Openshaw 1990, Raposo 2013, Tobler 1988, Veregin 2000, Buttenfield 1986, 1989). This research identifies relations between local topographic conditions and geometric characteristics of linear features that are available in the National Hydrography Dataset (NHD). The NHD is a comprehensive vector dataset of surface water features within the United States that is maintained by the U.S. Geological Survey (USGS).
In this paper, geometric characteristics of cartographic representations for natural stream and river features are summarized for subbasin watersheds within entire regions of the conterminous United States and compared to topographic metrics. A concurrent processing workflow is implemented using a Linux high-performance computing cluster to simultaneously process multiple subbasins, and thereby complete the work in a fraction of the time required for a single-process environment. In addition, similar metrics are generated for several levels of simplification of the hydrographic features to quantify the effects of simplification over the various landscape conditions. Objectives of this exploratory investigation are to quantify geometric characteristics of linear hydrographic features over the various terrain conditions within the conterminous United States and thereby illuminate relations between stream geomorphological conditions and cartographic representation. The synoptic view of these characteristics over regional watersheds that is afforded through concurrent processing, in conjunction with terrain conditions, may reveal patterns for classifying cartographic stream features into stream geomorphological classes. Furthermore, the synoptic measurement of the amount of change in geometric characteristics caused by the several levels of simplification can enable estimation of tolerance values that appropriately control simplification-induced geometric change of the cartographic features within the various geomorphological classes in the country. Hence, these empirically derived rules or relations could help generate multiscale representations of features through automated generalization that adequately maintain surface drainage variations and patterns reflective of the natural stream geomorphological conditions across the country.
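The tolerance-controlled simplification both records discuss is typified by the Douglas-Peucker algorithm: vertices closer than a tolerance to the chord between retained endpoints are dropped. The sketch below is a standard textbook implementation offered for illustration; the polyline and tolerance are invented, and the USGS workflow may use different simplification operators.

```python
import math

def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping every vertex farther than `tolerance`
    from the chord between the currently retained endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy) or 1.0
    # Perpendicular distance of each interior vertex to the chord
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / length for x, y in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tolerance:
        return [points[0], points[-1]]          # all interior vertices dropped
    left = douglas_peucker(points[: i + 1], tolerance)
    right = douglas_peucker(points[i:], tolerance)
    return left[:-1] + right                    # merge, avoiding the shared vertex

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, tolerance=1.0))
```

Calibrating `tolerance` against measured geometric change per geomorphological class is exactly the kind of empirically derived rule the investigation aims to support.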
The predictive value of general movement tasks in assessing occupational task performance.
Frost, David M; Beach, Tyson A C; McGill, Stuart M; Callaghan, Jack P
2015-01-01
Within the context of evaluating individuals' movement behavior it is generally assumed that the tasks chosen will predict their competency to perform activities relevant to their occupation. This study sought to examine whether a battery of general tasks could be used to predict the movement patterns employed by firefighters to perform select job-specific skills. Fifty-two firefighters performed a battery of general and occupation-specific tasks that simulated the demands of firefighting. Participants' peak lumbar spine and frontal plane knee motion were compared across tasks. During 85% of all comparisons, the magnitude of spine and knee motion was greater during the general movement tasks than observed during the firefighting skills. Certain features of a worker's movement behavior may be exhibited across a range of tasks. Therefore, provided that a movement screen's tasks expose the motions of relevance for the population being tested, general evaluations could offer valuable insight into workers' movement competency or facilitate an opportunity to establish an evidence-informed intervention.
The geometrical structure of quantum theory as a natural generalization of information geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reginatto, Marcel
2015-01-13
Quantum mechanics has a rich geometrical structure which allows for a geometrical formulation of the theory. This formalism was introduced by Kibble and later developed by a number of other authors. The usual approach has been to start from the standard description of quantum mechanics and identify the relevant geometrical features that can be used for the reformulation of the theory. Here this procedure is inverted: the geometrical structure of quantum theory is derived from information geometry, a geometrical structure that may be considered more fundamental, and the Hilbert space of the standard formulation of quantum mechanics is constructed using geometrical quantities. This suggests that quantum theory has its roots in information geometry.
GFFview: A Web Server for Parsing and Visualizing Annotation Information of Eukaryotic Genome.
Deng, Feilong; Chen, Shi-Yi; Wu, Zhou-Lin; Hu, Yongsong; Jia, Xianbo; Lai, Song-Jia
2017-10-01
Owing to the wide application of RNA sequencing (RNA-seq) technology, more and more eukaryotic genomes have been extensively annotated, covering for example gene structure, alternative splicing, and noncoding loci. Genome annotation information is commonly stored as plain text in General Feature Format (GFF), which can reach hundreds or thousands of megabytes in size. Manipulating GFF files is therefore a challenge for biologists without bioinformatics skills. In this study, we provide a web server (GFFview) for parsing the annotation information of a eukaryotic genome and generating statistical descriptions of six indices for visualization. GFFview is very useful for investigating the quality of, and differences between, de novo assembled transcriptomes in RNA-seq studies.
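The parsing task GFFview automates reduces, at its core, to reading tab-separated GFF records and tallying feature types from column 3. The sketch below shows that core step on a tiny embedded GFF3 fragment; it is a minimal illustration, not GFFview's implementation, and the record contents are invented.

```python
import io
from collections import Counter

def feature_counts(gff_handle):
    """Count features by type (GFF column 3), skipping comments and blank lines."""
    counts = Counter()
    for line in gff_handle:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        cols = line.split("\t")
        if len(cols) >= 8:                 # seqid..phase present; attributes optional here
            counts[cols[2]] += 1
    return counts

gff = io.StringIO(
    "##gff-version 3\n"
    "chr1\ttest\tgene\t100\t900\t.\t+\t.\tID=gene1\n"
    "chr1\ttest\tmRNA\t100\t900\t.\t+\t.\tID=mrna1;Parent=gene1\n"
    "chr1\ttest\texon\t100\t300\t.\t+\t.\tParent=mrna1\n"
    "chr1\ttest\texon\t500\t900\t.\t+\t.\tParent=mrna1\n"
)
print(feature_counts(gff))  # gene: 1, mRNA: 1, exon: 2
```

Statistics like these per-type counts are the kind of index a viewer can then summarize and plot for a whole genome annotation.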
General Mode Scanning Probe Microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somnath, Suhas; Jesse, Stephen
A critical part of SPM measurements is the information transfer from the probe-sample junction to the measurement system. Current information transfer methods heavily compress the information-rich data stream by averaging the data over a time interval, or via heterodyne detection approaches such as lock-in amplifiers and phase-locked loops. As a consequence, highly valuable information at sub-microsecond time scales, or from frequencies outside the measurement band, is lost. We have developed a fundamentally new approach called General Mode (G-mode), in which we capture the complete information stream from the detectors in the microscope. The availability of the complete information allows the microscope operator to analyze the data via information-theory analysis or comprehensive physical models. Furthermore, the complete data stream enables advanced data-driven filtering algorithms, multi-resolution imaging, ultrafast spectroscopic imaging, spatial mapping of multidimensional variability in material properties, etc. Though we applied this approach to scanning probe microscopy, the general philosophy of G-mode can be applied to many other modes of microscopy. G-mode data is captured by fully custom software written in LabVIEW and Matlab. The software generates the waveforms to electrically, thermally, or mechanically excite the SPM probe. It handles real-time communications with the microscope software for operations such as moving the SPM probe position, and also controls other instrumentation hardware. The software controls multiple variants of high-speed data acquisition cards to excite the SPM probe with the excitation waveform and simultaneously measure multiple channels of information from the microscope detectors at sampling rates of 1-100 MHz. It also saves the raw data to the computer and allows the microscope operator to visualize processed or filtered data during the experiment. The software performs all these features while offering a user-friendly interface.
GEOGRAPHIC NAMES INFORMATION SYSTEM (GNIS) ...
The Geographic Names Information System (GNIS), developed by the U.S. Geological Survey in cooperation with the U.S. Board on Geographic Names (BGN), contains information about physical and cultural geographic features in the United States and associated areas, both current and historical, but not including roads and highways. The database also contains geographic names in Antarctica. The database holds the Federally recognized name of each feature and defines the location of the feature by state, county, USGS topographic map, and geographic coordinates. Other feature attributes include names or spellings other than the official name, feature designations, feature class, historical and descriptive information, and for some categories of features the geometric boundaries. The database assigns a unique feature identifier, a random number, that is a key for accessing, integrating, or reconciling GNIS data with other data sets. The GNIS is our Nation's official repository of domestic geographic feature names information.
An Integrated Account of Generalization across Objects and Features
ERIC Educational Resources Information Center
Kemp, Charles; Shafto, Patrick; Tenenbaum, Joshua B.
2012-01-01
Humans routinely make inductive generalizations about unobserved features of objects. Previous accounts of inductive reasoning often focus on inferences about a single object or feature: accounts of causal reasoning often focus on a single object with one or more unobserved features, and accounts of property induction often focus on a single…
[General features of the patient-physician relationship].
Baeza, H; Bueno, G
1997-03-01
Communication between physicians and patients is often deficient: little time is devoted to it, and the patient receives scanty information with low emotional content. Some features of our medicine explain this situation. The rationalist, mechanistic biological model admits only what can be studied with the scientific method; psychological, social and spiritual aspects are set aside. It looks only at the material aspects of people, limiting communication. Patients express their symptoms in an emotional way, entangled with beliefs and fears; the physician converts them into a precise, scientific, measurable and rational medical logic. This language is not understood by patients, creating hesitancy in the communication. Paternalism is based on the power that physicians hold over patients: we dispense knowledge and ask the patient to submit to and accept our authority. The patient loses the moral right to be informed, to ask questions, to have doubts or to disagree. Our personal communication is almost always formal, unemotional and without explanations, further limiting communication.
Mapping cattle trade routes in southern Somalia: a method for mobile livestock keeping systems.
Tempia, S; Braidotti, F; Aden, H H; Abdulle, M H; Costagli, R; Otieno, F T
2010-12-01
The Somali economy is the only one in the world in which more than half the population is dependent on nomadic pastoralism. Trade typically involves drovers trekking animals over long distances to markets. A pilot approach for mapping trade routes was undertaken, using the Afmadow to Garissa routes in southern Somalia. The methodology included conducting a workshop with traders to gather preliminary information about the most-used routes and general husbandry practices and training selected drovers to collect data about key features along the routes, using hand-held global positioning system (GPS) devices, radio collar GPS and pictorial data forms. Collected data were then integrated into geographic information systems for analysis. The resultant spatial maps describe the Afmadow to Garissa routes, the speed of livestock movement along these routes and relevant environmental and social features affecting this speed. These data are useful for identifying critical control points for health screening along the routes, which may enable the establishment of a livestock certification system in nomadic pastoral environments.
Piepers, Daniel W.; Robbins, Rachel A.
2012-01-01
It is widely agreed that the human face is processed differently from other objects. However there is a lack of consensus on what is meant by a wide array of terms used to describe this “special” face processing (e.g., holistic and configural) and the perceptually relevant information within a face (e.g., relational properties and configuration). This paper will review existing models of holistic/configural processing, discuss how they differ from one another conceptually, and review the wide variety of measures used to tap into these concepts. In general we favor a model where holistic processing of a face includes some or all of the interrelations between features and has separate coding for features. However, some aspects of the model remain unclear. We propose the use of moving faces as a way of clarifying what types of information are included in the holistic representation of a face. PMID:23413184
NASA Astrophysics Data System (ADS)
Bruder, Daniel
2010-11-01
The DC Glow Discharge Exhibit is intended to demonstrate the effects a magnetic field produces on a plasma in a vacuum chamber. The display, which will be featured as a part of The Liberty Science Center's "Energy Quest Exhibition," consists of a DC glow discharge tube and information panels to educate the general public on plasma and its relation to fusion energy. Wall posters and an information booklet will offer brief descriptions of fusion-based science and technology, and will portray plasma's role in the development of fusion as a viable source of energy. The display features a horse-shoe magnet on a movable track, allowing viewers to witness the effects of a magnetic field upon a plasma. The plasma is created from air within a vacuum averaging between 100-200 mTorr. Signage within the casing describes the hardware components. The display is pending delivery to The Liberty Science Center, and will replace a similar, older exhibit presently at the museum.
Hedefalk, Finn; Svensson, Patrick; Harrie, Lars
2017-01-01
This paper presents datasets that enable historical longitudinal studies of micro-level geographic factors in a rural setting. These types of datasets are new, as historical demography studies have generally failed to properly include the micro-level geographic factors. Our datasets describe the geography over five Swedish rural parishes, and by linking them to a longitudinal demographic database, we obtain a geocoded population (at the property unit level) for this area for the period 1813–1914. The population is a subset of the Scanian Economic Demographic Database (SEDD). The geographic information includes the following feature types: property units, wetlands, buildings, roads and railroads. The property units and wetlands are stored in object-lifeline time representations (information about creation, changes and ends of objects are recorded in time), whereas the other feature types are stored as snapshots in time. Thus, the datasets present one of the first opportunities to study historical spatio-temporal patterns at the micro-level. PMID:28398288
Toward intelligent information system
NASA Astrophysics Data System (ADS)
Onodera, Natsuo
"Hypertext" denotes the concept of a novel computer-assisted tool for the storage and retrieval of text information based on human association. The structure of knowledge in our idea processing is generally complicated and networked, but traditional paper documents express it in essentially linear and sequential forms. Recent advances in workstation technology, however, have allowed us to easily process electronic documents containing non-linear structure such as references or hierarchies. This paper describes the concept, history and basic organization of hypertext, and outlines the features of the main existing hypertext systems. In particular, use of a hypertext database is illustrated with the example of Intermedia, developed at Brown University.
NASA Astrophysics Data System (ADS)
Grassi, N.
2005-06-01
In the framework of the extensive study on the wood painting "Madonna dei fusi" attributed to Leonardo da Vinci, Ion Beam Analysis (IBA) techniques were used at the Florence accelerator laboratory to get information about the elemental composition of the paint layers. After a brief description of the basic principle and the general features of IBA techniques, we will illustrate in detail how the analysis allowed us to characterise the pigments of original and restored areas and the substrate composition, and to obtain information about the stratigraphy of the painting, also providing an estimate of the paint layer thickness.
Fortin, Connor H; Schulze, Katharina V; Babbitt, Gregory A
2015-01-01
It is now widely accepted that the DNA sequences defining DNA-protein interactions depend functionally on local biophysical features of the DNA backbone that are important in defining sites of binding interaction in the genome (e.g. DNA shape, charge and intrinsic dynamics). However, these physical features of the DNA polymer are not directly apparent when analyzing and viewing Shannon information content calculated at single nucleobases in a traditional sequence logo plot. Sequence logo plots are thus severely limited in that they convey no explicit information regarding the structural dynamics of the DNA backbone, a feature often critical to binding specificity. We present TRX-LOGOS, an R software package and Perl wrapper code that interfaces the JASPAR database for computational regulatory genomics. TRX-LOGOS extends the traditional sequence logo plot to include Shannon information content calculated with regard to the dinucleotide-based BI-BII conformation shifts in phosphate linkages on the DNA backbone, thereby adding a visual measure of intrinsic DNA flexibility that can be critical for many DNA-protein interactions. TRX-LOGOS is available as an R graphics module offered both at SourceForge and as a download supplement at this journal. To demonstrate the general utility of TRX logo plots, we first calculated the information content for 416 Saccharomyces cerevisiae transcription factor binding sites functionally confirmed in the Yeastract database and matched to previously published yeast genomic alignments. We discovered that flanking regions contain significantly more information content at phosphate linkages than at nucleobases. We also examined broader transcription factor classifications defined by the JASPAR database, and discovered that many general signatures of transcription factor binding are locally more information-rich at the level of DNA backbone dynamics than at the level of nucleobase sequence.
We used TRX-LOGOS in combination with MEGA 6.0 software for molecular evolutionary genetics analysis to visually compare human Forkhead box/FOX protein evolution to its binding site evolution. We also compared the DNA binding signatures of the human TP53 tumor suppressor determined by two different laboratory methods (SELEX and ChIP-seq). Further analysis of the entire yeast genome, center-aligned at the start codon, also revealed a distinct sequence-independent 3 bp periodic pattern in information content, present only in coding regions and perhaps indicative of the non-random organization of the genetic code. TRX-LOGOS is useful in any situation in which important information content in DNA can be better visualized at the positions of phosphate linkages (i.e., dinucleotides), where the dynamic properties of the DNA backbone function to facilitate DNA-protein interaction.
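The per-position Shannon information content behind such logo plots can be sketched directly. Generalizing the column alphabet from single bases (k = 1) to dinucleotide steps (k = 2) mirrors, in a simplified way, how TRX-LOGOS scores phosphate linkages; the example binding sites below are made up for illustration, not drawn from the Yeastract data.

```python
import math
from collections import Counter

def column_information(column, alphabet_size=4):
    """Shannon information content (bits) of one alignment column:
    log2(|alphabet|) minus the column's Shannon entropy."""
    counts = Counter(column)
    total = sum(counts.values())
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return math.log2(alphabet_size) - entropy

def logo_information(seqs, k=1):
    """Per-position information for k-mers: k=1 gives the classic
    nucleobase logo; k=2 scores dinucleotide steps, the positions at
    which TRX-LOGOS adds BI-BII phosphate-linkage information."""
    length = len(seqs[0]) - k + 1
    return [column_information([s[i:i + k] for s in seqs],
                               alphabet_size=4 ** k)
            for i in range(length)]

sites = ["TATAAT", "TATGAT", "TACAAT", "TATAAT"]
print(logo_information(sites, k=1))  # fully conserved columns score 2 bits
print(logo_information(sites, k=2))  # dinucleotide-step information
```

The k = 2 scores live on a 16-letter alphabet (maximum 4 bits per step), which is why backbone-level columns can carry more information than any single nucleobase column.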
NASA Astrophysics Data System (ADS)
Yang, G.; Lin, Y.; Bhattacharya, P.
2007-12-01
To achieve effective and safe operation of systems in which humans and machines interact, the machine needs to understand the human's state, especially the cognitive state, when the operator's task demands intensive cognitive activity. Because human cognitive states, behaviors, and expressions or cues are highly uncertain, the recent trend in inferring the human state is to consider multimodal features of the human operator. In this paper, we present a method for multimodal inference of human cognitive states that integrates neuro-fuzzy network and information fusion techniques. To demonstrate the effectiveness of this method, we take driver fatigue detection as an example. The proposed method has, in particular, the following new features. First, human expressions are classified into four categories: (i) casual or contextual features, (ii) contact features, (iii) contactless features, and (iv) performance features. Second, a fuzzy neural network, in particular the Takagi-Sugeno-Kang (TSK) model, is employed to cope with uncertain behaviors. Third, a sensor fusion technique, in particular ordered weighted aggregation (OWA), is integrated with the TSK model: cues are taken as inputs to the TSK model, and the outputs of the TSK model are then fused by the OWA operator, which produces outputs corresponding to the particular cognitive states of interest (e.g., fatigue). We call this method TSK-OWA. Validation of TSK-OWA, performed in the Northeastern University vehicle driving simulator, has shown that the proposed method is promising as a general tool for inferring human cognitive states and as a specific tool for driver fatigue detection.
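The OWA fusion step can be sketched in a few lines: the sub-model outputs are sorted in descending order and combined with position-based weights, so the weight vector (not the identity of any input) determines whether the operator behaves like a max, a mean, or something in between. The per-cue fatigue scores below are hypothetical stand-ins for TSK sub-model outputs, not values from the paper.

```python
def owa(values, weights):
    """Ordered weighted averaging (OWA): sort the inputs in descending
    order, then take the weighted sum with position-based weights."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("OWA weights must sum to 1")
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# Hypothetical per-cue fatigue scores (eyelid closure, steering
# variance, reaction time), each in [0, 1]:
cues = [0.9, 0.4, 0.7]
print(owa(cues, [1.0, 0.0, 0.0]))      # behaves as max: 0.9
print(owa(cues, [1/3, 1/3, 1/3]))      # behaves as the plain mean
print(owa(cues, [0.5, 0.3, 0.2]))      # optimistic (top-weighted) fusion
```

Because the weights attach to rank positions rather than to specific cues, the same operator smoothly interpolates between "any cue indicates fatigue" and "all cues must agree".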
The informational architecture of the cell.
Walker, Sara Imari; Kim, Hyunju; Davies, Paul C W
2016-03-13
We compare the informational architecture of biological and random networks to identify informational features that may distinguish biological networks from random ones. The study presented here focuses on the Boolean network model for regulation of the cell cycle of the fission yeast Schizosaccharomyces pombe. We compare calculated values of local and global information measures for the fission yeast cell cycle to the same measures applied to two different classes of random networks: Erdős-Rényi and scale-free. We report patterns in local information processing and storage that do indeed distinguish biological networks from random ones, associated with control nodes that regulate the function of the fission yeast cell-cycle network. Conversely, we find that integrated information, which serves as a global measure of 'emergent' information processing, does not differ from random for the case presented. We discuss implications for our understanding of the informational architecture of the fission yeast cell-cycle network in particular, and more generally for illuminating any distinctive physics that may be operative in life. © 2016 The Author(s).
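The Boolean network formalism the study builds on can be sketched in a few lines: each node holds a binary state, all nodes update synchronously by fixed logical rules, and attractors (fixed points or cycles) are found by iterating until a state repeats. The three-node wiring below is a hypothetical illustrative motif, not the actual fission-yeast cell-cycle network.

```python
def step(state, rules):
    """One synchronous update: every node applies its Boolean rule to
    the current global state simultaneously."""
    return tuple(rule(state) for rule in rules)

def trajectory(state, rules, max_steps=50):
    """Iterate until a configuration repeats, revealing the attractor
    (a fixed point or a limit cycle)."""
    seen, path = set(), []
    while state not in seen and len(path) < max_steps:
        seen.add(state)
        path.append(state)
        state = step(state, rules)
    return path, state  # `state` is the first repeated configuration

# Hypothetical 3-node motif (A activates B, B activates C,
# C inhibits A) -- not the fission-yeast wiring:
rules = [
    lambda s: not s[2],  # A is on unless C represses it
    lambda s: s[0],      # B copies A
    lambda s: s[1],      # C copies B
]
path, repeat = trajectory((True, False, False), rules)
print(path, "-> repeats at", repeat)
```

This negative-feedback motif settles into a 6-state limit cycle, the discrete analogue of the oscillatory behavior a cell-cycle network must produce; the paper's information measures are computed over exactly this kind of state trajectory.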
Lin, Kai-Qiang; Yi, Jun; Zhong, Jin-Hui; Hu, Shu; Liu, Bi-Ju; Liu, Jun-Yang; Zong, Cheng; Lei, Zhi-Chao; Wang, Xiang; Aizpurua, Javier; Esteban, Rubén; Ren, Bin
2017-01-01
Surface-enhanced Raman scattering (SERS) spectroscopy has attracted tremendous interest as a highly sensitive, label-free tool. The local field produced by the excitation of localized surface plasmon resonances (LSPRs) dominates the overall enhancement of SERS. This electromagnetic enhancement is unfortunately accompanied by a strong modification of the relative intensities of the original Raman spectrum, which severely distorts the spectral features that carry chemical information. Here we propose a robust method to retrieve the fingerprint of intrinsic chemical information from SERS spectra. The method builds on the finding that the SERS background originates from LSPR-modulated photoluminescence, which contains the same local field information shared by SERS. We validate this concept of retrieving intrinsic fingerprint information in well-controlled single metallic nanoantennas of varying aspect ratios. We further demonstrate its unambiguity and generality in the more complicated systems of tip-enhanced Raman spectroscopy (TERS) and SERS of silver nanoaggregates. PMID:28348368
Quantifying the role of online news in linking conservation research to Facebook and Twitter.
Papworth, S K; Nghiem, T P L; Chimalakonda, D; Posa, M R C; Wijedasa, L S; Bickford, D; Carrasco, L R
2015-06-01
Conservation science needs to engage the general public to ensure successful conservation interventions. Although online technologies such as Twitter and Facebook offer new opportunities to accelerate communication between conservation scientists and the online public, the factors influencing the spread of conservation news in online media are not well understood. We explored the transmission of conservation research through online news articles with generalized linear mixed-effects models and an information-theoretic approach. In particular, we assessed differences in the frequency with which conservation research is featured on online news sites, and the impact of the content and delivery of online conservation news on Facebook likes and shares and Twitter tweets. Five percent of articles in conservation journals are reported in online news, and the probability of reporting depended on the journal. There was weak evidence that articles on climate change and mammals were more likely to be featured. Online news articles about charismatic mammals with illustrations were more likely to be shared or liked on Facebook and Twitter, but the effect of the news site was much larger. These results suggest that journals have the greatest impact on which conservation research is featured, and that the news site has the greatest impact on how popular an online article will be on Facebook and Twitter. © 2015 Society for Conservation Biology.
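An information-theoretic approach of this kind typically ranks candidate models by AIC and converts the scores into Akaike weights (relative support summing to 1). The sketch below uses hypothetical AIC values for illustrative candidate models; the model names and scores are assumptions, not the paper's results.

```python
import math

def akaike_weights(aics):
    """Convert AIC scores to Akaike weights: exp(-delta/2) relative to
    the best (smallest-AIC) model, normalized to sum to 1."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICs for three candidate GLMMs of article sharing:
models = ["topic only", "topic + taxon", "topic + taxon + illustration"]
aics = [1204.3, 1199.1, 1191.8]
for name, w in zip(models, akaike_weights(aics)):
    print(f"{name}: Akaike weight {w:.3f}")
```

A weight near 1 for one candidate would indicate essentially unambiguous support; spread-out weights (the "weak evidence" reported above) mean several models explain the data about equally well.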
Bogale, Bezawork Afework; Aoyama, Masato; Sugita, Shoei
2011-01-01
We trained jungle crows to discriminate among photographs of human faces according to sex in a simultaneous two-alternative task, to study their categorical learning ability. Once the crows reached a discrimination criterion (greater than or equal to 80% correct choices in two consecutive sessions; binomial probability test, p<.05), they next received generalization and transfer tests (i.e., greyscale, contour, and 'full' occlusion) in Experiment 1, followed by a 'partial' occlusion test in Experiment 2 and a random stimuli pair test in Experiment 3. Jungle crows learned the discrimination task in a few trials and successfully generalized to novel stimulus sets. However, all crows failed the greyscale test, and half of them failed the contour test. Neither occlusion of internal facial features nor random pairing of exemplars affected the discrimination performance of most, if not all, crows. We suggest that jungle crows categorize human face photographs based on perceptual similarities, as other non-human animals do, and that colour appears to be the most salient feature controlling discriminative behaviour. However, the variability in the use of facial contours among individuals suggests the exploitation of multiple features and individual differences in visual information processing among jungle crows. Copyright © 2010 Elsevier B.V. All rights reserved.
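The discrimination criterion (at least 80% correct, binomial p < .05 against chance) can be checked with an exact one-sided binomial tail probability, computable from the standard library alone. The 40-trial session size below is a hypothetical example; the paper does not state it.

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """Exact one-sided probability of k or more successes out of n
    trials under chance performance p (two-alternative task: p = 0.5)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

# Hypothetical 40-trial session: is 32/40 (80%) correct better than chance?
n, k = 40, 32
print(f"{k}/{n} correct, one-sided p = {binom_tail(k, n):.5f}")
```

With sessions this long, 80% correct is far below the p = .05 threshold; with very short sessions the same 80% proportion would not reach significance, which is why the criterion combines a proportion with a binomial test.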
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasyanos, M
We study the lithospheric structure of Africa, Arabia and adjacent oceanic regions with fundamental-mode surface waves over a wide period range. Including short-period group velocities allows us to examine shallower features than previous studies of the whole continent. In the process, we have developed a crustal thickness map of Africa. Main features include crustal thickness increases under the West African, Congo, and Kalahari cratons. We find crustal thinning under Mesozoic and Cenozoic rifts, including the Benue Trough, Red Sea, and East, Central, and West African rift systems. Crustal shear wave velocities are generally faster in oceanic regions and cratons, and slower in more recent crust and in active and formerly active orogenic regions. Deeper structure, related to the thickness of cratons and modern rifting, is generally consistent with previous work. Under cratons we find thick lithosphere and fast upper mantle velocities, while under rifts we find thinned lithosphere and slower upper mantle velocities. There are no consistent effects in areas classified as hotspots, indicating that there seem to be numerous origins for these features. Finally, it appears that the African Superswell has had a significantly different impact in the north and the south, indicating that specifics of the feature (temperature, time of influence, etc.) are dissimilar between the two regions. Factoring in other information, it is likely that the southern portion has been active in the past, but that shallow activity is currently limited to the northern portion of the superswell.
Lahnakoski, Juha M; Salmi, Juha; Jääskeläinen, Iiro P; Lampinen, Jouko; Glerean, Enrico; Tikka, Pia; Sams, Mikko
2012-01-01
Understanding how the brain processes stimuli in a rich natural environment is a fundamental goal of neuroscience. Here, we showed a feature film to 10 healthy volunteers during functional magnetic resonance imaging (fMRI) of hemodynamic brain activity. We then annotated auditory and visual features of the motion picture to inform analysis of the hemodynamic data. The annotations were fitted to both voxel-wise data and brain network time courses extracted by independent component analysis (ICA). Auditory annotations correlated with two independent components (ICs) disclosing two functional networks, one responding to a variety of auditory stimulation and another responding preferentially to speech, with parts of the network also responding to non-verbal communication. Visual feature annotations correlated with four ICs delineating visual areas according to their sensitivity to different visual stimulus features. In comparison, a separate voxel-wise general linear model based analysis disclosed brain areas preferentially responding to sound energy, speech, music, visual contrast edges, body motion and hand motion, which largely overlapped the results revealed by ICA. Differences between the results of IC- and voxel-based analyses demonstrate that thorough analysis of voxel time courses is important for understanding the activity of specific sub-areas of the functional networks, while ICA is a valuable tool for revealing novel information about functional connectivity that need not be explained by the predefined model. Our results encourage the use of naturalistic stimuli and tasks in cognitive neuroimaging to study how the brain processes stimuli in rich natural environments.
Predicting Key Events in the Popularity Evolution of Online Information.
Hu, Ying; Hu, Changjun; Fu, Shushen; Fang, Mingzhe; Xu, Wenwen
2017-01-01
The popularity of online information generally experiences a rising and falling evolution. This paper considers the "burst", "peak", and "fade" key events together as a representative summary of popularity evolution. We propose a novel prediction task: predicting when popularity undergoes these key events. It is of great importance to know when these three key events occur, because doing so helps recommendation systems, online marketing, and containment of rumors. However, it is very challenging to solve this new prediction task due to two issues. First, popularity evolution has high variation and can follow various patterns, so how can we identify "burst", "peak", and "fade" in different patterns of popularity evolution? Second, these events usually occur in a very short time, so how can we accurately yet promptly predict them? In this paper we address these two issues. To handle the first one, we use a simple moving average to smooth variation, and then present a universal method for identifying the key events across different patterns of popularity evolution. To deal with the second one, we extract different types of features that may have an impact on the key events, and then conduct a correlation analysis in the feature selection step to remove irrelevant and redundant features. The remaining features are used to train a machine learning model. The feature selection step improves prediction accuracy, and in order to emphasize prediction promptness, we design a new evaluation metric that considers both accuracy and promptness to evaluate our prediction task. Experimental and comparative results show the superiority of our prediction solution.
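The smoothing-and-identification step can be sketched as follows. The burst and fade thresholds below (a doubling of smoothed popularity, and a drop below 20% of the peak) are illustrative assumptions; the paper does not specify these exact rules.

```python
def moving_average(series, window=3):
    """Simple trailing moving average used to smooth out variation."""
    return [sum(series[max(0, i - window + 1):i + 1]) /
            (i - max(0, i - window + 1) + 1)
            for i in range(len(series))]

def key_events(series, burst_ratio=2.0, fade_ratio=0.2):
    """Locate burst (first step where smoothed popularity at least
    doubles), peak (global maximum of the smoothed series), and fade
    (first post-peak step at or below fade_ratio * peak value).
    Thresholds are illustrative, not the paper's."""
    s = moving_average(series)
    peak = max(range(len(s)), key=s.__getitem__)
    burst = next((i for i in range(1, len(s))
                  if s[i - 1] > 0 and s[i] / s[i - 1] >= burst_ratio), None)
    fade = next((i for i in range(peak + 1, len(s))
                 if s[i] <= fade_ratio * s[peak]), None)
    return burst, peak, fade

shares = [1, 2, 9, 30, 55, 40, 18, 6, 2, 1]  # made-up share counts per hour
print(key_events(shares))
```

In the actual task these labels are what the trained model must predict ahead of time from content and early-diffusion features, rather than read off retrospectively as here.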
Phylogenetic Variation in the Silicon Composition of Plants
HODSON, M. J.; WHITE, P. J.; MEAD, A.; BROADLEY, M. R.
2005-01-01
• Background and Aims: Silicon (Si) in plants provides structural support and improves tolerance to diseases, drought and metal toxicity. Shoot Si concentrations are generally considered to be greater in monocotyledonous than in non-monocot plant species. The phylogenetic variation in the shoot Si concentration of plants reported in the primary literature has been quantified. • Methods: Studies were identified which reported Si concentrations in leaf or non-woody shoot tissues from at least two plant species growing in the same environment. Each study contained at least one species in common with another study. • Key Results: Meta-analysis of the data revealed that, in general, ferns, gymnosperms and angiosperms accumulated less Si in their shoots than non-vascular plant species and horsetails. Within angiosperms and ferns, differences in shoot Si concentration between species grouped by their higher-level phylogenetic position were identified. Within the angiosperms, species from the commelinoid monocot orders Poales and Arecales accumulated substantially more Si in their shoots than species from other monocot clades. • Conclusions: A high shoot Si concentration is not a general feature of monocot species. Information on the phylogenetic variation in shoot Si concentration may provide useful palaeoecological and archaeological information, and inform studies of the biogeochemical cycling of Si and those of the molecular genetics of Si uptake and transport in plants. PMID:16176944
Apollo program flight summary report: Apollo missions AS-201 through Apollo 16, revision 11
NASA Technical Reports Server (NTRS)
Holcomb, J. K.
1972-01-01
A summary of the Apollo flights from AS-201 through Apollo 16 is presented. The following subjects are discussed for each flight: (1) mission primary objectives, (2) principal objectives of the launch vehicle and spacecraft, (3) secondary objectives of the launch vehicle and spacecraft, (4) unusual features of the mission, (5) general information on the spacecraft and launch vehicle, (6) space vehicle and pre-launch data, and (7) recovery data.
NASA Technical Reports Server (NTRS)
Vos, R. G.; Beste, D. L.; Gregg, J.
1984-01-01
The User Manual for the Integrated Analysis Capability (IAC) Level 1 system is presented. The IAC system currently supports the thermal, structures, controls and system dynamics technologies, and its development is influenced by the requirements for design/analysis of large space systems. The system has many features which make it applicable to general problems in engineering, and to management of data and software. Information includes basic IAC operation, executive commands, modules, solution paths, data organization and storage, IAC utilities, and module implementation.
Modular, Hierarchical Learning By Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Baldi, Pierre F.; Toomarian, Nikzad
1996-01-01
A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks more structured than those in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.
Radar HRRP Target Recognition Based on Stacked Autoencoder and Extreme Learning Machine
Liu, Yongxiang; Huo, Kai; Zhang, Zhongshuai
2018-01-01
A novel radar high-resolution range profile (HRRP) target recognition method based on a stacked autoencoder (SAE) and extreme learning machine (ELM) is presented in this paper. As a key component of deep structures, the SAE not only learns features from the data, it also obtains feature representations at different levels of the data. However, with a deep structure it is hard to achieve good generalization performance together with a fast learning speed. ELM, as a new learning algorithm for single hidden layer feedforward neural networks (SLFNs), has attracted great interest from various fields for its fast learning speed and good generalization performance. However, ELM needs more hidden nodes than conventional tuning-based learning algorithms due to the random setting of input weights and hidden biases. In addition, existing ELM methods cannot utilize the class information of targets well. To solve this problem, a regularized ELM method based on the class information of the target is proposed. In this paper, SAE and the regularized ELM are combined to make full use of their advantages and make up for each other's shortcomings. The effectiveness of the proposed method is demonstrated by experiments with measured radar HRRP data. The experimental results show that the proposed method achieves good performance in both real-time operation and accuracy, especially when only a few training samples are available. PMID:29320453
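A minimal ELM can be sketched as follows: input weights and biases are drawn at random and left untrained, and only the output weights are solved in closed form by regularized least squares, which is why training is so fast. The plain ridge regularizer and the toy two-blob data here are illustrative assumptions; the paper's regularized ELM additionally exploits target class information, and it operates on SAE features of HRRP data rather than raw 2-D points.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=50, ridge=1e-2):
    """Extreme learning machine: random input weights and biases stay
    fixed; only the output weights beta are fit, in closed form."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem (not HRRP data): two well-separated blobs.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
Y = np.array([[1, 0]] * 100 + [[0, 1]] * 100)  # one-hot labels
W, b, beta = elm_train(X, Y)
acc = np.mean(elm_predict(X, W, b, beta).argmax(1) == Y.argmax(1))
print(f"training accuracy: {acc:.2f}")
```

The single linear solve replaces iterative backpropagation entirely; the trade-off the abstract notes is that random hidden weights typically require more hidden nodes than tuned ones.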
Cross Flow Parameter Calculation for Aerodynamic Analysis
NASA Technical Reports Server (NTRS)
Norman, David, Jr. (Inventor)
2014-01-01
A system and method for determining a cross flow angle for a feature on a structure. A processor unit receives location information identifying a location of the feature on the structure, determines an angle of the feature, identifies flow information for the location, determines a flow angle using the flow information, and determines the cross flow angle for the feature using the flow angle and the angle of the feature. The flow information describes a flow of fluid across the structure. The flow angle comprises an angle of the flow of fluid across the structure for the location of the feature.
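The patent abstract does not give the formula. One plausible reading, sketched below purely as an assumption, is that the cross flow angle is the angular difference between the local flow direction and the feature's orientation; the wrapping convention and the example numbers are likewise hypothetical.

```python
def cross_flow_angle(feature_angle_deg, flow_angle_deg):
    """One plausible reading of the method: the cross flow angle is the
    angle between the local flow direction and the feature's
    orientation, wrapped into [0, 180) degrees. This interpretation is
    an assumption, not the patented formula."""
    return (flow_angle_deg - feature_angle_deg) % 180.0

# Hypothetical example: a surface feature oriented 30 degrees off the
# reference axis, with the local flow at 10 degrees at that station.
print(cross_flow_angle(30.0, 10.0))
```

In the described system these two inputs come from the feature's location information and a flow database for the structure, with the subtraction performed per feature location.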
How doing a dynamical analysis of gait movement may provide information about Autism
NASA Astrophysics Data System (ADS)
Wu, D.; Torres, E.; Nguyen, J.; Mistry, S.; Whyatt, C.; Kalampratsidou, V.; Kolevzon, A.; Jose, J.
Individuals with Autism Spectrum Disorder (ASD) are known to have deficits in language and social skills. They also have deficits in how they move. Why individuals develop ASD is not generally known. There is, however, one particular group of children with a SHANK3 gene deficiency (Phelan-McDermid Syndrome (PMDS)) who present symptoms similar to ASD. We have been searching for a universal mechanism in ASD that goes beyond the usual heterogeneous ASD symptoms. We studied gait motions in both PMDS patients and individuals with idiopathic ASD. We examined their motions continuously at the millisecond time scale, beyond naked-eye detection. Gait is a complex process, requiring integration and coordination of different joints' motions. Significant information about the development of, and deficits in, the sensory system is hidden in our gaits. We discovered that the smoothness of foot speed during gait is a critical feature that provides a significant distinction between subjects with ASD and typical controls. The differences in the appearance of the speed fluctuations suggest a different coordination mechanism in subjects with these disorders. Our work identifies a very important feature of gait motion that carries significant physiological information.
Khodamoradi, Abdolvahed; Ghaffari, Mohammad Payam; Daryabeygi-Khotbehsara, Reza; Sajadi, Haniye Sadat; Majdzadeh, Reza
2018-01-01
Informal patients' payments (IPPs) are a sensitive subject. The aim of the current study was to assess trends in informal payment studies and explore methods of IPP measurement, prevalence, and features (payment type, volume, and receiver) in various contexts. A search strategy was developed to identify peer-reviewed articles addressing informal payments in PubMed, Science Direct, Web of Science, Scopus, and CINAHL. A total of 1252 studies were identified initially. After the screening process, 38 studies were included in the systematic review. The selected studies were appraised, and findings were synthesized. Among the selected studies, quantitative approaches were mostly used for measuring IPPs from the general public's and patients' perspectives, while qualitative methods mainly targeted health care providers. Reported IPP prevalence in the selected articles ranges between 2% and 80%, and IPPs are more prevalent in the inpatient sector than in the outpatient sector. There are a number of strategies for the measurement of IPPs, with different strengths and weaknesses. The most applied strategies were quantitative surveys of the general public recruiting more than 1000 participants using face-to-face structured interviews, followed by qualitative studies on fewer than 150 health care providers using focus group discussions. This review provides a comprehensive picture of current informal patients' payments measurement tools, which will help researchers in future investigations. Copyright © 2017 John Wiley & Sons, Ltd.
Timm, Jana; Weise, Annekathrin; Grimm, Sabine; Schröger, Erich
2011-01-01
The infrequent occurrence of a transient feature (deviance; e.g., frequency modulation, FM) in one of the regularly occurring sinusoidal tones (standards) elicits the deviance-related mismatch negativity (MMN) component of the event-related brain potential. Reflecting a memory-based comparison, MMN indexes the mismatch between the representations of incoming and standard sounds. The present study investigated to what extent the infrequent exclusion of an FM is detected by the MMN system. For that purpose we measured MMN to deviances that consisted of either the exclusion or the inclusion of an FM, at an early or late position within the sound, that was present or absent, respectively, in the standard. According to the information-content hypothesis, deviance detection relies on the difference in informational content of the deviant relative to that of the standard. As this difference between deviants with FM and standards without FM is the same as in the reversed case, comparable MMNs should be elicited by FM inclusions and exclusions. According to the feature-detector hypothesis, however, deviance detection depends on the increased activation of feature detectors by additional sound features. Thus, rare exclusions of the FM should elicit no MMN, or a smaller MMN than FM inclusions. In the passive listening condition, MMN was obtained only for the early inclusion, but neither for the exclusions nor for the late inclusion of an FM. This asymmetry in automatic deviance detection seems to partly reflect the contribution of feature detectors, even though it cannot fully account for the missing MMN to late FM inclusions. Importantly, the behavioral deviance detection performance in the active listening condition did not reveal such an asymmetry, suggesting that the intentional detection of the deviants is based on the difference in informational content. On a more general level, the results partly support the “fresh-afferent” account or an extended memory-comparison-based account of MMN. PMID:21852979
2010-01-01
Background: Protein-protein interaction (PPI) plays essential roles in cellular functions. The cost, time and other limitations associated with the current experimental methods have motivated the development of computational methods for predicting PPIs. As protein interactions generally occur via domains rather than whole molecules, predicting domain-domain interaction (DDI) is an important step toward PPI prediction. Computational methods developed so far have utilized information from various sources at different levels, from primary sequences, to molecular structures, to evolutionary profiles. Results: In this paper, we propose a computational method to predict DDI using support vector machines (SVMs), based on domains represented as interaction profile hidden Markov models (ipHMMs), where interacting residues in domains are explicitly modeled according to the three-dimensional structural information available in the Protein Data Bank (PDB). Features of the domains are first extracted as Fisher scores derived from the ipHMM and then selected using singular value decomposition (SVD). Domain pairs are represented by concatenating their selected feature vectors and classified by a support vector machine trained on these feature vectors. The method is tested by leave-one-out cross-validation experiments with a set of interacting protein pairs adopted from the 3DID database. The prediction accuracy shows significant improvement compared to InterPreTS (Interaction Prediction through Tertiary Structure), an existing method for PPI prediction that also uses the sequences and complexes of known 3D structure. Conclusions: We show that domain-domain interaction prediction can be significantly enhanced by exploiting information inherent in the domain profiles via feature selection based on Fisher scores, singular value decomposition, and supervised learning based on support vector machines.
Datasets and source code are freely available on the web at http://liao.cis.udel.edu/pub/svdsvm. Implemented in Matlab and supported on Linux and MS Windows. PMID:21034480
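The SVD-based selection step can be sketched as a truncated projection of the Fisher-score vectors onto the top right singular vectors. The matrix dimensions below are hypothetical, random data stands in for real ipHMM Fisher scores, and the subsequent SVM training on concatenated pair vectors (which the abstract describes) is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

def svd_reduce(F, k):
    """Project feature vectors (rows of F) onto the top-k right
    singular vectors of the mean-centered matrix: the SVD-based
    feature selection step."""
    Fc = F - F.mean(axis=0)
    U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:k].T, Vt[:k]

# Hypothetical 200 domain profiles x 500 Fisher-score dimensions:
F = rng.standard_normal((200, 500))
Z, components = svd_reduce(F, k=20)
print(Z.shape)
```

Each domain pair would then be represented by concatenating its two reduced vectors (here, 40 dimensions instead of 1000) before SVM classification, which is where the dimensionality reduction pays off in training cost and overfitting control.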
Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline.
Zhang, Jie; Li, Qingyang; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin
2017-06-01
Alzheimer's Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics research problems. It aims to improve generalization performance by exploiting the features shared among different tasks. However, most existing algorithms are formulated as supervised learning schemes, which suffer from either an insufficient number of features or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms.
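The sparse-coding step at the heart of any dictionary learning method can be sketched in a few lines of numpy. The example below is a generic ISTA solver for the lasso-style coding subproblem, not the MMDL algorithm itself; the data matrix, dictionary size, and regularization weight are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: X holds 100 image-derived feature vectors (dim 30),
# D is a dictionary of 50 atoms; both are hypothetical.
X = rng.normal(size=(100, 30))
D = rng.normal(size=(50, 30))
D /= np.linalg.norm(D, axis=1, keepdims=True)

def sparse_codes(X, D, lam=0.5, steps=100):
    """ISTA: minimize ||X - Z D||^2 / 2 + lam * ||Z||_1 over the codes Z."""
    L = np.linalg.norm(D @ D.T, 2)          # Lipschitz constant of the gradient
    Z = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(steps):
        grad = (Z @ D - X) @ D.T            # gradient of the quadratic term
        Z = Z - grad / L
        Z = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)   # soft threshold
    return Z

Z = sparse_codes(X, D)
print(Z.shape)   # one sparse code per sample
```

In a full dictionary learning loop this coding step would alternate with a dictionary update; MMDL additionally couples multiple sources and targets, which is beyond this sketch.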
NASA Astrophysics Data System (ADS)
Zhang, Sheng; Wang, Jian; Tang, Chao-Jing
2012-06-01
Counterfactual quantum cryptography, recently proposed by Noh, is distinguished by the fact that no signal particles are transmitted. This confers evident security advantages, such as immunity to the well-known photon-number-splitting attack. In this paper, the theoretical security of the counterfactual quantum cryptography protocol against general intercept-resend attacks is proved by bounding the information of an eavesdropper Eve more tightly than in Yin's proposal [Phys. Rev. A 82 042335 (2010)]. It is also shown that practical counterfactual quantum cryptography implementations may be vulnerable when equipped with imperfect apparatuses, by proving that a negative key rate can be achieved when Eve launches a time-shift attack based on imperfect detector efficiency.
Takahama, Sachiko; Saiki, Jun
2014-01-01
Information on an object's features bound to its location is very important for maintaining object representations in visual working memory. Interactions with dynamic multi-dimensional objects in an external environment require complex cognitive control, including the selective maintenance of feature-location binding. Here, we used event-related functional magnetic resonance imaging to investigate brain activity and functional connectivity related to the maintenance of complex feature-location binding. Participants were required to detect task-relevant changes in feature-location binding between objects defined by color, orientation, and location. We compared a complex binding task requiring complex feature-location binding (color-orientation-location) with a simple binding task in which simple feature-location binding, such as color-location, was task-relevant and the other feature was task-irrelevant. Univariate analyses showed that the dorsolateral prefrontal cortex (DLPFC), hippocampus, and frontoparietal network were activated during the maintenance of complex feature-location binding. Functional connectivity analyses indicated cooperation between the inferior precentral sulcus (infPreCS), DLPFC, and hippocampus during the maintenance of complex feature-location binding. In contrast, the connectivity for the spatial updating of simple feature-location binding determined by reanalyzing the data from Takahama et al. (2010) demonstrated that the superior parietal lobule (SPL) cooperated with the DLPFC and hippocampus. These results suggest that the connectivity for complex feature-location binding does not simply reflect general memory load and that the DLPFC and hippocampus flexibly modulate the dorsal frontoparietal network, depending on the task requirements, with the infPreCS involved in the maintenance of complex feature-location binding and the SPL involved in the spatial updating of simple feature-location binding. PMID:24917833
NASA Astrophysics Data System (ADS)
Zoran, Maria; Savastru, Roxana; Savastru, Dan; Tautan, Marina; Miclos, Sorin; Cristescu, Luminita; Carstea, Elfrida; Baschir, Laurentiu
2010-05-01
Urban systems play a vital role in social and economic development in all countries, and their environmental changes can be investigated on different spatial and temporal scales. The dynamics of urban and peri-urban environments is of great interest for future planning and decision making, as well as in the context of local and regional change. Changes in urban land cover include changes in biotic diversity, actual and potential primary productivity, soil quality, runoff, and sedimentation rates, and cannot be well understood without knowledge of the land use changes that drive them. This study assesses changes in the environmental features of the Bucharest metropolitan area, Romania, using satellite remote sensing and in-situ monitoring data. Rational feature selection from the variety of spectral channels in the optical wavelengths of the electromagnetic spectrum (VIS and NIR) is very important for effective analysis and information extraction from remote sensing data. Comprehensive analysis of the spectral characteristics of remote sensing data makes it possible to derive environmental changes in urban areas. The information quantity contained in a band is an important parameter for evaluating that band; deviation and entropy are often used to express information content. Feature selection is one of the most important steps in the recognition and classification of remote sensing images and must therefore precede classification; the optimal features are those that can be used to distinguish objects easily and correctly. Three factors were considered for the Bucharest test area: the information quantity of bands, the correlation between bands, and the spectral characteristics (e.g., absorption features) of the classified objects.
Because the spectral characteristics of an object are influenced by many factors, it is difficult to define optimal feature parameters that distinguish all objects across an entire area; a method of multi-level feature selection is therefore suggested. On the basis of the information quantity of bands, the correlation between bands, the spectral absorption characteristics of objects, and object separability in each band, a fundamental method of optimum band selection and feature extraction from remote sensing data is discussed. Spectral signatures of different terrain features were used to extract structural patterns, with the aim of separating surface units and classifying the general categories. The synergetic analysis and interpretation of different satellite images (LANDSAT TM and ETM; MODIS; IKONOS) acquired over a period of more than 20 years reveals significant impacts of climatic and anthropogenic changes on the urban and peri-urban environment. Residential zones were delimited from industrial zones, which are frequently sources of pollution, and urban green cover assessment played an important role. The particularities of the functional zones were characterized from several points of view: architecture, streets and surface traffic, components of the urban infrastructure, and habitat quality. The growth of the Bucharest urban area has resulted from rapid industrialization and from the increase of the urban population. Information on the spatial pattern and temporal dynamics of urban land cover and land use is critical for addressing a wide range of practical problems relating to urban regeneration, urban sustainability, and rational planning policy.
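The band-evaluation criteria named in this abstract (information quantity via entropy, plus inter-band correlation) can be sketched directly. The toy 4-band image below is random data standing in for real imagery; the selection rule shown (highest entropy first, then penalize correlated bands) is one simple interpretation, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 4-band image, 8-bit values, 100 x 100 pixels.
bands = rng.integers(0, 256, size=(4, 100, 100))

def band_entropy(band, levels=256):
    """Shannon entropy (bits) of a band's grey-level histogram."""
    hist = np.bincount(band.ravel(), minlength=levels) / band.size
    p = hist[hist > 0]
    return float(-(p * np.log2(p)).sum())

entropies = [band_entropy(b) for b in bands]
flat = bands.reshape(4, -1).astype(float)
corr = np.corrcoef(flat)            # 4 x 4 inter-band correlation matrix

# Prefer bands with high entropy and low correlation with already-chosen bands.
best = int(np.argmax(entropies))
print(best, corr.shape)
```

For an 8-bit band the entropy is at most 8 bits; a nearly uniform histogram (high information quantity) approaches that bound.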
Overgrowth syndromes with vascular anomalies.
Blei, Francine
2015-04-01
Overgrowth syndromes with vascular anomalies encompass entities with a vascular anomaly as the predominant feature vs those syndromes with predominant somatic overgrowth and a vascular anomaly as a more minor component. The focus of this article is to categorize these syndromes phenotypically, including updated clinical criteria, radiologic features, evaluation, management issues, pathophysiology, and genetic information. A literature review was conducted in PubMed using key words "overgrowth syndromes and vascular anomalies" as well as specific literature reviews for each entity and supportive genetic information (e.g., somatic mosaicism). Additional searches in OMIM and Gene Reviews were conducted for each syndrome. Disease entities were categorized by predominant clinical features, known genetic information, and putative affected signaling pathway. Overgrowth syndromes with vascular anomalies are a heterogeneous group of disorders, often with variable clinical expression, due to germline or somatic mutations. Overgrowth can be focal (e.g., macrocephaly) or generalized, often asymmetrically (and/or mosaically) distributed. All germ layers may be affected, and the abnormalities may be progressive. Patients with overgrowth syndromes may be at an increased risk for malignancies. Practitioners should be attentive to patients having syndromes with overgrowth and vascular defects. These patients require proactive evaluation, referral to appropriate specialists, and in some cases, early monitoring for potential malignancies. Progress in identifying vascular anomaly-related overgrowth syndromes and their genetic etiology has been robust in the past decade and is contributing to genetically based prenatal diagnosis and new therapies targeting the putative causative genetic mutations. Copyright © 2015 Mosby, Inc. All rights reserved.
Raine, Rosalind; Cartwright, Martin; Richens, Yana; Mahamed, Zuhura; Smith, Debbie
2010-07-01
To identify key features of communication across antenatal (prenatal) care that are evaluated positively or negatively by service users. Focus groups and semi-structured interviews were used to explore communication experiences of thirty pregnant women from diverse social and ethnic backgrounds affiliated to a large London hospital. Data were analysed using thematic analysis. Women reported a wide diversity of experiences. From the users' perspective, constructive communication on the part of health care providers was characterised by an empathic conversational style, openness to questions, allowing sufficient time to talk through any concerns, and pro-active contact by providers (e.g. text message appointment reminders). These features created reassurance, facilitated information exchange, improved appointment attendance and fostered tolerance in stressful situations. Salient features of poor communication were a lack of information provision, especially about the overall arrangement and the purpose of antenatal care, insufficient discussion about possible problems with the pregnancy and discourteous styles of interaction. Poor communication led some women to become assertive to address their needs; others became reluctant to actively engage with providers. General Practitioners need to be better integrated into antenatal care, more information should be provided about the pattern and purpose of the care women receive during pregnancy, and new technologies should be used to facilitate interactions between women and their healthcare providers. Providers require communications training to encourage empathic interactions that promote constructive provider-user relationships and encourage women to engage effectively and access the care they need.
Modular and scalable RESTful API to sustain STAR collaboration's record keeping
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Lauret, J.; Shanmuganathan, P. V.
2015-12-01
STAR collaboration's record system is a collection of heterogeneous and sparse information associated with each member and institution. In its original incarnation, only flat information was stored, which imposed many restrictions, such as the lack of historical change information, the inability to keep track of members leaving and re-joining STAR, and the inability to easily extend the saved information as new requirements appeared. In mid-2013, a new project was launched covering an extensive set of revisited requirements. These requirements led us to a design based on a RESTful API and a back-end storage engine relying on a key/value-pair data representation model, coupled with a tiered architecture. This design was motivated by the fact that unifying many STAR tools around the same business logic and storage engine was a key and central feature for the maintainability and presentation of records. A central service API would leave no ambiguities and provide easy service integration between STAR tools. The new design stores changes to records dynamically and allows them to be tracked chronologically. The storage engine is extensible as new fields of information emerge (member-specific or general) without affecting the presentation or business-logic layers. The new record system features a convenient administrative interface, fuzzy algorithms for data entry and search, and basic statistics and graphs. Finally, this modular approach is supplemented with access control, allowing private information and administrative operations to be hidden from public view.
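The combination described here, key/value storage with chronological change tracking, can be illustrated with a tiny in-memory sketch. This is not the STAR system or its API; the class, field names, and dates are invented, and the real system sits behind a RESTful service with access control.

```python
import datetime as dt

class RecordStore:
    """Minimal sketch of a key/value record store that keeps change history.
    Field names and API are illustrative, not the actual STAR schema."""
    def __init__(self):
        self._history = {}   # member_id -> list of (timestamp, field, value)

    def set_field(self, member_id, field, value, when=None):
        when = when or dt.datetime.now(dt.timezone.utc)
        self._history.setdefault(member_id, []).append((when, field, value))

    def current(self, member_id):
        """Replay the history in time order; later entries override earlier ones."""
        state = {}
        for _, field, value in sorted(self._history.get(member_id, [])):
            state[field] = value
        return state

store = RecordStore()
store.set_field("m1", "institution", "BNL",
                dt.datetime(2013, 6, 1, tzinfo=dt.timezone.utc))
store.set_field("m1", "institution", "LBNL",
                dt.datetime(2015, 1, 1, tzinfo=dt.timezone.utc))
print(store.current("m1"))   # latest value wins: {'institution': 'LBNL'}
```

Because every change is appended rather than overwritten, the full history (including members leaving and re-joining) stays queryable, which is the property the flat storage lacked.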
Classification of Microarray Data Using Kernel Fuzzy Inference System
Kumar Rath, Santanu
2014-01-01
The DNA microarray classification technique has gained popularity in both research and practice. Real datasets such as microarray data contain a huge number of insignificant and irrelevant features that tend to obscure useful information. Feature selection retains the features with the highest relevance and significance, which determine the classification of samples into their respective classes. In this paper, a kernel fuzzy inference system (K-FIS) algorithm is applied to classify microarray data (leukemia), using the t-test as a feature selection method. Kernel functions map the original data points into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function ϕ, through a mathematical device called the kernel trick. This paper also presents a comparative study of classification using K-FIS and a support vector machine (SVM) for different sets of features (genes). Performance measures from the literature, such as precision, recall, specificity, F-measure, ROC curve, and accuracy, are used to analyze the efficiency of the classification model. The results show that the K-FIS model performs similarly to the SVM model, indicating that the proposed approach relies on the kernel function. PMID:27433543
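Two ingredients of this pipeline, t-test feature selection and the kernel trick, can be sketched on synthetic data. The toy "microarray" below is random, with an artificial shift planted in the first 20 genes; the RBF kernel shown is one standard choice, and neither the K-FIS rules nor the SVM training are reproduced here.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

# Toy microarray: 30 samples x 500 genes, two classes; gene indices hypothetical.
X = rng.normal(size=(30, 500))
y = np.array([0] * 15 + [1] * 15)
X[y == 1, :20] += 1.5            # make the first 20 genes informative

# t-test feature selection: keep the k genes with the smallest p-values.
_, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
top = np.argsort(pvals)[:20]
Xs = X[:, top]

# RBF kernel matrix: the implicit feature map used by kernel methods
# (K-FIS and SVM alike); gamma is an illustrative value.
def rbf_kernel(A, B, gamma=0.05):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(Xs, Xs)
print(K.shape)                   # 30 x 30 Gram matrix; diagonal entries are 1.0
```

Any kernel method then works entirely with the Gram matrix K, never forming the high-dimensional map ϕ explicitly; that is what "kernel trick" means.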
Gait recognition based on Gabor wavelets and modified gait energy image for human identification
NASA Astrophysics Data System (ADS)
Huang, Deng-Yuan; Lin, Ta-Wei; Hu, Wu-Chih; Cheng, Chih-Hsiang
2013-10-01
This paper proposes a method for recognizing human identity using gait features based on Gabor wavelets and modified gait energy images (GEIs). Identity recognition by gait generally involves gait representation, extraction, and classification. In this work, a modified GEI convolved with an ensemble of Gabor wavelets is proposed as a gait feature. Principal component analysis is then used to project the Gabor-wavelet-based gait features into a lower-dimension feature space for subsequent classification. Finally, support vector machine classifiers based on a radial basis function kernel are trained and utilized to recognize human identity. The major contributions of this paper are as follows: (1) the consideration of the shadow effect to yield a more complete segmentation of gait silhouettes; (2) the utilization of motion estimation to track people when walkers overlap; and (3) the derivation of modified GEIs to extract more useful gait information. Extensive performance evaluation shows a great improvement of recognition accuracy due to the use of shadow removal, motion estimation, and gait representation using the modified GEIs and Gabor wavelets.
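The representation chain in this abstract (GEI, Gabor convolution, PCA projection) can be sketched with numpy alone. The "GEIs" below are random arrays rather than averaged silhouettes, the Gabor parameters are illustrative, and a single wavelet stands in for the ensemble; the final SVM classification is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy gait energy images: averages of binary silhouettes (20 sequences, 32 x 32).
geis = rng.random(size=(20, 32, 32))

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a Gabor wavelet; parameters are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

# Convolve each GEI with one Gabor wavelet (circularly, via FFT) and flatten.
g = gabor_kernel()
pad = np.zeros((32, 32)); pad[:9, :9] = g
feats = np.array([np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad))).ravel()
                  for img in geis])

# PCA: project the Gabor features onto the top 5 principal components.
centered = feats - feats.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
low_dim = centered @ Vt[:5].T
print(low_dim.shape)             # (20, 5)
```

A real system would use a bank of Gabor wavelets at several scales and orientations, concatenating their responses before the PCA step.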
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simkin, T.; Tilling, R.I.; Taggart, J.N.
The Earth's physiographic features overlain by its volcanoes, earthquake epicenters, and the movement of its major tectonic plates are shown in this map. This computer-generated map of the world provides a base that shows the topography of the land surface and the sea floor; the additions of color and shaded relief help to distinguish significant features. From the Volcano Reference File of the Smithsonian Institution, nearly 1,450 volcanoes active during the past 10,000 yr are plotted on the map in four categories. From the files of the National Earthquake Information Center (US Geological Survey), epicenters selected from 1,300 large events (magnitude ≥ 7.0) from 1987 onward and from 140,000 instrumentally recorded earthquakes (magnitude ≥ 4.0) from 1960 to the present are plotted on this map according to two magnitude categories and two depth categories. This special map is intended as a teaching aid for classroom use and as a general reference for research. It is designed to show prominent global features when viewed from a distance; more detailed features are visible on closer inspection.
The effect of emergent features on judgments of quantity in configural and separable displays.
Peebles, David
2008-06-01
Two experiments investigated effects of emergent features on perceptual judgments of comparative magnitude in three diagrammatic representations: kiviat charts, bar graphs, and line graphs. Experiment 1 required participants to compare individual values, whereas in Experiment 2 participants had to integrate several values to produce a global comparison. In Experiment 1, emergent features of the diagrams resulted in significant distortions of magnitude judgments, each related to a common geometric illusion. Emergent features are also widely believed to underlie the general superiority of configural displays, such as kiviat charts, for tasks requiring the integration of information. Experiment 2 tested the extent of this benefit using diagrams with a wide range of values. Contrary to the results of previous studies, the configural display produced the poorest performance compared to the more separable displays. Moreover, the pattern of responses suggests that kiviat users switched from an integration strategy to a sequential one depending on the shape of the diagram. The experiments demonstrate the powerful interaction between emergent visual properties and cognition and reveal limits to the benefits of configural displays for integration tasks. (c) 2008 APA, all rights reserved
Khan, Hassan Aqeel; Gore, Amit; Ashe, Jeff; Chakrabartty, Shantanu
2017-07-01
Physical activities are known to introduce motion artifacts in electrical impedance plethysmographic (EIP) sensors. The existing literature treats motion artifacts as a nuisance and generally discards the artifact-containing portion of the sensor output. This paper examines the notion of exploiting motion artifacts to detect the underlying physical activities that give rise to them. In particular, we investigate whether the artifact pattern associated with a physical activity is unique, and whether it varies from one human subject to another. Data were recorded from 19 adult human subjects while they performed 5 distinct, artifact-inducing activities. A set of novel features based on the time-frequency signatures of the sensor outputs is then constructed. Our analysis demonstrates that these features enable high-accuracy detection of the underlying physical activity. Using an SVM classifier, we are able to differentiate between 5 distinct physical activities (coughing, reaching, walking, eating and rolling-on-bed) with an average accuracy of 85.46%. Classification is performed solely using features designed specifically to capture the time-frequency signatures of different physical activities. This enables us to measure both respiratory and motion information using only one type of sensor, in contrast to conventional approaches to physical activity monitoring, which rely on additional hardware such as accelerometers to capture activity information.
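A crude version of the time-frequency signature such features are built from can be computed with a windowed FFT. The synthetic trace below (a slow respiration-like sine with a higher-frequency "artifact" burst) and the sampling rate, window length, and band count are all invented; the paper's actual feature design is not specified here.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 50                                    # Hz, hypothetical EIP sampling rate

# Toy sensor trace: respiration-like baseline plus a burst of "motion artifact".
t = np.arange(0, 20, 1 / fs)
sig = np.sin(2 * np.pi * 0.3 * t)
sig[400:600] += 0.8 * np.sin(2 * np.pi * 6.0 * t[400:600])

def tf_features(x, win=100, n_bands=5):
    """Mean spectral energy in n_bands frequency bands per window:
    a crude time-frequency signature of the signal."""
    frames = x[:len(x) // win * win].reshape(-1, win)
    spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)) ** 2
    bands = np.array_split(spec, n_bands, axis=1)
    return np.stack([b.mean(axis=1) for b in bands], axis=1)

F = tf_features(sig)
print(F.shape)                             # (windows, bands) = (10, 5)
```

Windows overlapping the artifact burst carry extra energy in the higher bands, which is exactly the kind of pattern a downstream classifier can separate by activity.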
Assessing the features of extreme smog in China and the differentiated treatment strategy
NASA Astrophysics Data System (ADS)
Deng, Lu; Zhang, Zhengjun
2018-01-01
Extreme smog can have potentially harmful effects on human health, the economy and daily life. However, the average (mean) values do not provide strategically useful information on the hazard analysis and control of extreme smog. This article investigates China's smog extremes by applying extreme value analysis to hourly PM2.5 data from 2014 to 2016 obtained from monitoring stations across China. By fitting a generalized extreme value (GEV) distribution to exceedances over a station-specific extreme smog level at each monitoring location, all study stations are grouped into eight different categories based on the estimated mean and shape parameter values of fitted GEV distributions. The extreme features characterized by the mean of the fitted extreme value distribution, the maximum frequency and the tail index of extreme smog at each location are analysed. These features can provide useful information for central/local government to conduct differentiated treatments in cities within different categories and conduct similar prevention goals and control strategies among those cities belonging to the same category in a range of areas. Furthermore, hazardous hours, breaking probability and the 1-year return level of each station are demonstrated by category, based on which the future control and reduction targets of extreme smog are proposed for the cities of Beijing, Tianjin and Hebei as an example.
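Fitting a GEV distribution and reading off a return level, the core operations behind this analysis, are sketched below with scipy. The sample is simulated, the parameter values are invented, and the 1-year return level shown is the simple hourly-quantile reading; note that scipy's shape parameter `c` is the negative of the usual GEV shape ξ.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(6)

# Toy stand-in for extreme hourly PM2.5 values at one station (ug/m^3);
# the true parameters below are invented for illustration.
sample = genextreme.rvs(c=-0.2, loc=150, scale=30, size=2000, random_state=rng)

# Fit a GEV by maximum likelihood (scipy convention: c = -xi).
c_hat, loc_hat, scale_hat = genextreme.fit(sample)

# 1-year return level for hourly data: the quantile exceeded on average
# once per 8760 hours.
rl_1yr = genextreme.ppf(1 - 1 / 8760, c_hat, loc_hat, scale_hat)
print(round(loc_hat), round(scale_hat), rl_1yr > loc_hat)
```

The fitted shape parameter (the tail index mentioned in the abstract) governs how heavy the extreme-smog tail is, and is one of the quantities the stations are grouped by.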
Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses
NASA Astrophysics Data System (ADS)
Murphy, Christian E.
2018-05-01
Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy to display the nature of uncertainty, as an effective and efficient visualization depends, besides on the spatial data feature type, heavily on the type of uncertainty. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well-known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point based uncertainty symbolization. The user can intuitively depict the centers of gravity, the major orientation of the point arrays as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore it is shown how applicable an adapted design of the error ellipse is to display the uncertainty of point features originating from incomplete data. The suitability of the error ellipse to display the uncertainty of point information is demonstrated within two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
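The geometry of a standard error ellipse follows from the eigendecomposition of a 2x2 covariance matrix: eigenvalues give the squared semi-axes (up to a confidence scale factor) and the leading eigenvector gives the orientation. The covariance below is invented for illustration; the paper's redesigned symbolizations are a cartographic layer on top of this geometry, not reproduced here.

```python
import numpy as np

# Covariance of a hypothetical uncertain point location (units: metres^2).
cov = np.array([[9.0, 3.0],
                [3.0, 4.0]])

def error_ellipse(cov, conf_scale=2.4477):
    """Semi-axes and orientation of the error ellipse for a 2x2 covariance.
    conf_scale = sqrt(chi2.ppf(0.95, df=2)) ~ 2.4477 gives a 95% ellipse."""
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = vals.argsort()[::-1]
    vals, vecs = vals[order], vecs[:, order]
    a, b = conf_scale * np.sqrt(vals)           # semi-major, semi-minor axis
    angle = np.degrees(np.arctan2(vecs[1, 0], vecs[0, 0]))
    return a, b, angle

a, b, angle = error_ellipse(cov)
print(round(a, 2), round(b, 2))
```

The ratio a/b and the angle are exactly the "major orientation" and "extent" cues the redesigned ellipses let a map reader estimate visually.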
Combining facial dynamics with appearance for age estimation.
Dibeklioglu, Hamdi; Alnajar, Fares; Ali Salah, Albert; Gevers, Theo
2015-06-01
Estimating the age of a human from the captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We propose a method to extract and use dynamic features for age estimation, using a person's smile. Our approach is tested on a large, gender-balanced database with 400 subjects, with an age range between 8 and 76. In addition, we introduce a new database on posed disgust expressions with 324 subjects in the same age range, and evaluate the reliability of the proposed approach when used with another expression. State-of-the-art appearance-based age estimation methods from the literature are implemented as baseline. We demonstrate that for each of these methods, the addition of the proposed dynamic features results in statistically significant improvement. We further propose a novel hierarchical age estimation architecture based on adaptive age grouping. We test our approach extensively, including an exploration of spontaneous versus posed smile dynamics, and gender-specific age estimation. We show that using spontaneity information reduces the mean absolute error by up to 21%, advancing the state of the art for facial age estimation.
Magidson, J F; Collado-Rodriguez, A; Madan, A; Perez-Camoirano, N A; Galloway, S K; Borckardt, J J; Campbell, W K; Miller, J D
2012-04-01
Narcissistic personality disorder (NPD) is characterized by an unrealistic need for admiration, lack of empathy toward others, and feelings of superiority. NPD presents a unique and significant challenge in clinical practice, particularly in medical settings with limited provider contact time, as health professionals treat individuals who often require excessive admiration and have competing treatment needs. This practice review highlights real case examples across three distinct medically oriented clinical settings (inpatient and outpatient behavioral medicine and a Level I trauma center) to demonstrate the difficult and compromising situations that providers face when treating patients with general medical conditions and comorbid narcissistic personality features. The main goal of this article is to discuss the various challenges and obstacles associated with these cases in medical settings and discuss some strategies that may prove successful. A second goal is to bridge diverse conceptualizations of narcissism/NPD through the discussion of theoretical and empirical perspectives that can inform understanding of the clinical examples. Despite differing perspectives regarding the underlying motivation of narcissistic behavior, this practice review highlights that these paradigms can be integrated when sharing the same ultimate goal: to improve delivery of care across medically oriented clinical settings for patients with narcissistic features.
Seeland, Marco; Rzanny, Michael; Alaqraa, Nedal; Wäldchen, Jana; Mäder, Patrick
2017-01-01
Steady improvements in image description methods have induced a growing interest in image-based plant species classification, a task vital to the study of biodiversity and ecological sensitivity. Various techniques have been proposed for general object classification over the past years, and several of them have already been studied for plant species classification. However, the results of these studies are selective in the evaluated steps of a classification pipeline, in the datasets used for evaluation, and in the compared baseline methods. No study is available that evaluates the main competing methods for building an image representation on the same datasets, allowing for generalized findings regarding flower-based plant species classification. The aim of this paper is to comparatively evaluate methods, method combinations, and their parameters with respect to classification accuracy. The investigated methods span detection, extraction, fusion, pooling, and encoding of local features for quantifying the shape and color information of flower images. We selected the flower image datasets Oxford Flower 17 and Oxford Flower 102, as well as our own Jena Flower 30 dataset, for our experiments. Our findings show large differences among the various studied techniques and demonstrate that their wisely chosen orchestration allows for high accuracy in species classification. We further found that true local feature detectors in combination with advanced encoding methods yield higher classification results at lower computational costs than the commonly used dense sampling and spatial pooling methods. Color was found to be an indispensable feature for high classification results, especially while preserving spatial correspondence to gray-level features. As a result, our study provides a comprehensive overview of competing techniques and the implications of their main parameters for flower-based plant species classification. PMID:28234999
Object technology: A white paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, S.R.; Arrowood, L.F.; Cain, W.D.
1992-05-11
Object-Oriented Technology (OOT), although not a new paradigm, has recently been prominently featured in the trade press and even general business publications. Indeed, the promises of object technology are alluring: the ability to handle complex design and engineering information through the full manufacturing production life cycle or to manipulate multimedia information, and the ability to improve programmer productivity in creating and maintaining high quality software. Groups at a number of the DOE facilities have been exploring the use of object technology for engineering, business, and other applications. In this white paper, the technology is explored thoroughly and compared with previous means of developing software and storing databases of information. Several specific projects within the DOE Complex are described, and the state of the commercial marketplace is indicated.
Finlayson, Nonie J.; Golomb, Julie D.
2016-01-01
A fundamental aspect of human visual perception is the ability to recognize and locate objects in the environment. Importantly, our environment is predominantly three-dimensional (3D), but while there is considerable research exploring the binding of object features and location, it is unknown how depth information interacts with features in the object binding process. A recent paradigm called the spatial congruency bias demonstrated that 2D location is fundamentally bound to object features (Golomb, Kupitz, & Thiemann, 2014), such that irrelevant location information biases judgments of object features, but irrelevant feature information does not bias judgments of location or other features. Here, using the spatial congruency bias paradigm, we asked whether depth is processed as another type of location, or more like other features. We initially found that depth cued by binocular disparity biased judgments of object color. However, this result seemed to be driven more by the disparity differences than the depth percept: Depth cued by occlusion and size did not bias color judgments, whereas vertical disparity information (with no depth percept) did bias color judgments. Our results suggest that despite the 3D nature of our visual environment, only 2D location information – not position-in-depth – seems to be automatically bound to object features, with depth information processed more similarly to other features than to 2D location. PMID:27468654
Forecast of the general aviation air traffic control environment for the 1980's
NASA Technical Reports Server (NTRS)
Hoffman, W. C.; Hollister, W. M.
1976-01-01
The critical information required for the design of a reliable, low cost, advanced avionics system which would enhance the safety and utility of general aviation is stipulated. Sufficient data is accumulated upon which industry can base the design of a reasonably priced system having the capability required by general aviation in and beyond the 1980's. The key features of the Air Traffic Control (ATC) system are: a discrete address beacon system, a separation assurance system, area navigation, a microwave landing system, upgraded ATC automation, airport surface traffic control, a wake vortex avoidance system, flight service stations, and aeronautical satellites. The critical parameters that are necessary for component design are identified. The four primary functions of ATC (control, surveillance, navigation, and communication) and their impact on the onboard avionics system design are assessed.
NASA Astrophysics Data System (ADS)
Sahoo, Madhumita; Sahoo, Satiprasad; Dhar, Anirban; Pradhan, Biswajeet
2016-10-01
Groundwater vulnerability assessment has been an accepted practice to identify the zones with relatively increased potential for groundwater contamination. DRASTIC is the most popular secondary information-based vulnerability assessment approach. The original DRASTIC approach considers the relative importance of features/sub-features based on subjective weighting/rating values. However, variability of features at a smaller scale is not reflected in this subjective vulnerability assessment process. In contrast to the subjective approach, the objective weighting-based methods provide flexibility in weight assignment depending on the variation of the local system. However, experts' opinion is not directly considered in the objective weighting-based methods. Thus, the effectiveness of both subjective and objective weighting-based approaches needs to be evaluated. In the present study, three methods, the entropy information method (E-DRASTIC), the fuzzy pattern recognition method (F-DRASTIC), and single-parameter sensitivity analysis (SA-DRASTIC), were used to modify the weights of the original DRASTIC features to include local variability. Moreover, a grey incidence analysis was used to evaluate the relative performance of the subjective (DRASTIC and SA-DRASTIC) and objective (E-DRASTIC and F-DRASTIC) weighting-based methods. The performance of the developed methodology was tested in an urban area of Kanpur City, India. Relative performance of the subjective and objective methods varies with the choice of water quality parameters. The methodology can be applied elsewhere, with or without suitable modification. These evaluations establish the potential applicability of the methodology for general vulnerability assessment in an urban context.
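The entropy information method above derives objective feature weights from the variability of the rated features themselves: features whose ratings vary more across grid cells carry more discriminating information and receive larger weights. A minimal sketch of that weighting step, assuming a hypothetical positive rating matrix standing in for the seven DRASTIC features (all names and data here are illustrative, not the study's):

```python
import numpy as np

def entropy_weights(R):
    """Entropy-based objective weights for rated features.
    R: (n_cells, n_features) matrix of positive DRASTIC-style ratings."""
    P = R / R.sum(axis=0)                          # normalize each feature column
    n = R.shape[0]
    H = -(P * np.log(P)).sum(axis=0) / np.log(n)   # normalized entropy, in [0, 1]
    d = 1.0 - H                                    # diversification: lower entropy -> higher weight
    return d / d.sum()                             # weights sum to 1

# hypothetical ratings for 200 grid cells and 7 DRASTIC features
R = np.random.default_rng(0).uniform(1.0, 10.0, size=(200, 7))
w = entropy_weights(R)
vulnerability_index = R @ w                        # weighted sum per grid cell
```

The resulting index plays the role of the DRASTIC vulnerability score, with the subjective expert weights replaced by data-driven ones.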
An Interval Type-2 Neural Fuzzy System for Online System Identification and Feature Elimination.
Lin, Chin-Teng; Pal, Nikhil R; Wu, Shang-Lin; Liu, Yu-Ting; Lin, Yang-Yin
2015-07-01
We propose an integrated mechanism for discarding derogatory features and extracting fuzzy rules based on an interval type-2 neural fuzzy system (NFS); in fact, it is a more general scheme that can discard bad features, irrelevant antecedent clauses, and even irrelevant rules. High-dimensional input variables and a large number of rules not only increase the computational complexity of NFSs but also reduce their interpretability. Therefore, a mechanism for simultaneous extraction of fuzzy rules and reduction of the impact of (or elimination of) inferior features is necessary. The proposed approach, namely an interval type-2 Neural Fuzzy System for online System Identification and Feature Elimination (IT2NFS-SIFE), uses type-2 fuzzy sets to model uncertainties associated with information and data in designing the knowledge base. The consequent part of the IT2NFS-SIFE is of Takagi-Sugeno-Kang type with interval weights. The IT2NFS-SIFE possesses a self-evolving property that can automatically generate fuzzy rules. Poor features can be discarded through the concept of a membership modulator. The antecedent and modulator weights are learned using a gradient descent algorithm. The consequent part weights are tuned via the rule-ordered Kalman filter algorithm to enhance learning effectiveness. Simulation results show that IT2NFS-SIFE not only simplifies the system architecture by eliminating derogatory/irrelevant antecedent clauses, rules, and features but also maintains excellent performance.
2011-01-01
Background Existing methods of predicting DNA-binding proteins used valuable features of physicochemical properties to design support vector machine (SVM) based classifiers. Generally, selection of physicochemical properties and determination of their corresponding feature vectors rely mainly on known properties of the binding mechanism and the experience of designers. However, designers face a troublesome problem: some distinct physicochemical properties have similar vectors representing the 20 amino acids, while some closely related properties have dissimilar vectors. Results This study proposes a systematic approach (named Auto-IDPCPs) to automatically identify a set of physicochemical and biochemical properties in the AAindex database to design SVM-based classifiers for predicting and analyzing DNA-binding domains/proteins. Auto-IDPCPs consists of 1) clustering 531 amino acid indices in AAindex into 20 clusters using a fuzzy c-means algorithm, 2) utilizing an efficient genetic algorithm based optimization method, IBCGA, to select an informative feature set of size m to represent sequences, and 3) analyzing the selected features to identify related physicochemical properties which may affect the binding mechanism of DNA-binding domains/proteins. The proposed Auto-IDPCPs identified m=22 features of properties belonging to five clusters for predicting DNA-binding domains with a five-fold cross-validation accuracy of 87.12%, which is promising compared with the accuracy of 86.62% of the existing method PSSM-400. For predicting DNA-binding sequences, an accuracy of 75.50% was obtained using m=28 features, where PSSM-400 has an accuracy of 74.22%. Auto-IDPCPs and PSSM-400 have accuracies of 80.73% and 82.81%, respectively, applied to an independent test data set of DNA-binding domains.
Some typical physicochemical properties discovered are hydrophobicity, secondary structure, charge, solvent accessibility, polarity, flexibility, normalized Van der Waals volume, pK (pK-C, pK-N, pK-COOH and pK-a(RCOOH)), etc. Conclusions The proposed approach Auto-IDPCPs would help designers to investigate informative physicochemical and biochemical properties by considering both prediction accuracy and analysis of the binding mechanism simultaneously. Auto-IDPCPs is also applicable to predicting and analyzing other protein functions from sequences. PMID:21342579
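Step 1 of Auto-IDPCPs groups the AAindex vectors (each a 20-dimensional vector, one value per amino acid) with fuzzy c-means, which assigns each index a soft membership in every cluster rather than a hard label. A minimal sketch of the algorithm on toy two-group data; the variable names and data are illustrative assumptions, not the AAindex itself:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns soft memberships U (rows sum to 1) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))           # random initial memberships
    for _ in range(iters):
        W = U ** m                                       # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]     # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                    # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# toy stand-in for 20-dim AAindex vectors: two well-separated groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (30, 20)), rng.normal(5.0, 0.1, (30, 20))])
U, centers = fuzzy_cmeans(X, c=2)
```

Hard cluster assignments, when needed, are simply `U.argmax(axis=1)`; the soft memberships are what makes downstream feature selection less sensitive to borderline indices.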
Guillaume, Fabrice; Etienne, Yann
2015-03-01
Using two exclusion tasks, the present study examined how the ERP correlates of face recognition are affected by the nature of the information to be retrieved. Intrinsic (facial expression) and extrinsic (background scene) visual information were paired with face identity and constituted the exclusion criterion at test time. Although perceptual information had to be taken into account in both situations, the FN400 old-new effect was observed only for old target faces on the expression-exclusion task, whereas it was found for both old target and old non-target faces in the background-exclusion situation. These results reveal that the FN400, which is generally interpreted as a correlate of familiarity, was modulated by the retrieval of intra-item and intrinsic face information, but not by the retrieval of extrinsic information. The observed effects on the FN400 depended on the nature of the information to be retrieved and its relationship (unitization) to the recognition target. On the other hand, the parietal old-new effect (generally described as an ERP correlate of recollection) reflected the retrieval of both types of contextual features equivalently. The current findings are discussed in relation to recent controversies about the nature of the recognition processes reflected by the ERP correlates of face recognition. Copyright © 2015 Elsevier B.V. All rights reserved.
Neural dynamics underlying attentional orienting to auditory representations in short-term memory.
Backer, Kristina C; Binns, Malcolm A; Alain, Claude
2015-01-21
Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics. Copyright © 2015 the authors.
Odor Recognition vs. Classification in Artificial Olfaction
NASA Astrophysics Data System (ADS)
Raman, Baranidharan; Hertz, Joshua; Benkstein, Kurt; Semancik, Steve
2011-09-01
Most studies in chemical sensing have focused on the problem of precise identification of chemical species that were exposed during the training phase (the recognition problem). However, generalization of training to predict the chemical composition of untrained gases based on their similarity with analytes in the training set (the classification problem) has received very limited attention. These two analytical tasks pose conflicting constraints on the system. While correct recognition requires detection of molecular features that are unique to an analyte, generalization to untrained chemicals requires detection of features that are common across a desired class of analytes. A simple solution that addresses both issues simultaneously can be obtained from biological olfaction, where the odor class and identity information are decoupled and extracted individually over time. Mimicking this approach, we proposed a hierarchical scheme that allowed initial discrimination between broad chemical classes (e.g. contains oxygen) followed by finer refinements using additional data into sub-classes (e.g. ketones vs. alcohols) and, eventually, specific compositions (e.g. ethanol vs. methanol) [1]. We validated this approach using an array of temperature-controlled chemiresistors. We demonstrated that a small set of training analytes is sufficient to allow generalization to novel chemicals and that the scheme provides robust categorization despite aging. Here, we provide further characterization of this approach.
Repetition blindness and illusory conjunctions: errors in binding visual types with visual tokens.
Kanwisher, N
1991-05-01
Repetition blindness (Kanwisher, 1986, 1987) has been defined as the failure to detect or recall repetitions of words presented in rapid serial visual presentation (RSVP). The experiments presented here suggest that repetition blindness (RB) is a more general visual phenomenon, and examine its relationship to feature integration theory (Treisman & Gelade, 1980). Experiment 1 shows RB for letters distributed through space, time, or both. Experiment 2 demonstrates RB for repeated colors in RSVP lists. In Experiments 3 and 4, RB was found for repeated letters and colors in spatial arrays. Experiment 5 provides evidence that the mental representations of discrete objects (called "visual tokens" here) that are necessary to detect visual repetitions (Kanwisher, 1987) are the same as the "object files" (Kahneman & Treisman, 1984) in which visual features are conjoined. In Experiment 6, repetition blindness for the second occurrence of a repeated letter resulted only when the first occurrence was attended to. The overall results suggest that a general dissociation between types and tokens in visual information processing can account for both repetition blindness and illusory conjunctions.
NASA Technical Reports Server (NTRS)
1985-01-01
A usable database, the Pilot Climate Data System (PCDS), is described. The PCDS is designed to be an interactive, easy-to-use, on-line generalized scientific information system. It efficiently provides uniform data catalogs, inventories, and access methods, as well as manipulation and display tools, for a large assortment of Earth, ocean, and atmospheric data for the climate-related research community. Researchers can employ the PCDS to scan, manipulate, compare, display, and study climate parameters from diverse data sets. Software features and applications of the PCDS are highlighted.
Instantaneous Assessment Of Athletic Performance Using High Speed Video
NASA Astrophysics Data System (ADS)
Hubbard, Mont; Alaways, LeRoy W.
1988-02-01
We describe the use of high speed video to provide quantitative assessment of motion in athletic performance. Besides the normal requirement for accuracy, an essential feature is that the information be provided rapidly enough so that it may serve as valuable feedback in the learning process. The general considerations which must be addressed in the development of such a computer-based system are discussed. These ideas are illustrated specifically through the description of a prototype system which has been designed for the javelin throw.
Mehlhorn, Julia; Rehkaemper, Gerd
2017-01-01
Homing pigeons are known for their excellent homing ability, and their brains seem to be functionally adapted to homing. It is known that pigeons with navigational experience show a larger hippocampus and also a more lateralised brain than pigeons without navigational experience. We therefore hypothesized that experience may also influence orientation ability. We examined two groups of pigeons (11 with navigational experience and 17 without) in a standard operant chamber with a touch screen monitor showing a 2-D schematic of a rectangular environment (as "geometric" information) and one uniquely shaped and colored feature in each corner (as "landmark" information). Pigeons were first trained to peck on one of these features, and then we examined their ability to encode geometric and landmark information in four tests by modifying the rectangular environment. All tests were done under binocular and monocular viewing to test hemispheric dominance. The number of pecks was counted for analysis. Results show that both groups generally orient on the basis of landmarks and the geometry of the environment, but landmark information was preferred. Pigeons with navigational experience did not perform better on the tests but showed a better conjunction of the different kinds of information. Significant differences between monocular and binocular viewing were detected, particularly in pigeons without navigational experience on two tests with reduced information. Our data suggest that geometric and landmark information might be integrated after being processed separately in each hemisphere and that this process is influenced by experience.
Automating annotation of information-giving for analysis of clinical conversation.
Mayfield, Elijah; Laws, M Barton; Wilson, Ira B; Penstein Rosé, Carolyn
2014-02-01
Coding of clinical communication for fine-grained features such as speech acts has produced a substantial literature. However, annotation by humans is laborious and expensive, limiting application of these methods. We aimed to show that through machine learning, computers could code certain categories of speech acts with sufficient reliability to make useful distinctions among clinical encounters. The data were transcripts of 415 routine outpatient visits of HIV patients which had previously been coded for speech acts using the Generalized Medical Interaction Analysis System (GMIAS); 50 had also been coded for larger scale features using the Comprehensive Analysis of the Structure of Encounters System (CASES). We aggregated selected speech acts into information-giving and requesting, then trained the machine to automatically annotate using logistic regression classification. We evaluated reliability by per-speech act accuracy. We used multiple regression to predict patient reports of communication quality from post-visit surveys using the patient and provider information-giving to information-requesting ratio (briefly, information-giving ratio) and patient gender. Automated coding produces moderate reliability with human coding (accuracy 71.2%, κ=0.57), with high correlation between machine and human prediction of the information-giving ratio (r=0.96). The regression significantly predicted four of five patient-reported measures of communication quality (r=0.263-0.344). The information-giving ratio is a useful and intuitive measure for predicting patient perception of provider-patient communication quality. These predictions can be made with automated annotation, which is a practical option for studying large collections of clinical encounters with objectivity, consistency, and low cost, providing greater opportunity for training and reflection for care providers.
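The summary measure driving the regression above, the information-giving ratio, is simple to compute once speech acts are coded. A sketch with hypothetical act labels (the real GMIAS codes are far more fine-grained than these two categories):

```python
from collections import Counter

# hypothetical aggregations of GMIAS speech-act codes
INFO_GIVING = {"give_information"}
INFO_REQUESTING = {"request_information"}

def information_giving_ratio(coded_acts):
    """Ratio of information-giving to information-requesting speech acts
    in one clinical encounter."""
    counts = Counter(coded_acts)
    giving = sum(counts[a] for a in INFO_GIVING)
    requesting = sum(counts[a] for a in INFO_REQUESTING)
    return giving / max(requesting, 1)          # guard against divide-by-zero

acts = ["give_information"] * 6 + ["request_information"] * 2 + ["social_chitchat"]
ratio = information_giving_ratio(acts)          # 6 / 2 = 3.0
```

Because the measure is a ratio of aggregate counts, moderate per-act coding errors tend to cancel, which is consistent with the high machine-human correlation (r=0.96) the study reports despite only moderate per-act accuracy.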
Zahiri, Javad; Mohammad-Noori, Morteza; Ebrahimpour, Reza; Saadat, Samaneh; Bozorgmehr, Joseph H; Goldberg, Tatyana; Masoudi-Nejad, Ali
2014-12-01
Protein-protein interaction (PPI) detection is one of the central goals of functional genomics and systems biology. Knowledge about the nature of PPIs can help fill the widening gap between sequence information and functional annotations. Although experimental methods have produced valuable PPI data, they also suffer from significant limitations. Computational PPI prediction methods have attracted tremendous attentions. Despite considerable efforts, PPI prediction is still in its infancy in complex multicellular organisms such as humans. Here, we propose a novel ensemble learning method, LocFuse, which is useful in human PPI prediction. This method uses eight different genomic and proteomic features along with four types of different classifiers. The prediction performance of this classifier selection method was found to be considerably better than methods employed hitherto. This confirms the complex nature of the PPI prediction problem and also the necessity of using biological information for classifier fusion. The LocFuse is available at: http://lbb.ut.ac.ir/Download/LBBsoft/LocFuse. The results revealed that if we divide proteome space according to the cellular localization of proteins, then the utility of some classifiers in PPI prediction can be improved. Therefore, to predict the interaction for any given protein pair, we can select the most accurate classifier with regard to the cellular localization information. Based on the results, we can say that the importance of different features for PPI prediction varies between differently localized proteins; however in general, our novel features, which were extracted from position-specific scoring matrices (PSSMs), are the most important ones and the Random Forest (RF) classifier performs best in most cases. LocFuse was developed with a user-friendly graphic interface and it is freely available for Linux, Mac OSX and MS Windows operating systems. Copyright © 2014 Elsevier Inc. All rights reserved.
Klabjan, Diego; Jonnalagadda, Siddhartha Reddy
2016-01-01
Background Community-based question answering (CQA) sites play an important role in addressing health information needs. However, a significant number of posted questions remain unanswered. Automatically answering the posted questions can provide a useful source of information for Web-based health communities. Objective In this study, we developed an algorithm to automatically answer health-related questions based on past questions and answers (QA). We also aimed to understand what information embedded within Web-based health content makes for good features in identifying valid answers. Methods Our proposed algorithm uses information retrieval techniques to identify candidate answers from resolved QA. To rank these candidates, we implemented a semi-supervised learning algorithm that extracts the best answer to a question. We assessed this approach on a curated corpus from Yahoo! Answers and compared it against a rule-based string similarity baseline. Results On our dataset, the semi-supervised learning algorithm has an accuracy of 86.2%. Unified Medical Language System-based (health-related) features used in the model enhance the algorithm's performance by approximately 8%. A reasonably high rate of accuracy is obtained given that the data are considerably noisy. Important features distinguishing a valid answer from an invalid answer include text length, the number of stop words contained in a test question, the distance between the test question and other questions in the corpus, and the number of overlapping health-related terms between questions. Conclusions Overall, our automated QA system based on historical QA pairs is shown to be effective on the dataset in this case study. It is developed for general use in the health care domain and can also be applied to other CQA sites. PMID:27485666
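The candidate-retrieval stage of such a pipeline, finding resolved questions most similar to a new one and proposing their answers, can be sketched with plain TF-IDF similarity (the semi-supervised ranking stage is omitted; the toy corpus and all names below are hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# hypothetical resolved (question, answer) pairs
resolved_qa = [
    ("What are symptoms of diabetes?", "Common symptoms include thirst, fatigue, and frequent urination."),
    ("How is blood pressure measured?", "It is measured with a cuff and sphygmomanometer."),
]

def candidate_answers(new_question, qa_pairs, top_k=1):
    """Retrieve answers of the past questions most similar to new_question."""
    questions = [q for q, _ in qa_pairs]
    vec = TfidfVectorizer().fit(questions + [new_question])
    sims = cosine_similarity(vec.transform([new_question]),
                             vec.transform(questions))[0]
    order = sims.argsort()[::-1][:top_k]        # most similar first
    return [qa_pairs[i][1] for i in order]
```

A real system would re-rank these candidates with the learned features (text length, stop-word counts, overlapping health terms) rather than returning the top retrieval hit directly.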
NASA Technical Reports Server (NTRS)
Hannah, J. W.; Thomas, G. L.; Esparza, F.
1975-01-01
A land use map of Orange County, Florida was prepared from EREP photography while LANDSAT and EREP multispectral scanner data were used to provide more detailed information on Orlando and its suburbs. The generalized maps were prepared by tracing the patterns on an overlay, using an enlarging viewer. Digital analysis of the multispectral scanner data was basically the maximum likelihood classification method with training sample input and computer printer mapping of the results. Urban features delineated by the maps are discussed. It is concluded that computer classification, accompanied by human interpretation and manual simplification can produce land use maps which are useful on a regional, county, and city basis.
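Maximum likelihood classification of the kind described assigns each pixel to the class whose training-sample Gaussian density is highest. A compact sketch with synthetic four-band "pixels" (all data and names here are illustrative, not the EREP/LANDSAT data):

```python
import numpy as np

class GaussianMLClassifier:
    """Per-class multivariate Gaussian maximum-likelihood classifier,
    the standard supervised approach for multispectral scanner data."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.params = {}
        for c in self.classes:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.params[c] = (mu, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes:
            mu, icov, logdet = self.params[c]
            d = X - mu
            # log-likelihood up to a constant: -0.5 * (log|Σ| + Mahalanobis distance)
            scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, icov, d)))
        return self.classes[np.argmax(scores, axis=0)]

# synthetic training samples: two well-separated 4-band land-cover classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (50, 4)), rng.normal(5.0, 0.5, (50, 4))])
y = np.r_[np.zeros(50, int), np.ones(50, int)]
clf = GaussianMLClassifier().fit(X, y)
```

The per-pixel class labels produced this way are what would then be rendered as a printer map and simplified by a human interpreter, as in the study.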
Spall, Henry; Schnabel, Diane C.
1989-01-01
Earthquakes and Volcanoes is published bimonthly by the U.S. Geological Survey to provide current information on earthquakes and seismology, volcanoes, and related natural hazards of interest to both generalized and specialized readers. The Secretary of the Interior has determined that the publication of this periodical is necessary in the transaction of the public business required by law of this Department. Use of funds for printing this periodical has been approved by the Office of Management and Budget through June 30, 1989. Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications
NASA Astrophysics Data System (ADS)
Budzan, Sebastian; Kasprzyk, Jerzy
2016-02-01
The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.
Holographic control of information and dynamical topology change for composite open quantum systems
NASA Astrophysics Data System (ADS)
Aref'eva, I. Ya.; Volovich, I. V.; Inozemcev, O. V.
2017-12-01
We analyze how the compositeness of a system affects the characteristic time of equilibration. We study the dynamics of open composite quantum systems strongly coupled to the environment after a quantum perturbation accompanied by nonequilibrium heating. We use a holographic description of the evolution of entanglement entropy. The nonsmooth character of the evolution with holographic entanglement is a general feature of composite systems, which demonstrate a dynamical change of topology in the bulk space and a jumplike velocity change of entanglement entropy propagation. Moreover, the number of jumps depends on the system configuration and especially on the number of composite parts. The evolution of the mutual information of two composite systems inherits these jumps. We present a detailed study of the mutual information for two subsystems with one of them being bipartite. We find five qualitatively different types of behavior of the mutual information dynamics and indicate the corresponding regions of the system parameters.
NASA Astrophysics Data System (ADS)
Carpenter, P. W.; Green, P. N.
1997-12-01
The literature on high-speed Coanda flows and its applications is reviewed. The lack of basic information for design engineers is noted. The present paper is based on an investigation of the aeroacoustics and aerodynamics of the high-speed Coanda flow that is formed when a supersonic jet issues from a radial nozzle and adheres to a tulip-shaped body of revolution. Schlieren and other flow visualization techniques together with theoretical methods are used to reveal the various features of this complex flow field. The acoustic characteristics were obtained from measurements with an array of microphones in an anechoic chamber. The emphasis is placed on those features of the aerodynamics and aeroacoustics which may be of general interest.
Gravity dual for a model of perception
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayama, Yu, E-mail: nakayama@berkeley.edu
2011-01-15
One of the salient features of human perception is its invariance under dilatation in addition to the Euclidean group, but its non-invariance under special conformal transformation. We investigate a holographic approach to the information processing in image discrimination with this feature. We claim that a strongly coupled analogue of the statistical model proposed by Bialek and Zee can be holographically realized in scale invariant but non-conformal Euclidean geometries. We identify the Bayesian probability distribution of our generalized Bialek-Zee model with the GKPW partition function of the dual gravitational system. We provide a concrete example of the geometric configuration based on a vector condensation model coupled with the Euclidean Einstein-Hilbert action. From the proposed geometry, we study sample correlation functions to compute the Bayesian probability distribution.
Cho, HyunGi; Yeon, Suyong; Choi, Hyunga; Doh, Nakju
2018-01-01
Among general geometric primitives, plane-based features are widely used for indoor localization because of their robustness against noise. However, a lack of linearly independent planes may lead to a non-trivial estimation. This, in turn, can cause a degenerate state in which not all states can be estimated. To solve this problem, this paper first proposed a degeneracy detection method. A compensation method that could fix orientations by projecting an inertial measurement unit's (IMU) information was then explained. Experiments were conducted using an IMU-Kinect v2 integrated sensor system prone to fall into degenerate cases owing to its narrow field-of-view. Results showed that the proposed framework could enhance map accuracy by successful detection and compensation of degenerated orientations. PMID:29565287
Ultraviolet reflectance properties of asteroids
NASA Astrophysics Data System (ADS)
Butterworth, P. S.; Meadows, A. J.
1985-05-01
An analysis of the UV spectra of 28 asteroids obtained with the International Ultraviolet Explorer (IUE) satellite is presented. The spectra lie within the range 2100-3200 Å. The results are examined in terms both of asteroid classification and of current ideas concerning the surface mineralogy of asteroids. For all the asteroids examined, UV reflectivity declines approximately linearly toward shorter wavelengths. In general, the same taxonomic groups are seen in the UV as in the visible and IR, although there is some evidence for asteroids with anomalous UV properties and for UV subclasses within the S class. No mineral absorption features are reported of strength similar to the strongest features in the visible and IR regions, but a number of shallow absorptions do occur and may provide valuable information on the surface composition of many asteroids.
Classification of Aerial Photogrammetric 3D Point Clouds
NASA Astrophysics Data System (ADS)
Becker, C.; Häni, N.; Rosinskaya, E.; d'Angelo, E.; Strecha, C.
2017-05-01
We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labelling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.
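The core idea of the record above, that adding color to geometric per-point features improves semantic labelling, can be sketched in a few lines. This is not the Pix4D pipeline; the feature choices, class names, and the nearest-centroid rule standing in for the paper's off-the-shelf classifier are all illustrative assumptions:

```python
# Illustrative sketch: augment a geometric per-point feature (height)
# with normalized color and classify with a simple nearest-centroid rule.

def features(point):
    x, y, z, r, g, b = point
    s = (r + g + b) or 1          # guard against all-zero color
    return (z, r / s, g / s, b / s)

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n
                 for i in range(len(vectors[0])))

def train(labelled):
    """labelled: dict class_name -> list of (x, y, z, r, g, b) points."""
    return {cls: centroid([features(p) for p in pts])
            for cls, pts in labelled.items()}

def classify(model, point):
    f = features(point)
    return min(model, key=lambda cls: sum((a - b) ** 2
                                          for a, b in zip(f, model[cls])))
```

Here a low green point separates from a high grey one even when their local geometry alone would be ambiguous, which is the effect the paper quantifies at scale.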
The semiology of febrile seizures: Focal features are frequent.
Takasu, Michihiko; Kubota, Tetsuo; Tsuji, Takeshi; Kurahashi, Hirokazu; Numoto, Shingo; Watanabe, Kazuyoshi; Okumura, Akihisa
2017-08-01
To clarify the semiology of febrile seizures (FS) and to determine the frequency of FS with symptoms suggestive of focal onset. FS symptoms in children were reported within 24h of seizure onset by the parents using a structured questionnaire consisting principally of closed-ended questions. We focused on events at seizure commencement, including changes in behavior and facial expression, and ocular and oral symptoms. We also investigated the autonomic and motor symptoms developing during seizures. The presence or absence of focal and limbic features was determined for each patient. The associations of certain focal and limbic features with patient characteristics were assessed. Information was obtained on FS in 106 children. Various events were recorded at seizure commencement. Behavioral changes were observed in 35 children, changes in facial expression in 53, ocular symptoms in 78, and oral symptoms in 90. In terms of events during seizures, autonomic symptoms were recognized in 78, and convulsive motor symptoms were recognized in 68 children. Focal features were evident in 81 children; 38 children had two or more such features. Limbic features were observed in 44 children, 9 of whom had two or more such features. There was no significant relationship between any patient characteristic and the numbers of focal or limbic features. The semiology of FS varied widely among children, and symptoms suggestive of focal onset were frequent. FS of focal onset may be more common than is generally thought. Copyright © 2017 Elsevier Inc. All rights reserved.
General anesthesia in cardiac surgery: a review of drugs and practices.
Alwardt, Cory M; Redford, Daniel; Larson, Douglas F
2005-06-01
General anesthesia is defined as complete anesthesia affecting the entire body with loss of consciousness, analgesia, amnesia, and muscle relaxation. There is a wide spectrum of agents able to partially or completely induce general anesthesia. Presently, there is not a single universally accepted technique for anesthetic management during cardiac surgery. Instead, the drugs and combinations of drugs used are derived from the pathophysiologic state of the patient and individual preference and experience of the anesthesiologist. According to the definition of general anesthesia, current practices consist of four main components: hypnosis, analgesia, amnesia, and muscle relaxation. Although many of the agents highlighted in this review are capable of producing more than one of these effects, it is logical that drugs producing these effects are given in combination to achieve the most beneficial effect. This review features a discussion of currently used anesthetic drugs and clinical practices of general anesthesia during cardiac surgery. The information in this particular review is derived from textbooks, current literature, and personal experience, and is designed as a general overview of anesthesia during cardiac surgery.
Heteromodal Cortical Areas Encode Sensory-Motor Features of Word Meaning.
Fernandino, Leonardo; Humphries, Colin J; Conant, Lisa L; Seidenberg, Mark S; Binder, Jeffrey R
2016-09-21
The capacity to process information in conceptual form is a fundamental aspect of human cognition, yet little is known about how this type of information is encoded in the brain. Although the role of sensory and motor cortical areas has been a focus of recent debate, neuroimaging studies of concept representation consistently implicate a network of heteromodal areas that seem to support concept retrieval in general rather than knowledge related to any particular sensory-motor content. We used predictive machine learning on fMRI data to investigate the hypothesis that cortical areas in this "general semantic network" (GSN) encode multimodal information derived from basic sensory-motor processes, possibly functioning as convergence-divergence zones for distributed concept representation. An encoding model based on five conceptual attributes directly related to sensory-motor experience (sound, color, shape, manipulability, and visual motion) was used to predict brain activation patterns associated with individual lexical concepts in a semantic decision task. When the analysis was restricted to voxels in the GSN, the model was able to identify the activation patterns corresponding to individual concrete concepts significantly above chance. In contrast, a model based on five perceptual attributes of the word form performed at chance level. This pattern was reversed when the analysis was restricted to areas involved in the perceptual analysis of written word forms. These results indicate that heteromodal areas involved in semantic processing encode information about the relative importance of different sensory-motor attributes of concepts, possibly by storing particular combinations of sensory and motor features. The present study used a predictive encoding model of word semantics to decode conceptual information from neural activity in heteromodal cortical areas. 
The model is based on five sensory-motor attributes of word meaning (color, shape, sound, visual motion, and manipulability) and encodes the relative importance of each attribute to the meaning of a word. This is the first demonstration that heteromodal areas involved in semantic processing can discriminate between different concepts based on sensory-motor information alone. This finding indicates that the brain represents concepts as multimodal combinations of sensory and motor representations. Copyright © 2016 the authors 0270-6474/16/369763-07$15.00/0.
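The encoding-and-identification logic described above can be sketched with toy numbers: each concept is a vector of five attribute ratings, each voxel has a weight per attribute, and a held-out activation pattern is identified by matching it against the model's predictions. The data, weights, and the squared-distance matching rule are assumptions for illustration, not the study's fitted model:

```python
# Minimal sketch of an attribute-based encoding model: predicted voxel
# activation is a weighted sum of a concept's sensory-motor attributes.

def predict(weights, attributes):
    """weights: list of per-voxel attribute weights; attributes: ratings."""
    return [sum(w * a for w, a in zip(row, attributes)) for row in weights]

def sqdist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def identify(weights, observed, candidates):
    """Pick the candidate concept whose predicted pattern best matches
    the observed activation pattern."""
    return min(candidates,
               key=lambda name: sqdist(predict(weights, candidates[name]),
                                       observed))
```

Identification accuracy above chance under this scheme is what the study reports for the general semantic network, and chance-level accuracy for the word-form attribute model.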
Emphasizing Social Features in Information Portals: Effects on New Member Engagement
Sharma, Nikhil; Butler, Brian S.; Irwin, Jeannie; Spallek, Heiko
2013-01-01
Many information portals are adding social features with hopes of enhancing the overall user experience. Invitations to join and welcome pages that highlight these social features are expected to encourage use and participation. While this approach is widespread and seems plausible, the effect of providing and highlighting social features remains to be tested. We studied the effects of emphasizing social features on users' response to invitations, their decisions to join, their willingness to provide profile information, and their engagement with the portal's social features. The results of a quasi-experiment found no significant effect of social emphasis in invitations on receivers' responsiveness. However, users receiving invitations highlighting social benefits were less likely to join the portal and provide profile information. Social emphasis in the initial welcome page for the site also was found to have a significant effect on whether individuals joined the portal, how much profile information they provided and shared, and how much they engaged with social features on the site. Unexpectedly, users who were welcomed in a social manner were less likely to join and provided less profile information; they also were less likely to engage with social features of the portal. This suggests that even in online contexts where social activity is an increasingly common feature, highlighting the presence of social features may not always be the optimal presentation strategy. PMID:23626487
Plastid: nucleotide-resolution analysis of next-generation sequencing and genomics data.
Dunn, Joshua G; Weissman, Jonathan S
2016-11-22
Next-generation sequencing (NGS) informs many biological questions with unprecedented depth and nucleotide resolution. These assays have created a need for analytical tools that enable users to manipulate data nucleotide-by-nucleotide robustly and easily. Furthermore, because many NGS assays encode information jointly within multiple properties of read alignments - for example, in ribosome profiling, the locations of ribosomes are jointly encoded in alignment coordinates and length - analytical tools are often required to extract the biological meaning from the alignments before analysis. Many assay-specific pipelines exist for this purpose, but there remains a need for user-friendly, generalized, nucleotide-resolution tools that are not limited to specific experimental regimes or analytical workflows. Plastid is a Python library designed specifically for nucleotide-resolution analysis of genomics and NGS data. As such, Plastid is designed to extract assay-specific information from read alignments while retaining generality and extensibility to novel NGS assays. Plastid represents NGS and other biological data as arrays of values associated with genomic or transcriptomic positions, and contains configurable tools to convert data from a variety of sources to such arrays. Plastid also includes numerous tools to manipulate even discontinuous genomic features, such as spliced transcripts, with nucleotide precision. Plastid automatically handles conversion between genomic and feature-centric coordinates, accounting for splicing and strand, freeing users of burdensome accounting. Finally, Plastid's data models use consistent and familiar biological idioms, enabling even beginners to develop sophisticated analytical workflows with minimal effort. Plastid is a versatile toolkit that has been used to analyze data from multiple NGS assays, including RNA-seq, ribosome profiling, and DMS-seq. 
It forms the genomic engine of our ORF annotation tool, ORF-RATER, and is readily adapted to novel NGS assays. Examples, tutorials, and extensive documentation can be found at https://plastid.readthedocs.io .
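The central abstraction described above, data as arrays of values over genomic positions with automatic handling of spliced features, can be sketched generically. This is NOT Plastid's actual API, only a minimal stand-in for the idea:

```python
# Generic sketch of nucleotide-resolution arrays: map read alignments to
# per-position counts, then slice out a spliced transcript's positions so
# downstream analysis works in transcript coordinates.

def count_array(chrom_length, alignments):
    """alignments: list of (start, end) half-open genomic intervals.
    Adds one count at each covered position (simple read coverage)."""
    counts = [0] * chrom_length
    for start, end in alignments:
        for pos in range(start, end):
            counts[pos] += 1
    return counts

def transcript_view(counts, exons):
    """exons: ordered (start, end) intervals on the genome; returns the
    counts in transcript coordinates, splicing out the introns."""
    view = []
    for start, end in exons:
        view.extend(counts[start:end])
    return view
```

The library additionally handles strand, discontinuous features, and configurable mapping rules (e.g., ribosome P-site offsets), which this sketch omits.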
NASA Astrophysics Data System (ADS)
Demir, I.; Krajewski, W. F.
2014-12-01
Recent advances in internet and cyberinfrastructure technologies have provided the capability to understand hydrological and meteorological systems at the space and time scales that are critical for accurate understanding and prediction of flooding and for emergency preparedness. A novel example of a cyberinfrastructure platform for flood preparedness and response is the Iowa Flood Center's Iowa Flood Information System (IFIS). IFIS is a one-stop web platform to access community-based flood conditions, forecasts, visualizations, inundation maps, and flood-related data, information, and applications. An enormous volume of real-time observational data from a variety of sensors and remote sensing resources (radars, rain gauges, stream sensors, etc.) and complex flood inundation models are staged in a user-friendly map environment that is accessible to the general public. IFIS has developed into a very successful tool used by agencies, decision-makers, and the general public throughout Iowa to better understand their local watersheds and their personal and community flood risk, and to monitor local stream and river levels. IFIS helps communities make better-informed decisions on the occurrence of floods and alerts communities in advance to help minimize flood damage. IFIS is widely used by the general public in Iowa and the Midwest region, with over 120,000 unique users, and has become a main source of information for many newspapers and TV stations in Iowa. IFIS has features for the general public to improve emergency preparedness, and for decision makers to support emergency response and recovery efforts. IFIS is also a great platform for educators and local authorities to educate students and the public on flooding, with games, an easy-to-use interactive environment, and a data-rich system.
Hyperspectral data discrimination methods
NASA Astrophysics Data System (ADS)
Casasent, David P.; Chen, Xuewen
2000-12-01
Hyperspectral data provides spectral response information that gives a detailed chemical, moisture, and other description of the constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS) in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.
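The validation-set protocol mentioned above can be made concrete with a small sketch: sweep a decision threshold on a scalar discriminant score, pick the value that maximizes the fraction correct (Pc) on the validation set, then apply it unchanged to test data. The scores here are synthetic placeholders, not the paper's spectral features:

```python
# Sketch of threshold selection on a validation set to fix the Pc
# operating point before touching the test set.

def pc(threshold, scores, labels):
    """Fraction correct when score > threshold predicts class 1."""
    preds = [1 if s > threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def best_threshold(scores, labels):
    """Sweep candidate thresholds drawn from the validation scores."""
    candidates = sorted(set(scores))
    return max(candidates, key=lambda t: pc(t, scores, labels))
```

Keeping the validation set separate from the test set, as the paper does, prevents the chosen operating point from overfitting the data used to report performance.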
NASA Technical Reports Server (NTRS)
Collins, R. J. (Principal Investigator); Mccown, F. P.; Stonis, L. P.; Petzel, G. J.; Everett, J. R.
1974-01-01
The author has identified the following significant results. ERTS-1 data give exploration geologists a new perspective for looking at the earth. The data are excellent for interpreting regional lithologic and structural relationships and quickly directing attention to areas of greatest exploration interest. Information derived from ERTS data useful for petroleum exploration includes: linear features, general lithologic distribution, identification of various anomalous features, some details of structures controlling hydrocarbon accumulation, overall structural relationships, and the regional context of the exploration province. Many anomalies (particularly geomorphic anomalies) correlate with known features of petroleum exploration interest. Linears interpreted from the imagery that were checked in the field correlate with fractures. Bands 5 and 7 and color composite imagery acquired during the periods of maximum and minimum vegetation vigor are best for geologic interpretation. Preliminary analysis indicates that use of ERTS imagery can substantially reduce the cost of petroleum exploration in relatively unexplored areas.
Rolling Bearing Fault Diagnosis Based on an Improved HTT Transform
Tang, Guiji; Tian, Tian; Zhou, Chong
2018-01-01
When rolling bearing failure occurs, vibration signals generally contain different signal components, such as impulsive fault feature signals, background noise and harmonic interference signals. One of the most challenging aspects of rolling bearing fault diagnosis is how to inhibit noise and harmonic interference signals, while enhancing impulsive fault feature signals. This paper presents a novel bearing fault diagnosis method, namely an improved Hilbert time–time (IHTT) transform, by combining a Hilbert time–time (HTT) transform with principal component analysis (PCA). Firstly, the HTT transform was performed on vibration signals to derive a HTT transform matrix. Then, PCA was employed to de-noise the HTT transform matrix in order to improve the robustness of the HTT transform. Finally, the diagonal time series of the de-noised HTT transform matrix was extracted as the enhanced impulsive fault feature signal and the contained fault characteristic information was identified through further analyses of amplitude and envelope spectra. Both simulated and experimental analyses validated the superiority of the presented method for detecting bearing failures. PMID:29662013
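The matrix-then-PCA-then-diagonal pipeline above can be sketched with a simplified stand-in: a Hankel (trajectory) matrix replaces the actual HTT transform matrix, PCA de-noising is done by truncated SVD, and the enhanced signal is read back off the diagonal. The matrix construction here is an assumption for illustration, not the paper's HTT transform:

```python
import numpy as np

# Simplified stand-in for the IHTT pipeline: matrix embedding of the
# signal, PCA-style de-noising via truncated SVD, diagonal extraction.

def hankel(signal, rows):
    """Trajectory matrix: row i holds signal[i : i + n] (stand-in for
    the HTT transform matrix, which is constructed differently)."""
    n = len(signal) - rows + 1
    return np.array([signal[i:i + n] for i in range(rows)])

def pca_denoise(matrix, k):
    """Keep the k largest singular components (PCA-style de-noising)."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k, :]

def enhanced_diagonal(signal, rows=8, k=1):
    """Diagonal time series of the de-noised matrix."""
    m = pca_denoise(hankel(signal, rows), k)
    return np.diag(m)
```

In the paper, the amplitude and envelope spectra of the extracted diagonal series are then inspected for the bearing fault characteristic frequency.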
NASA Technical Reports Server (NTRS)
Burgess, Malcolm A.; Thomas, Rickey P.
2004-01-01
This experiment investigated improvements to cockpit weather displays to better support the hazardous weather avoidance decision-making of general aviation pilots. Forty-eight general aviation pilots were divided into three equal groups and presented with a simulated flight scenario involving embedded convective activity. The control group had access to conventional sources of pre-flight and in-flight weather products. The two treatment groups were provided with a weather display that presented NEXRAD mosaic images, graphic depiction of METARs, and text METARs. One treatment group used a NEXRAD image looping feature and the second group used the National Convective Weather Forecast (NCWF) product overlaid on the NEXRAD display. Both of the treatment displays provided a significant increase in situation awareness, but they provided incomplete information required to deal with hazardous convective weather conditions and would require substantial pilot training to permit their safe and effective use.
NASA Technical Reports Server (NTRS)
Landgrebe, D.
1974-01-01
A broad study is described to evaluate a set of machine analysis and processing techniques applied to ERTS-1 data. Based on the analysis results in urban land use analysis and soil association mapping, together with previously reported results in general earth surface feature identification and crop species classification, a profile of the general applicability of this procedure is beginning to emerge. Put in the hands of a user who knows well the information needed from the data and is also familiar with the region to be analyzed, it appears that significantly useful information can be generated by these methods. When supported by preprocessing techniques such as the geometric correction and temporal registration capabilities, final products readily usable by user agencies appear possible. In parallel with application, through further research, there is much potential for further development of these techniques, both with regard to providing higher performance and in new situations not yet studied.
Shah, Sachin D.; Smith, Bruce D.; Clark, Allan K.; Payne, Jason
2008-01-01
In August 2007, the U.S. Geological Survey, in cooperation with the San Antonio Water System, conducted a hydrogeologic and geophysical investigation to characterize the hydrostratigraphy (hydrostratigraphic zones) and the hydrogeologic features (karst features such as sinkholes and caves) of the Edwards aquifer in a 16-square-kilometer area of northeastern Bexar County, Texas, undergoing urban development. Existing hydrostratigraphic information, enhanced by local-scale geologic mapping in the area, and surface geophysics were used to associate ranges of electrical resistivities obtained from capacitively coupled (CC) resistivity surveys, frequency-domain electromagnetic (FDEM) surveys, time-domain electromagnetic (TDEM) soundings, and two-dimensional direct-current (2D-DC) resistivity surveys with each of seven hydrostratigraphic zones (equivalent to members of the Kainer and Person Formations) of the Edwards aquifer. The principal finding of this investigation is the relation between electrical resistivity and the contacts between the hydrostratigraphic zones of the Edwards aquifer and the underlying Trinity aquifer in the area. In general, the TDEM data indicate a two-layer model in which an electrical conductor underlies an electrical resistor, which is consistent with the Trinity aquifer (conductor) underlying the Edwards aquifer (resistor). TDEM data also show the plane of Bat Cave fault, a well-known fault in the area, to be associated with a local, nearly vertical zone of low resistivity that provides evidence, although not definitive, for Bat Cave fault functioning as a flow barrier, at least locally. In general, the CC resistivity, FDEM survey, and 2D-DC resistivity survey data show a sharp electrical contrast from north to south, changing from high resistivity to low resistivity across Bat Cave fault as well as possible karst features in the study area. 
Interpreted karst features that show relatively low resistivity within a relatively high-resistivity area likely are attributable to clay or soil filling a sinkhole. In general, faults are inferred where lithologic incongruity indicates possible displacement. Along most inferred faults, displacement was not sufficient to place different members of the Kainer or Person Formations (hydrostratigraphic zones) adjacent across the inferred fault plane. In general, the Kainer Formation (hydrostratigraphic zones V through VIII) has a higher resistivity than the Person Formation (hydrostratigraphic zones II through IV). Although resistivity variations from the CC resistivity, FDEM, and 2D-DC resistivity surveys, with mapping information, were sufficient to allow surface mapping of the lateral extent of hydrostratigraphic zones in places, resistivity variations from TDEM data were not sufficient to allow vertical delineation of hydrostratigraphic zones; however, the Edwards aquifer-Trinity aquifer contact could be identified from the TDEM data.
Surprise! Infants consider possible bases of generalization for a single input example.
Gerken, LouAnn; Dawson, Colin; Chatila, Razanne; Tenenbaum, Josh
2015-01-01
Infants have been shown to generalize from a small number of input examples. However, existing studies allow two possible means of generalization. One is via a process of noting similarities shared by several examples. Alternatively, generalization may reflect an implicit desire to explain the input. The latter view suggests that generalization might occur when even a single input example is surprising, given the learner's current model of the domain. To test the possibility that infants are able to generalize based on a single example, we familiarized 9-month-olds with a single three-syllable input example that contained either one surprising feature (syllable repetition, Experiment 1) or two features (repetition and a rare syllable, Experiment 2). In both experiments, infants generalized only to new strings that maintained all of the surprising features from familiarization. This research suggests that surprise can promote very rapid generalization. © 2014 John Wiley & Sons Ltd.
A novel feature extraction approach for microarray data based on multi-algorithm fusion
Jiang, Zhu; Xu, Rong
2015-01-01
Feature extraction is one of the most important and effective methods of dimension reduction in data mining, given the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, generally without considering inter-relationships between features, while set-based feature extraction evaluates features based on their role in a feature set by taking into account dependencies between features. Just as with learning methods, feature extraction has a problem in its generalization ability, namely its robustness. However, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested against gene expression datasets including Colon cancer data, CNS data, DLBCL data, and Leukemia data. The testing results show that the performance of this algorithm is better than that of existing solutions. PMID:25780277
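The fusion idea above can be sketched with a toy rank-aggregation scheme: combine the gene rankings produced by different selection criteria into one consensus ranking and keep the top-k genes. The Borda-style summation and the placeholder scores below are illustrative assumptions, not the paper's fusion algorithm:

```python
# Hedged sketch of multi-algorithm fusion for gene selection: aggregate
# rankings from several scoring criteria into one consensus ranking.

def rank(scores):
    """Map each gene index to its rank (0 = best, highest score first)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return {gene: r for r, gene in enumerate(order)}

def fuse_rankings(score_lists, k):
    """Borda-style fusion: sum ranks across criteria, keep the k best.
    score_lists: one list of per-gene scores per extraction algorithm."""
    ranks = [rank(s) for s in score_lists]
    total = {g: sum(r[g] for r in ranks) for g in ranks[0]}
    return sorted(sorted(total), key=lambda g: total[g])[:k]
```

A gene that scores highly under every criterion keeps a low summed rank, which is the robustness intuition behind fusing ranking-based and set-based extractors.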
Yeh, Hsiang J.; Guindani, Michele; Vannucci, Marina; Haneef, Zulfi; Stern, John M.
2018-01-01
Estimation of functional connectivity (FC) has become an increasingly powerful tool for investigating healthy and abnormal brain function. Static connectivity, in particular, has played a large part in guiding conclusions from the majority of resting-state functional MRI studies. However, accumulating evidence points to the presence of temporal fluctuations in FC, leading to increasing interest in estimating FC as a dynamic quantity. One central issue that has arisen in this new view of connectivity is the dramatic increase in complexity caused by dynamic functional connectivity (dFC) estimation. To computationally handle this increased complexity, a limited set of dFC properties, primarily the mean and variance, have generally been considered. Additionally, it remains unclear how to integrate the increased information from dFC into pattern recognition techniques for subject-level prediction. In this study, we propose an approach to address these two issues based on a large number of previously unexplored temporal and spectral features of dynamic functional connectivity. A Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model is used to estimate time-varying patterns of functional connectivity between resting-state networks. Time-frequency analysis is then performed on dFC estimates, and a large number of previously unexplored temporal and spectral features drawn from signal processing literature are extracted for dFC estimates. We apply the investigated features to two neurologic populations of interest, healthy controls and patients with temporal lobe epilepsy, and show that the proposed approach leads to substantial increases in predictive performance compared to both traditional estimates of static connectivity as well as current approaches to dFC. Variable importance is assessed and shows that there are several quantities that can be extracted from dFC signal which are more informative than the traditional mean or variance of dFC. 
This work illuminates many previously unexplored facets of the dynamic properties of functional connectivity between resting-state networks, and provides a platform for dynamic functional connectivity analysis that facilitates its usage as an investigative measure for healthy as well as abnormal brain function. PMID:29320526
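The feature-extraction step above can be sketched in simplified form. The paper estimates dFC with a GARCH model; here a plain sliding-window correlation is used as a stand-in, and a few temporal and spectral summaries beyond the traditional mean and variance are extracted from the resulting dFC time series. The window size and feature names are illustrative:

```python
import math

# Simplified stand-in for the dFC feature pipeline: sliding-window
# correlation (in place of GARCH) plus temporal/spectral summaries.

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def sliding_dfc(x, y, window):
    """Time-varying connectivity between two network time series."""
    return [corr(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]

def dfc_features(series):
    """Mean, variance, and dominant fluctuation frequency (DFT bin)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((s - mean) ** 2 for s in series) / n
    mags = [abs(sum(series[t] * complex(math.cos(2 * math.pi * f * t / n),
                                        -math.sin(2 * math.pi * f * t / n))
                    for t in range(n)))
            for f in range(1, n // 2 + 1)]
    return {"mean": mean, "variance": var,
            "dominant_freq": 1 + mags.index(max(mags))}
```

The spectral summary is one example of the "previously unexplored" dFC quantities the study finds more informative than the mean or variance alone.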
Locus and persistence of capacity limitations in visual information processing.
Kleiss, J A; Lane, D M
1986-05-01
Although there is considerable evidence that stimuli such as digits and letters are extensively processed in parallel and without capacity limitations, recent data suggest that only the features of stimuli are processed in parallel. In an attempt to reconcile this discrepancy, we used the simultaneous/successive detection paradigm with stimuli from experiments indicating parallel processing and with stimuli from experiments indicating that only features can be processed in parallel. In Experiment 1, large differences between simultaneous and successive presentations were obtained with an R target among P and Q distractors and among P and B distractors, but not with digit targets among letter distractors. As predicted by the feature integration theory of attention, false-alarm rates in the simultaneous condition were much higher than in the successive condition with the R/PQ stimuli. In Experiment 2, the possibility that attention is required for any difficult discrimination was ruled out as an explanation of the discrepancy between the digit/letter results and the R/PQ and R/PB results. Experiment 3A replicated the R/PQ and R/PB results of Experiment 1, and Experiment 3B extended these findings to a new set of stimuli. In Experiment 4, we found that large amounts of consistent practice did not generally eliminate capacity limitations. From this series of experiments we strongly conclude that the notion of capacity-free letter perception has limited generality.
NASA Astrophysics Data System (ADS)
Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia
2018-05-01
Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
Hierarchical Feedback Modules and Reaction Hubs in Cell Signaling Networks
Xu, Jianfeng; Lan, Yueheng
2015-01-01
Despite much effort, identification of modular structures and study of their organizing and functional roles remain a formidable challenge in molecular systems biology, which, however, is essential in reaching a systematic understanding of large-scale cell regulation networks and hence gaining capacity of exerting effective interference to cell activity. Combining graph theoretic methods with available dynamics information, we successfully retrieved multiple feedback modules of three important signaling networks. These feedbacks are structurally arranged in a hierarchical way and dynamically produce layered temporal profiles of output signals. We found that global and local feedbacks act in very different ways and on distinct features of the information flow conveyed by signal transduction but work highly coordinately to implement specific biological functions. The redundancy embodied with multiple signal-relaying channels and feedback controls bestow great robustness and the reaction hubs seated at junctions of different paths announce their paramount importance through exquisite parameter management. The current investigation reveals intriguing general features of the organization of cell signaling networks and their relevance to biological function, which may find interesting applications in analysis, design and control of bio-networks. PMID:25951347
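The graph-theoretic step described above, retrieving feedback modules and identifying reaction hubs, can be sketched on a toy directed network. Cycle detection by depth-first search and a total-degree hub criterion are simple stand-ins for the paper's methods, and the example network is invented:

```python
# Sketch: feedback modules as directed cycles, hubs as high-degree nodes.

def find_cycle(graph):
    """Return one directed cycle as a list of nodes, or None."""
    def dfs(node, path, on_path):
        for nxt in graph.get(node, []):
            if nxt in on_path:
                return path[path.index(nxt):]
            found = dfs(nxt, path + [nxt], on_path | {nxt})
            if found:
                return found
        return None
    for start in graph:
        cycle = dfs(start, [start], {start})
        if cycle:
            return cycle
    return None

def hubs(graph, top=1):
    """Nodes with the largest total degree (in + out)."""
    degree = {n: len(v) for n, v in graph.items()}
    for targets in graph.values():
        for t in targets:
            degree[t] = degree.get(t, 0) + 1
    return sorted(sorted(degree), key=lambda n: -degree[n])[:top]
```

In the study, such structurally identified feedbacks are then combined with dynamics information to separate the roles of global and local loops.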
Map and database of Quaternary faults in Venezuela and its offshore regions
Audemard, F.A.; Machette, M.N.; Cox, J.W.; Dart, R.L.; Haller, K.M.
2000-01-01
As part of the International Lithosphere Program’s “World Map of Major Active Faults,” the U.S. Geological Survey is assisting in the compilation of a series of digital maps of Quaternary faults and folds in Western Hemisphere countries. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. They are accompanied by databases that describe these features and document current information on their activity in the Quaternary. The project is a key part of the Global Seismic Hazards Assessment Program (ILP Project II-0) for the International Decade for Natural Hazard Disaster Reduction. The project is sponsored by the International Lithosphere Program and funded by the USGS’s National Earthquake Hazards Reduction Program. The primary elements of the project are general supervision and interpretation of geologic/tectonic information, data compilation and entry for the fault catalog, database design and management, and digitization and manipulation of data in ARC/INFO. For the compilation of data, we engaged experts in Quaternary faulting, neotectonics, paleoseismology, and seismology.
Making YOHKOH SXT Images Available to the Public: The YOHKOH Public Outreach Project
NASA Astrophysics Data System (ADS)
Larson, M. B.; McKenzie, D.; Slater, T.; Acton, L.; Alexander, D.; Freeland, S.; Lemen, J.; Metcalf, T.
1999-05-01
The NASA-funded Yohkoh Public Outreach Project (YPOP) provides public access to high quality Yohkoh SXT data via the World Wide Web. The products of this effort are available to the scientific research community, K-12 schools, and informal education centers including planetaria, museums, and libraries. The project utilizes the intrinsic excitement of the SXT data, and in particular the SXT movies, to develop science learning tools and classroom activities. The WWW site at URL: http://solar.physics.montana.edu/YPOP/ uses a movie theater theme to highlight available Yohkoh movies in a format that is entertaining and inviting to non-scientists. The site features informational tours of the Sun as a star, the solar magnetic field, the internal structure and the Sun's general features. The on-line Solar Classroom has proven very popular, showcasing hands-on activities about image filtering, the solar cycle, satellite orbits, image processing, construction of a model Yohkoh satellite, solar rotation, measuring sunspots and building a portable sundial. The YPOP Guestbook has been helpful in evaluating the usefulness of the site, with over 300 detailed comments to date.
A general UNIX interface for biocomputing and network information retrieval software.
Kiong, B K; Tan, T W
1993-10-01
We describe a UNIX program, HYBROW, which can integrate without modification a wide range of UNIX biocomputing and network information retrieval software. HYBROW works in conjunction with a separate set of ASCII files containing embedded hypertext-like links. The program operates like a hypertext browser featuring five basic links: file link, execute-only link, execute-display link, directory-browse link and field-filling link. Useful features of the interface may be developed using combinations of these links with simple shell scripts and examples of these are briefly described. The system manager who supports biocomputing users should find the program easy to maintain, and useful in assisting new and infrequent users; it is also simple to incorporate new programs. Moreover, the individual user can customize the interface, create dynamic menus, hypertext a document, invoke shell scripts and new programs simply with a basic understanding of the UNIX operating system and any text editor. This program was written in C language and uses the UNIX curses and termcap libraries. It is freely available as a tar compressed file (by anonymous FTP from nuscc.nus.sg).
Task relevance modulates the cortical representation of feature conjunctions in the target template.
Reeder, Reshanne R; Hanke, Michael; Pollmann, Stefan
2017-07-03
Little is known about the cortical regions involved in representing task-related content in preparation for visual task performance. Here we used representational similarity analysis (RSA) to investigate the BOLD response pattern similarity between task relevant and task irrelevant feature dimensions during conjunction viewing and target template maintenance prior to visual search. Subjects were cued to search for a spatial frequency (SF) or orientation of a Gabor grating and we measured BOLD signal during cue and delay periods before the onset of a search display. RSA of delay period activity revealed that widespread regions in frontal, posterior parietal, and occipitotemporal cortices showed general representational differences between task relevant and task irrelevant dimensions (e.g., orientation vs. SF). In contrast, RSA of cue period activity revealed sensory-related representational differences between cue images (regardless of task) at the occipital pole and additionally in the frontal pole. Our data show that task and sensory information are represented differently during viewing and during target template maintenance, and that task relevance modulates the representation of visual information across the cortex.
NASA Astrophysics Data System (ADS)
Saraceno, Marcos; Ermann, Leonardo; Cormick, Cecilia
2017-03-01
The problem of finding symmetric informationally complete positive-operator-valued-measures (SIC-POVMs) has been solved numerically for all dimensions d up to 67 [A. J. Scott and M. Grassl, J. Math. Phys. 51, 042203 (2010), 10.1063/1.3374022], but a general proof of existence is still lacking. For each dimension, it was shown that it is possible to find a SIC-POVM that is generated from a fiducial state upon application of the operators of the Heisenberg-Weyl group. We draw on the numerically determined fiducial states to study their phase-space features, as displayed by the characteristic function and the Wigner, Bargmann, and Husimi representations, adapted to a Hilbert space of finite dimension. We analyze the phase-space localization of fiducial states, and observe that the SIC-POVM condition is equivalent to a maximal delocalization property. Finally, we explore the consequences in phase space of the conjectured Zauner symmetry. In particular, we construct a Hermitian operator commuting with this symmetry that leads to a representation of fiducial states in terms of eigenfunctions with definite semiclassical features.
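As a concrete illustration of the Heisenberg-Weyl construction the abstract describes, the following sketch checks the defining SIC overlap condition |⟨ψ|D_jk|ψ⟩|² = 1/(d+1) in the smallest case d = 2, using the well-known qubit fiducial with Bloch vector (1,1,1)/√3. The choice of fiducial and the omission of the displacement operators' overall phases (irrelevant to the squared overlaps) are this sketch's own simplifications, not taken from the paper:

```python
import cmath
import math

d = 2
omega = cmath.exp(2j * math.pi / d)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

def matpow(M, n):
    R = [[1 if i == j else 0 for j in range(d)] for i in range(d)]
    for _ in range(n):
        R = matmul(R, M)
    return R

# Shift (X) and clock (Z) generators of the Heisenberg-Weyl group
X = [[1 if (i - j) % d == 1 else 0 for j in range(d)] for i in range(d)]
Z = [[omega ** i if i == j else 0 for j in range(d)] for i in range(d)]

# A known qubit SIC fiducial: Bloch vector (1, 1, 1) / sqrt(3)
theta = math.acos(1 / math.sqrt(3))
psi = [math.cos(theta / 2), cmath.exp(1j * math.pi / 4) * math.sin(theta / 2)]

def overlap_sq(j, k):
    """|<psi| X^j Z^k |psi>|^2 (overall phases of D_jk drop out here)."""
    D = matmul(matpow(X, j), matpow(Z, k))
    Dpsi = [sum(D[i][m] * psi[m] for m in range(d)) for i in range(d)]
    amp = sum(psi[i].conjugate() * Dpsi[i] for i in range(d))
    return abs(amp) ** 2

# SIC condition: all overlaps equal 1 / (d + 1) for (j, k) != (0, 0)
for j in range(d):
    for k in range(d):
        if (j, k) != (0, 0):
            print(j, k, round(overlap_sq(j, k), 6))  # each prints 0.333333
```

For larger d one would substitute the numerically determined fiducials of Scott and Grassl; the group construction is unchanged.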
Parents' Verbal Communication and Childhood Anxiety: A Systematic Review.
Percy, Ray; Creswell, Cathy; Garner, Matt; O'Brien, Doireann; Murray, Lynne
2016-03-01
Parents' verbal communication to their child, particularly the expression of fear-relevant information (e.g., attributions of threat to the environment), is considered to play a key role in children's fears and anxiety. This review considers the extent to which parental verbal communication is associated with child anxiety by examining research that has employed objective observational methods. Using a systematic search strategy, we identified 15 studies that addressed this question. These studies provided some evidence that particular fear-relevant features of parental verbal communication are associated with child anxiety under certain conditions. However, the scope for drawing reliable, general conclusions was limited by extensive methodological variation between studies, particularly in terms of the features of parental verbal communication examined and the context in which communication took place, how child anxiety was measured, and inconsistent consideration of factors that may moderate the verbal communication-child anxiety relationship. We discuss ways in which future research can contribute to this developing evidence base and reduce further methodological inconsistency so as to inform interventions for children with anxiety problems.
[Analysis on clinical features and treatment of herpes zoster patients hospitalized in real world].
Yuan, Ling-Lian; Wang, Lian-Xin; Xie, Yan-Ming; Yang, Wei; Yang, Zhi-Xin; Zhuang, Yan; Zhang, Yun-Bi
2014-09-01
From the hospital information system (HIS) databases of 20 national grade III-A general hospitals, 2 960 hospitalized cases of herpes zoster were taken as the research object. We analyzed the patients' general information, traditional Chinese medicine (TCM) syndromes, comorbid western-medicine diseases, the relationship between solar terms and the incidence of herpes zoster, and the combined use of Chinese and western medicine. Patients aged 46-65 years accounted for the highest proportion of cases; admission through the general outpatient clinic was most common; the most common form of medical payment was medicare; comorbidities such as hypertension, diabetes and coronary heart disease were common; early treatment of herpes zoster gave better results than treatment of its sequelae; more patients were hospitalized during the summer and autumn solar terms; and the most common TCM syndrome was damp heat of liver fire. Regarding drugs, the most commonly used western medicines were vitamin B1 and mecobalamin, the most frequently used traditional Chinese medicine was Danhong injection, and combination therapy paired blood-circulation-promoting drugs with neurotrophic drugs. Thus, herpes zoster is more common in elderly patients, shows no obvious relationship with solar terms, should be diagnosed and treated early, and is often treated with a combination of traditional Chinese and western medicine.
On multivariate trace inequalities of Sutter, Berta, and Tomamichel
NASA Astrophysics Data System (ADS)
Lemm, Marius
2018-01-01
We consider a family of multivariate trace inequalities recently derived by Sutter, Berta, and Tomamichel. These inequalities generalize the Golden-Thompson inequality and Lieb's triple matrix inequality to an arbitrary number of matrices in a way that features complex matrix powers (i.e., certain unitaries). We show that their inequalities can be rewritten as an n-matrix generalization of Lieb's original triple matrix inequality. The complex matrix powers are replaced by resolvents and appropriate maximally entangled states. We expect that the technically advantageous properties of resolvents, in particular for perturbation theory, can be of use in applications of the n-matrix inequalities, e.g., for analyzing the performance of the rotated Petz recovery map in quantum information theory and for removing the unitaries altogether.
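For orientation, the two classical inequalities being generalized can be stated in their standard forms (quoted from the general literature, not from the paper itself), for Hermitian matrices:

```latex
% Golden-Thompson inequality (H_1, H_2 Hermitian):
\operatorname{tr} e^{H_1 + H_2} \;\le\; \operatorname{tr}\left( e^{H_1} e^{H_2} \right)

% Lieb's triple matrix inequality (H_1, H_2, H_3 Hermitian):
\operatorname{tr} e^{H_1 + H_2 + H_3} \;\le\;
\int_0^\infty \operatorname{tr}\!\left[ e^{H_1} \left( e^{-H_2} + t \right)^{-1}
e^{H_3} \left( e^{-H_2} + t \right)^{-1} \right] dt
```

Note that resolvents of the form (e^{-H_2} + t)^{-1} already appear in Lieb's original inequality, which is the shape the n-matrix rewriting described in the abstract generalizes.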
Safety and General Considerations for the Use of Antibodies in Infectious Diseases.
Hey, Adam Seidelin
2017-01-01
Monoclonal antibodies are valuable potential new tools for meeting unmet needs in treating infectious diseases and for providing alternatives and supplements to antibiotics in these times of growing resistance. This is especially true considering the ability to screen for antibodies reacting to very diverse target antigens, and the ability to design and engineer them to act specifically against pathogens' strategies, such as toxins and hiding in specific cells to evade the immune response, and to equip them with special features enabling killing of the infectious agents and/or the cells harbouring them. Antibodies are generally very safe, and adverse effects of treatment with therapeutic antibodies are usually related to exaggeration of the intended pharmacology. In this chapter, general safety considerations for the use of antibodies are reviewed, along with the general procedures for nonclinical testing to support their clinical development. Special considerations for anti-infective mAb treatments are provided, including the special features that make nonclinical safety programs for anti-infective mAbs much simpler and more restricted, although at a cost, since only limited information for clinical safety and modeling can be derived from such programs. Strategies for optimally designing antibodies are then discussed, including the use of combinations of antibodies. Finally, ways to facilitate the development of more than the currently only three approved mAb-based treatments are discussed, with a special focus on high costs and high prices, and on how collaboration and new strategies for development in emerging markets can be a driver.
Information processing in dendrites I. Input pattern generalisation.
Gurney, K N
2001-10-01
In this paper and its companion, we address the question as to whether there are any general principles underlying information processing in the dendritic trees of biological neurons. In order to address this question, we make two assumptions. First, the key architectural feature of dendrites responsible for many of their information processing abilities is the existence of independent sub-units performing local non-linear processing. Second, any general functional principles operate at a level of abstraction in which neurons are modelled by Boolean functions. To accommodate these assumptions, we therefore define a Boolean model neuron-the multi-cube unit (MCU)-which instantiates the notion of the discrete functional sub-unit. We then use this model unit to explore two aspects of neural functionality: generalisation (in this paper) and processing complexity (in its companion). Generalisation is dealt with from a geometric viewpoint and is quantified using a new metric-the set of order parameters. These parameters are computed for threshold logic units (TLUs), a class of random Boolean functions, and MCUs. Our interpretation of the order parameters is consistent with our knowledge of generalisation in TLUs and with the lack of generalisation in randomly chosen functions. Crucially, the order parameters for MCUs imply that these functions possess a range of generalisation behaviour. We argue that this supports the general thesis that dendrites facilitate input pattern generalisation despite any local non-linear processing within functionally isolated sub-units.
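The contrast between the generalisation of TLUs and of randomly chosen Boolean functions can be illustrated with a toy "neighbour agreement" measure: the fraction of single-bit input flips that leave the output unchanged. This measure and the majority-style TLU below are illustrative stand-ins, not Gurney's order parameters; a smooth threshold function keeps its output under most single-bit flips, while a random Boolean function flips about half the time:

```python
import random

random.seed(0)
n = 8

def tlu(x, w, theta):
    """Threshold logic unit: Boolean output of a weighted sum of Boolean inputs."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)

w = [1] * n        # a simple majority-style TLU
theta = n / 2

rand_table = {i: random.randint(0, 1) for i in range(2 ** n)}
def rand_fn(x):
    """A randomly chosen Boolean function, stored as a lookup table."""
    return rand_table[int("".join(map(str, x)), 2)]

def neighbour_agreement(f):
    """Fraction of single-bit input flips that leave f's output unchanged.
    High agreement ~ smooth function ~ strong input-pattern generalisation."""
    agree = total = 0
    for i in range(2 ** n):
        x = [(i >> b) & 1 for b in range(n)]
        fx = f(x)
        for b in range(n):
            y = list(x)
            y[b] ^= 1
            agree += (f(y) == fx)
            total += 1
    return agree / total

print(round(neighbour_agreement(lambda x: tlu(x, w, theta)), 3))  # well above 0.5
print(round(neighbour_agreement(rand_fn), 3))                     # close to 0.5
```

A multi-cube unit built from local non-linear sub-units would sit between these two extremes, which is the range of generalisation behaviour the paper's order parameters quantify.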
Reducing Sweeping Frequencies in Microwave NDT Employing Machine Learning Feature Selection
Moomen, Abdelniser; Ali, Abdulbaset; Ramahi, Omar M.
2016-01-01
Nondestructive Testing (NDT) assessment of materials’ health condition is useful for classifying healthy from unhealthy structures or detecting flaws in metallic or dielectric structures. Performing structural health testing for coated/uncoated metallic or dielectric materials with the same testing equipment requires a testing method that can work on both metallics and dielectrics, such as microwave testing. Reducing the complexity and expenses associated with current diagnostic practices of microwave NDT of structural health requires an effective and intelligent approach based on the feature selection and classification techniques of machine learning. Current microwave NDT methods are in general based on measuring variation in the S-matrix over the entire operating frequency ranges of the sensors. For instance, assessing the health of metallic structures using a microwave sensor depends on reflection and/or transmission coefficient measurements as a function of the sweeping frequencies of the operating band. The aim of this work is to reduce the sweeping frequencies using machine learning feature selection techniques. By treating sweeping frequencies as features, the top important features can be identified, and then only the most influential features (frequencies) are considered when building the microwave NDT equipment. The proposed method of reducing sweeping frequencies was validated experimentally using a waveguide sensor and a metallic plate with different cracks. Among the investigated feature selection techniques are information gain, gain ratio, Relief, and chi-squared. The effectiveness of the selected features was validated through performance evaluations of various classification models, namely Nearest Neighbor, Neural Networks, Random Forest, and Support Vector Machine. Results showed good crack classification accuracy rates after employing feature selection algorithms. PMID:27104533
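The frequency-ranking idea can be sketched with plain information gain, one of the feature selection techniques the abstract names. The toy dataset and single-threshold splitting below are this sketch's own simplifications, not the paper's experimental setup:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels, threshold):
    """Information gain of splitting the labels at `threshold` on one feature."""
    left = [y for x, y in zip(feature_values, labels) if x <= threshold]
    right = [y for x, y in zip(feature_values, labels) if x > threshold]
    if not left or not right:
        return 0.0
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

# Toy data: rows are |S11| readings at 3 sweeping frequencies; labels = cracked?
X = [[0.9, 0.2, 0.5],
     [0.8, 0.3, 0.5],
     [0.2, 0.9, 0.5],
     [0.1, 0.8, 0.5]]
y = [0, 0, 1, 1]

# Rank frequencies (features) by their best single-split information gain
scores = []
for f in range(3):
    col = [row[f] for row in X]
    scores.append(max(info_gain(col, y, t) for t in col))
print(scores)  # -> [1.0, 1.0, 0.0]: frequency 2 is uninformative
```

Frequencies with near-zero gain could then be dropped from the sweep, which is the cost reduction the paper pursues with its full set of selection techniques and classifiers.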
Preserved conceptual implicit memory for pictures in patients with Alzheimer’s disease
Deason, Rebecca G.; Hussey, Erin P.; Flannery, Sean; Ally, Brandon A.
2015-01-01
The current study examined different aspects of conceptual implicit memory in patients with mild Alzheimer’s disease (AD). Specifically, we were interested in whether priming of distinctive conceptual features versus general semantic information related to pictures and words would differ for the mild AD patients and healthy older adults. In this study, 14 healthy older adults and 15 patients with mild AD studied both pictures and words followed by an implicit test section, where they were asked about distinctive conceptual or general semantic information related to the items they had previously studied (or novel items). Healthy older adults and patients with mild AD showed both conceptual priming and the picture superiority effect, but the AD patients only showed these effects for the questions focused on the distinctive conceptual information. We found that patients with mild AD showed intact conceptual picture priming in a task that required generating a response (answer) from a cue (question) for cues that focused on distinctive conceptual information. This experiment has helped improve our understanding of both the picture superiority effect and conceptual implicit memory in patients with mild AD in that these findings support the notion that conceptual implicit memory might potentially help to drive familiarity-based recognition in the face of impaired recollection in patients with mild AD. PMID:26291521
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
Feature saliency and feedback information interactively impact visual category learning
Hammer, Rubi; Sloutsky, Vladimir; Grill-Spector, Kalanit
2015-01-01
Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to an object's features most relevant for categorization, while ‘filtering out’ irrelevant features. When features relevant for categorization are not salient, VCL relies also on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks), and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative participants were able to ultimately attain the same performance as during the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency mid-information feedback task. We suggest that such low-saliency mid-information learning scenarios are characterized by a ‘cognitive loop paradox’ where two interdependent learning processes have to take place simultaneously. PMID:25745404
Wallar, Lauren E; Sargeant, Jan M; McEwen, Scott A; Mercer, Nicola J; Papadopoulos, Andrew
Environmental public health practitioners rely on information technology (IT) to maintain and improve environmental health. However, current systems have limited capacity, and a better understanding of the importance of IT features is needed to enhance data and information capacity. The objectives were to (1) rank IT features according to the percentage of respondents who rated them as essential to an information management system, and (2) quantify the relative importance of a subset of these features using best-worst scaling. IT features were initially identified from a previously published systematic review of software evaluation criteria and a list of software options from a private corporation specializing in inspection software. Duplicates and features unrelated to environmental public health were removed, and the condensed list was refined by a working group of environmental public health management to a final list of 57 IT features. The essentialness of features was electronically rated by environmental public health managers; features rated as essential by 50% to 80% of respondents (n = 26) were subsequently evaluated using best-worst scaling. The setting was Ontario, Canada; participants were environmental public health professionals in local public health; and the main outcome measures were importance scores of IT features. The majority of IT features (47/57) were considered essential to an information management system by at least half of the respondents (n = 52). The highest-rated features were delivery to printer, software encryption capability, and software maintenance services. Of the 26 features evaluated in the best-worst scaling exercise, the most important were orientation to all practice areas, off-line capability, and the ability to view past inspection reports and results. The development of a single, unified environmental public health information management system that fulfills the reporting and functionality needs of system users is recommended. This system should be implemented by all public health units to support data and information capacity in local environmental public health. This study can be used to guide vendor evaluation, negotiation, and selection in local environmental public health, and provides an example of academia-practice partnerships and of the use of best-worst scaling in public health research.
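The best-worst scaling analysis can be sketched with the simplest counting estimator, in which each item's score is its number of "best" picks minus "worst" picks, normalized by how often it was shown. The feature names and responses below are hypothetical, and this counting score is a common textbook approximation rather than the study's exact estimation procedure:

```python
from collections import Counter

def bws_scores(tasks):
    """Counting-based best-worst scaling: score = (#best - #worst) / #appearances."""
    best = Counter(t["best"] for t in tasks)
    worst = Counter(t["worst"] for t in tasks)
    shown = Counter(item for t in tasks for item in t["shown"])
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Hypothetical choice tasks: each shows a subset of IT features and records
# which one the respondent picked as most and least important.
tasks = [
    {"shown": ["off-line capability", "encryption", "printing"],
     "best": "off-line capability", "worst": "printing"},
    {"shown": ["off-line capability", "past reports", "printing"],
     "best": "past reports", "worst": "printing"},
]
for feature, score in sorted(bws_scores(tasks).items(), key=lambda kv: -kv[1]):
    print(feature, round(score, 2))
```

Scores near +1 mark consistently "best" features and scores near -1 consistently "worst" ones, which is the ranking the study reports for its 26 features.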
On the use of feature selection to improve the detection of sea oil spills in SAR images
NASA Astrophysics Data System (ADS)
Mera, David; Bolon-Canedo, Veronica; Cotos, J. M.; Alonso-Betanzos, Amparo
2017-03-01
Fast and effective oil spill detection systems are crucial to ensure a proper response to environmental emergencies caused by hydrocarbon pollution on the ocean's surface. Typically, these systems uncover not only oil spills, but also a high number of look-alikes. The feature extraction is a critical and computationally intensive phase where each detected dark spot is independently examined. Traditionally, detection systems use an arbitrary set of features to discriminate between oil spills and look-alike phenomena. However, Feature Selection (FS) methods based on Machine Learning (ML) have proved to be very useful in real domains for enhancing the generalization capabilities of the classifiers, while discarding the existing irrelevant features. In this work, we present a generic and systematic approach, based on FS methods, for choosing a concise and relevant set of features to improve oil spill detection systems. We have compared five FS methods: Correlation-based feature selection (CFS), Consistency-based filter, Information Gain, ReliefF and Recursive Feature Elimination for Support Vector Machine (SVM-RFE). They were applied on a 141-input vector composed of features from a collection of outstanding studies. Selected features were validated via a Support Vector Machine (SVM) classifier and the results were compared with previous works. Test experiments revealed that the classifier trained with the 6-input feature vector proposed by SVM-RFE achieved the best accuracy and Cohen's kappa coefficient (87.1% and 74.06%, respectively). This is a smaller feature combination with similar or even better classification accuracy than previous works. The presented finding makes it possible to speed up the feature extraction phase without reducing classifier accuracy. Experiments also confirmed the significance of the geometrical features, since 75.0% of the different features selected by the applied FS methods, as well as 66.67% of the proposed 6-input feature vector, belong to this category.
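Recursive feature elimination of the kind used by SVM-RFE can be sketched as follows: train a linear model, drop the feature with the smallest-magnitude weight, and repeat. To keep the sketch dependency-free, a simple perceptron stands in for the linear SVM whose weights rank the features; the data and the stand-in model are both illustrative, not the paper's setup:

```python
def train_perceptron(X, y, features, epochs=20):
    """Fit a perceptron on the selected feature columns; return its weights."""
    w = [0.0] * len(features)
    b = 0.0
    for _ in range(epochs):
        for row, label in zip(X, y):
            x = [row[f] for f in features]
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    return w

def rfe(X, y, n_keep):
    """Recursive feature elimination: drop the smallest-|weight| feature each round."""
    features = list(range(len(X[0])))
    while len(features) > n_keep:
        w = train_perceptron(X, y, features)
        drop = min(range(len(features)), key=lambda i: abs(w[i]))
        features.pop(drop)
    return features

# Toy data: feature 0 separates the classes; features 1-2 are noise-like.
X = [[1.0, 0.3, 0.2], [0.9, 0.1, 0.9], [0.1, 0.2, 0.8], [0.0, 0.4, 0.1]]
y = [1, 1, 0, 0]
print(rfe(X, y, 1))  # -> [0]
```

In the paper's setting the 141 dark-spot features play the role of the columns here, and the elimination loop shrinks them to the 6-feature vector that matched or beat the full set.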
Office of university affairs management information system: Users guide and documentation
NASA Technical Reports Server (NTRS)
Distin, J.; Goodwin, D.; Greene, W. A.
1977-01-01
Data on the NASA-University relationship are reported that encompass research in over 600 schools through several thousand grants and contracts. This user-driven system is capable of producing a variety of cyclical and query-type reports describing the total NASA-University profile. The capabilities, designed as part of the system, require a minimum of user maintenance in order to ensure system efficiency and data validity to meet the recurrent Statutory and Executive Branch information requirements as well as ad hoc inquiries from NASA general management, Congress, other Federal agencies, private sector organizations, universities and individuals. The data base contains information on each university, the individual projects and the financial details, current and historic, on all contracts and grants. Complete details are given on the system from its unique design features to the actual steps required for daily operation.
Discriminative analysis of lip motion features for speaker identification and speech-reading.
Cetingül, H Ertan; Yemez, Yücel; Erzin, Engin; Tekalp, A Murat
2006-10-01
There have been several studies that jointly use audio, lip intensity, and lip geometry information for speaker identification and speech-reading applications. This paper proposes using explicit lip motion information, instead of or in addition to lip intensity and/or geometry information, for speaker identification and speech-reading within a unified feature selection and discrimination analysis framework, and addresses two important issues: 1) Is using explicit lip motion information useful, and, 2) if so, what are the best lip motion features for these two applications? The best lip motion features for speaker identification are considered to be those that result in the highest discrimination of individual speakers in a population, whereas for speech-reading, the best features are those providing the highest phoneme/word/phrase recognition rate. Several lip motion feature candidates have been considered, including dense motion features within a bounding box about the lip, lip contour motion features, and combinations of these with lip shape features. Furthermore, a novel two-stage, spatial, and temporal discrimination analysis is introduced to select the best lip motion features for speaker identification and speech-reading applications. Experimental results using a hidden-Markov-model-based recognition system indicate that using explicit lip motion information provides additional performance gains in both applications, and lip motion features prove more valuable in the case of the speech-reading application.
Fine-grained, local maps and coarse, global representations support human spatial working memory.
Katshu, Mohammad Zia Ul Haq; d'Avossa, Giovanni
2014-01-01
While sensory processes are tuned to particular features, such as an object's specific location, color or orientation, visual working memory (vWM) is assumed to store information using representations that generalize over a feature dimension. Additionally, current vWM models presume that different features or objects are stored independently. On the other hand, configurational effects, when observed, are supposed to mainly reflect encoding strategies. We show that the location of the target, relative to the display center and boundaries, and overall memory load influenced recall precision, indicating that, like sensory processes, capacity-limited vWM resources are spatially tuned. When recalling one of three memory items the target distance from the display center was overestimated, similar to the error when only one item was memorized, but its distance from the memory items' average position was underestimated, showing that not only individual memory items' positions, but also the global configuration of the memory array may be stored. Finally, presenting the non-target items at recall, consequently providing landmarks and configurational information, improved precision and accuracy of target recall. Similarly, when the non-target items were translated at recall, relative to their position in the initial display, a parallel displacement of the recalled target was observed. These findings suggest that fine-grained spatial information in vWM is represented in local maps whose resolution varies with distance from landmarks, such as the display center, while coarse representations are used to store the memory array configuration. Both these representations are updated at the time of recall.
Feature Selection for Chemical Sensor Arrays Using Mutual Information
Wang, X. Rosalind; Lizier, Joseph T.; Nowotny, Thomas; Berna, Amalia Z.; Prokopenko, Mikhail; Trowell, Stephen C.
2014-01-01
We address the problem of feature selection for classifying a diverse set of chemicals using an array of metal oxide sensors. Our aim is to evaluate a filter approach to feature selection with reference to previous work, which used a wrapper approach on the same data set, and established best features and upper bounds on classification performance. We selected feature sets that exhibit the maximal mutual information with the identity of the chemicals. The selected features closely match those found to perform well in the previous study using a wrapper approach to conduct an exhaustive search of all permitted feature combinations. By comparing the classification performance of support vector machines (using features selected by mutual information) with the performance observed in the previous study, we found that while our approach does not always give the maximum possible classification performance, it always selects features that achieve classification performance approaching the optimum obtained by exhaustive search. We performed further classification using the selected feature set with some common classifiers and found that, for the selected features, Bayesian Networks gave the best performance. Finally, we compared the observed classification performances with the performance of classifiers using randomly selected features. We found that the selected features consistently outperformed randomly selected features for all tested classifiers. The mutual information filter approach is therefore a computationally efficient method for selecting near optimal features for chemical sensor arrays. PMID:24595058
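The filter criterion described above — ranking features by their mutual information with the chemical identity — can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code; it assumes the sensor features have already been discretized, and the `select_top_k` helper name is invented for the example:

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """Empirical mutual information I(X; Y) between one discretized
    feature column and the class labels, in bits."""
    n = len(labels)
    px = Counter(feature)
    py = Counter(labels)
    pxy = Counter(zip(feature, labels))
    mi = 0.0
    for (x, y), count in pxy.items():
        p_xy = count / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

def select_top_k(feature_columns, labels, k):
    """Filter-style selection: rank columns by MI with the chemical
    identity and keep the k highest-scoring ones."""
    scores = [(mutual_information(col, labels), i)
              for i, col in enumerate(feature_columns)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]
```

On balanced binary labels, a perfectly class-aligned column scores 1 bit while a constant column scores 0, so the former is always retained first; this is the computational shortcut over a wrapper's exhaustive search.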
Kitamura, Takayuki; Hoshimoto, Hiroyuki; Yamada, Yoshitsugu
2009-10-01
Computerized anesthesia-recording systems are expensive, and their introduction takes time and considerable effort. Generally speaking, the efficacy of computerized anesthesia-recording systems in anesthetic management is seen mainly in their ability to automatically transfer data from the monitors to the anesthetic records, and so tends to be underestimated. However, once computerized anesthesia-recording systems are integrated into the medical information network, several features that definitely contribute to improving the quality of anesthetic management can be developed; for example, preventing misidentification of patients, preventing mistakes related to blood transfusion, and protecting patients' personal information. Here we describe our experience with the introduction of computerized anesthesia-recording systems and the construction of a comprehensive medical information network for patients undergoing surgery in The University of Tokyo Hospital. We also discuss the possible efficacy of the comprehensive medical information network for patients during surgery under anesthetic management.
Branching dynamics of viral information spreading.
Iribarren, José Luis; Moro, Esteban
2011-10-01
Despite its importance for rumors or innovations propagation, peer-to-peer collaboration, social networking, or marketing, the dynamics of information spreading is not well understood. Since the diffusion depends on the heterogeneous patterns of human behavior and is driven by the participants' decisions, its propagation dynamics shows surprising properties not explained by traditional epidemic or contagion models. Here we present a detailed analysis of our study of real viral marketing campaigns where tracking the propagation of a controlled message allowed us to analyze the structure and dynamics of a diffusion graph involving over 31,000 individuals. We found that information spreading displays non-Markovian branching dynamics that can be modeled by a two-step Bellman-Harris branching process that generalizes the static models known in the literature and incorporates the high variability of human behavior. It accurately explains all the features of information propagation under the "tipping point" and can be used for prediction and management of viral information spreading processes.
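The Bellman-Harris view — each recipient forwards the message to a random number of others after a random, heavy-tailed delay — can be simulated in a few lines. The sketch below is a simplified single-type version: the Poisson offspring law, the log-normal delay, and all parameter values are illustrative assumptions, not the two-step model fitted in the study:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for a Poisson-distributed offspring count."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def bellman_harris(r0, t_max, seed=42):
    """Simulate message spread: each recipient forwards the message to a
    Poisson(r0) number of new people after a heavy-tailed (log-normal)
    response delay; return the sorted reception times up to t_max."""
    rng = random.Random(seed)
    pending = [0.0]          # the seed individual receives the message at t = 0
    reception_times = []
    while pending:
        t = pending.pop()
        if t > t_max:
            continue
        reception_times.append(t)
        for _ in range(poisson(r0, rng)):
            pending.append(t + rng.lognormvariate(0.0, 1.0))
    return sorted(reception_times)
```

With r0 < 1 the cascade is below the tipping point and dies out at a finite size, which is the subcritical regime the abstract refers to.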
Deterministic object tracking using Gaussian ringlet and directional edge features
NASA Astrophysics Data System (ADS)
Krieger, Evan W.; Sidike, Paheding; Aspiras, Theus; Asari, Vijayan K.
2017-10-01
Challenges currently existing for intensity-based histogram feature tracking methods in wide area motion imagery (WAMI) data include object structural information distortions, background variations, and object scale change. These issues are caused by different pavement or ground types and from changing the sensor or altitude. All of these challenges need to be overcome in order to have a robust object tracker, while attaining a computation time appropriate for real-time processing. To achieve this, we present a novel method, Directional Ringlet Intensity Feature Transform (DRIFT), which employs Kirsch kernel filtering for edge features and a ringlet feature mapping for rotational invariance. The method also includes an automatic scale change component to obtain accurate object boundaries and improvements for lowering computation times. We evaluated the DRIFT algorithm on two challenging WAMI datasets, namely Columbus Large Image Format (CLIF) and Large Area Image Recorder (LAIR), to evaluate its robustness and efficiency. Additional evaluations on general tracking video sequences are performed using the Visual Tracker Benchmark and Visual Object Tracking 2014 databases to demonstrate the algorithm's ability with additional challenges in long complex sequences including scale change. Experimental results show that the proposed approach yields competitive results compared to state-of-the-art object tracking methods on the testing datasets.
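Kirsch kernel filtering, the edge-feature stage named above, can be sketched in pure Python. The 8 Kirsch compass kernels are rotations of a single 3x3 mask, so they can be generated by rotating a ring of neighbour weights; this ring trick is a standard implementation choice, not necessarily the one used inside DRIFT:

```python
def kirsch_edge_map(img):
    """Directional edge strength: maximum absolute response over the
    8 Kirsch compass kernels, computed on interior pixels only."""
    # Weights of one Kirsch kernel, written as the ring of 8 neighbours
    # (clockwise, starting top-left); rotating this ring generates the
    # other 7 compass kernels. The centre weight is 0.
    base = [5, 5, 5, -3, -3, -3, -3, -3]
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx] for dy, dx in ring]
            out[y][x] = max(
                abs(sum(base[(i - rot) % 8] * neigh[i] for i in range(8)))
                for rot in range(8))
    return out
```

A flat region produces a zero response (the kernel weights sum to zero), while a vertical step edge lights up strongly in the kernel whose three 5-weights face the bright side.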
Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline
Zhang, Jie; Li, Qingyang; Caselli, Richard J.; Thompson, Paul M.; Ye, Jieping; Wang, Yalin
2017-01-01
Alzheimer's Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics research problems. It aims to improve the generalization performance by exploiting the shared features among different tasks. However, most of the existing algorithms are formulated as a supervised learning scheme, whose drawback is either insufficient feature numbers or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms. PMID:28943731
Linear and Non-linear Information Flows In Rainfall Field
NASA Astrophysics Data System (ADS)
Molini, A.; La Barbera, P.; Lanza, L. G.
The rainfall process is the result of a complex framework of non-linear dynamical interactions between the different components of the atmosphere. It preserves the complexity and the intermittent features of the generating system in space and time as well as the strong dependence of these properties on the scale of observations. The understanding and quantification of how the non-linearity of the generating process comes to influence the single rain events constitute relevant research issues in the field of hydro-meteorology, especially in those applications where a timely and effective forecasting of heavy rain events is able to reduce the risk of failure. This work focuses on the characterization of the non-linear properties of the observed rain process and on the influence of these features on hydrological models. Among the goals of such a survey are the search for regular structures of the rainfall phenomenon and the study of the information flows within the rain field. The research focuses on three basic evolution directions for the system: in time, in space and between the different scales. In fact, the information flows that force the system to evolve represent in general a connection between the different locations in space, the different instants in time and, unless the hypothesis of scale invariance is verified "a priori", the different characteristic scales. A first phase of the analysis is carried out by means of classic statistical methods, then a survey of the information flows within the field is developed by means of techniques borrowed from Information Theory, and finally an analysis of the rain signal in the time and frequency domains is performed, with particular reference to its intermittent structure. The methods adopted in this last part of the work are both the classic techniques of statistical inference and a few procedures for the detection of non-linear and non-stationary features within the process starting from measured data.
Informative Feature Selection for Object Recognition via Sparse PCA
2011-04-07
constraint on images collected from low-power camera networks instead of high-end photography is that establishing wide-baseline feature correspondence of...variable selection tool for selecting informative features in the object images captured from low-resolution camera sensor networks. Firstly, we...More examples can be found in Figure 4 later. 3. Identifying Informative Features Classical PCA is a well established tool for the analysis of high
Rotation-invariant image and video description with local binary pattern features.
Zhao, Guoying; Ahonen, Timo; Matas, Jiří; Pietikäinen, Matti
2012-04-01
In this paper, we propose a novel approach to compute rotation-invariant features from histograms of local noninvariant patterns. We apply this approach to both static and dynamic local binary pattern (LBP) descriptors. For static-texture description, we present LBP histogram Fourier (LBP-HF) features, and for dynamic-texture recognition, we present two rotation-invariant descriptors computed from the LBPs from three orthogonal planes (LBP-TOP) features in the spatiotemporal domain. LBP-HF is a novel rotation-invariant image descriptor computed from discrete Fourier transforms of LBP histograms. The approach can also be generalized to embed any uniform features into this framework, and combining supplementary information, e.g., the sign and magnitude components of the LBP, can improve the description ability. Moreover, two variants of rotation-invariant descriptors are proposed for the LBP-TOP, which is an effective descriptor for dynamic-texture recognition, as shown by its recent success in different application problems, but it is not rotation invariant. In the experiments, it is shown that the LBP-HF and its extensions outperform noninvariant and earlier versions of the rotation-invariant LBP in rotation-invariant texture classification. In experiments on two dynamic-texture databases with rotations or view variations, the proposed video features can effectively deal with rotation variations of dynamic textures (DTs). They are also robust with respect to changes in viewpoint, outperforming recent methods proposed for view-invariant recognition of DTs.
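The core idea behind LBP-HF can be illustrated in miniature: a rotation of the input image circularly shifts the bins of a uniform-LBP histogram group, and the magnitudes of the discrete Fourier transform are invariant to such circular shifts. The sketch below shows only that principle for a single 8-bin group, not the full descriptor:

```python
import cmath

def dft_magnitudes(hist):
    """|DFT| of a histogram: invariant to circular shifts of its bins,
    which is how image rotation permutes uniform-LBP bins in a group."""
    n = len(hist)
    return [abs(sum(hist[t] * cmath.exp(-2j * cmath.pi * u * t / n)
                    for t in range(n)))
            for u in range(n)]
```

Feeding the same histogram shifted by any number of bins yields (up to floating-point error) the same magnitude vector, so the derived feature no longer depends on the image's orientation.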
NASA Astrophysics Data System (ADS)
Jiang, Guo-Qian; Xie, Ping; Wang, Xiao; Chen, Meng; He, Qun
2017-11-01
The performance of traditional vibration based fault diagnosis methods greatly depends on those handcrafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor, and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning provides an alternative promising solution to feature extraction in traditional fault diagnosis due to its superior learning ability from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim to capture rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, respectively, and the learned features at each scale are then concatenated one by one to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to achieve diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features, and achieved better performance with higher accuracy and stability compared to the traditional approaches.
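The coarse-grained procedure mentioned above can be sketched as follows. Window-mean coarse-graining (as used in the multiscale-entropy literature) is assumed here for illustration; the paper's exact procedure may differ:

```python
def coarse_grain(signal, scale):
    """Scale-s series: mean over consecutive non-overlapping windows of length s."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def multiscale(signal, max_scale):
    """The raw signal re-expressed at scales 1..max_scale; each scale's series
    would then be fed to an unsupervised feature learner such as sparse filtering,
    and the per-scale features concatenated into one multiscale representation."""
    return [coarse_grain(signal, s) for s in range(1, max_scale + 1)]
```

Scale 1 reproduces the raw signal, while larger scales progressively smooth out fine temporal structure, which is what lets per-scale learners capture complementary fault patterns.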
Sandholzer, Maximilian; Rurik, Imre; Deutsch, Tobias; Frese, Thomas
2014-10-01
Undergraduate and postgraduate medical education in general practice is complex, as a wide medical spectrum needs to be covered. Modern guidelines demand that students be able to recall immense amounts of information relating to the diagnosis and management of clinical problems. With the intent of making a medical textbook digitally available on student mobile devices, the preferences of students and the potential of the idea were researched. A full survey among fourth-year medical students at the Leipzig Medical School was conducted in June 2013. Students were asked to answer a semi-structured self-designed questionnaire regarding their detailed smartphone and app usage as well as their attitude and expectations towards education- and practice-supporting apps. The response rate was 93.2% (n = 290/311). The majority (69.3%) were female students. The mean age was 24.5 years. Of the respondents, 64.2% owned a smartphone and 22.5% a tablet computer. A total of 32.4% were already using medical apps for the smartphone, mostly drug reference or disease diagnosis and management apps. Regarding their wishes, 68.7% would like or very much like to see an app on general practice. The respective means of the most important desired features on a Likert scale reaching from 1 (not important) to 5 (very important) were 4.3 for drug reference information, 4.2 for guidelines for differential diagnosis, 3.9 for medical picture libraries and 3.9 for physical examination videos. The willingness to pay for a profound app averaged 14.35 Euros (SD = 16.21). Concluding, students clearly demand an app on general practice. Such an app should ideally be smartphone-optimized. Aside from what is usually available in traditional textbooks, multimedia features such as videos on examination methods or a medical picture library are very important to students and may help to bridge the gap between text-based knowledge and practical application. Therefore, authors of medical textbooks need to be aware that the development of an app is no trivial technical translation, as students' raised expectations demand multimedia and interactive features as well as comprehensive drug information. Further research should focus on developing concepts to bring together developers and university professionals as well as experienced medical specialists to enable the development of apps that satisfy the demands of undergraduate and postgraduate educational needs.
Liu, B; Meng, X; Wu, G; Huang, Y
2012-05-17
In this article, we aimed to study whether feature precedence exists in the cognitive processing of multifeature visual information in the human brain. In our experiment, we focused on two important visual features: color and shape. In order to avoid the presence of semantic constraints between them and the resulting impact, pure color and simple geometric shape were chosen as the color feature and shape feature of the visual stimulus, respectively. We adopted an "old/new" paradigm to study the cognitive processing of the color feature, the shape feature, and their combination. The experiment consisted of three tasks: a Color task, a Shape task, and a Color-Shape task. The results showed that a feature-based pattern is activated in the human brain when processing multifeature visual information without semantic association between features. Furthermore, the shape feature was processed earlier than the color feature, and the cognitive processing of the color feature was more difficult than that of the shape feature. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
Application of 6D Building Information Model (6D BIM) for Business-storage Building in Slovenia
NASA Astrophysics Data System (ADS)
Pučko, Zoran; Vincek, Dražen; Štrukelj, Andrej; Šuman, Nataša
2017-10-01
The aim of this paper is to present an application of 6D building information modelling (6D BIM) to a real business-storage building in Slovenia. First, features of building maintenance in general are described according to the current Slovenian legislation, and a general principle of BIM is given. After that, step-by-step activities for modelling 6D BIM are presented, namely from the element list for maintenance, determination of element lifetimes and service measures, and cost and time analysis, through to 6D BIM modelling. The presented 6D BIM model is designed in a unique way in which the cost analysis is performed as a 5D BIM model with data linked to BIM Construction Project Management Software (Vico Office), integrated with the 3D BIM model, whereas the time analysis, as a 4D BIM model, is carried out as non-linked data with the help of Excel (without connection to the 3D BIM model). The paper is intended to serve as a guide for building owners to prepare 6D BIM and to provide an insight into the relevant dynamic information about intervals and costs for execution of maintenance works over the whole building lifecycle.
An international standard for observation data
NASA Astrophysics Data System (ADS)
Cox, Simon
2010-05-01
A generic information model for observations and related features supports data exchange both within and between different scientific and technical communities. Observations and Measurements (O&M) formalizes a neutral terminology for observation data and metadata. It was based on a model developed for medical observations, and draws on experience from geology and mineral exploration, in-situ monitoring, remote sensing, intelligence, biodiversity studies, ocean observations and climate simulations. Hundreds of current deployments of Sensor Observation Services (SOS), covering multiple disciplines, provide validation of the O&M model. A W3C Incubator group on 'Semantic Sensor Networks' is now using O&M as one of the bases for development of a formal ontology for sensor networks. O&M defines the information describing observation acts and their results, including the following key terms: observation, result, observed-property, feature-of-interest, procedure, phenomenon-time, and result-time. The model separates the (meta-)data associated with the observation procedure, the observed feature, and the observation event itself. Observation results may take various forms, including scalar quantities, categories, vectors, grids, or any data structure required to represent the value of some property of some observed feature. O&M follows the ISO/TC 211 General Feature Model, so non-geometric properties must be associated with typed feature instances. This requires formalization of information that may be trivial when working within some earth-science sub-disciplines (e.g. temperature, pressure etc. are associated with the atmosphere or ocean, and not just a location) but is critical to cross-disciplinary applications. It also allows the same structure and terminology to be used for in-situ, ex-situ and remote sensing observations, as well as for simulations.
For example: a stream level observation is an in-situ monitoring application where the feature-of-interest is a reach, the observed property is water-level, and the result is a time-series of heights; stream quality is usually determined by ex-situ observation where the feature-of-interest is a specimen that is recovered from the stream, the observed property is water-quality, and the result is a set of measures of various parameters, or an assessment derived from these; on the other hand, distribution of surface temperature of a water body is typically determined through remote-sensing, where at observation time the procedure is located distant from the feature-of-interest, and the result is an image or grid. Observations usually involve sampling of an ultimate feature-of-interest. In the environmental sciences common sampling strategies are used. Spatial sampling is classified primarily by topological dimension (point, curve, surface, volume) and is supported by standard processing and visualisation tools. Specimens are used for ex-situ processing in most disciplines. Sampling features are often part of complexes (e.g. specimens are sub-divided; specimens are retrieved from points along a transect; sections are taken across tracts), so relationships between instances must be recorded. And observational campaigns involve collections of sampling features. The sampling feature model is a core part of O&M, and application experience has shown that describing the relationships between sampling features and observations is generally critical to successful use of the model. O&M was developed through Open Geospatial Consortium (OGC) as part of the Sensor Web Enablement (SWE) initiative. Other SWE standards include SensorML, SOS, Sensor Planning Service (SPS). The OGC O&M standard (Version 1) had two parts: part 1 describes observation events, and part 2 provides a schema for sampling features.
A revised version of O&M (Version 2) is to be published in a single document as ISO 19156. O&M Version 1 included an XML encoding for data exchange, which is used as the payload for SOS responses. The new version will provide a UML model only. Since an XML encoding may be generated following a rule, such as that presented in ISO 19136 (GML 3.2), it is not included in the standard directly. O&M Version 2 thus supports multiple physical implementations and versions.
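The key O&M terms listed above can be summarized as a plain data structure. This is a hedged sketch that follows the terminology in the text; the field names here are not the normative ISO 19156 / GML property names:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Observation:
    """Key O&M terms as plain fields (illustrative, not normative)."""
    observed_property: str    # e.g. "water-level"
    feature_of_interest: str  # e.g. a stream reach, a specimen, a water body
    procedure: str            # the sensor, method, or simulation used
    phenomenon_time: str      # when the property value applied
    result_time: str          # when the result became available
    result: Any               # scalar, category, time series, grid, ...
```

The stream-level example from the text then reads naturally: the feature-of-interest is a reach, the observed property is water-level, and the result is a time series of heights; the same structure serves the ex-situ and remote-sensing examples by swapping the field values.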
An anti-disturbing real time pose estimation method and system
NASA Astrophysics Data System (ADS)
Zhou, Jian; Zhang, Xiao-hu
2011-08-01
Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, there are many algorithms which perform this task with high accuracy, but all of them suffer from feature loss. This paper investigated pose estimation when a number of the known features, or even all of them, were invisible. Firstly, known features were tracked to calculate the pose in the current and the next image. Secondly, some unknown but good features to track were automatically detected in the current and the next image. Thirdly, those unknown features which were on the rigid object and could be matched between the two images were retained. Because of the motion characteristics of the rigid object, the 3D information of those unknown features on the rigid object could be solved from the rigid object's pose at the two moments and their 2D information in the two images, except in two cases: the first was that both camera and object had no relative motion and camera parameters such as focal length, principal point, etc. had no change at the two moments; the second was that there was no shared scene or no matched feature in the two images. Finally, because those previously unknown features were now known, pose estimation could continue in the following images, in spite of the loss of the known features from the beginning, by repeating the process mentioned above. The robustness of pose estimation with different feature detection algorithms such as Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) was compared, and the impact of different relative motions between the camera and the rigid object was discussed in this paper. Graphics Processing Unit (GPU) parallel computing was also used to extract and to match hundreds of features for real-time pose estimation, which is hard to achieve on a Central Processing Unit (CPU).
Compared with other pose estimation methods, this new method can estimate the pose between camera and object when some or even all known features are lost, and has a quick response time thanks to GPU parallel computing. The method presented here can be used widely in vision-guidance techniques to strengthen their intelligence and generalization, and can also play an important role in autonomous navigation and positioning and in robotics in unknown environments. The results of simulation and experiments demonstrate that the proposed method can suppress noise effectively, extract features robustly, and achieve real-time performance. Theoretical analysis and experiment show the method is reasonable and efficient.
Combined mining: discovering informative knowledge in complex data.
Cao, Longbing; Zhang, Huaifeng; Zhao, Yanchang; Luo, Dan; Zhang, Chengqi
2011-06-01
Enterprise data mining applications often involve complex data such as multiple large heterogeneous data sources, user preferences, and business impact. In such situations, a single method or one-step mining is often limited in discovering informative knowledge. It would also be very time and space consuming, if not impossible, to join relevant large data sources for mining patterns consisting of multiple aspects of information. It is crucial to develop effective approaches for mining patterns combining necessary information from multiple relevant business lines, catering for real business settings and decision-making actions rather than just providing a single line of patterns. Recent years have seen increasing efforts on mining more informative patterns, e.g., integrating frequent pattern mining with classifications to generate frequent pattern-based classifiers. Rather than presenting a specific algorithm, this paper builds on our existing works and proposes combined mining as a general approach to mining for informative patterns combining components from either multiple data sets or multiple features or by multiple methods on demand. We summarize general frameworks, paradigms, and basic processes for multifeature combined mining, multisource combined mining, and multimethod combined mining. Novel types of combined patterns, such as incremental cluster patterns, can result from such frameworks, which cannot be directly produced by the existing methods. A set of real-world case studies has been conducted to test the frameworks, with some of them briefed in this paper. They identify combined patterns for informing government debt prevention and improving government service objectives, which show the flexibility and instantiation capability of combined mining in discovering informative knowledge in complex data.
New geomorphic data on the active Taiwan orogen: A multisource approach
NASA Technical Reports Server (NTRS)
Deffontaines, B.; Lee, J.-C.; Angelier, J.; Carvalho, J.; Rudant, J.-P.
1994-01-01
A multisource and multiscale approach to Taiwan morphotectonics combines different complementary geomorphic analyses based on a new digital elevation model (DEM), side-looking airborne radar (SLAR), and satellite (SPOT) imagery, aerial photographs, and control from independent field data. This analysis enables us not only to present an integrated geomorphic description of the Taiwan orogen but also to highlight some new geodynamic aspects. Well-known, major geological structures such as the Longitudinal Valley, Lishan, Pingtung, and the Foothills fault zones are of course clearly recognized, but numerous, previously unrecognized structures appear distributed within different regions of Taiwan. For instance, transfer fault zones within the Western Foothills and the Central Range are identified based on analyses of lineaments and general morphology. In many cases, the existence of geomorphic features identified in general images is supported by the results of geological field analyses carried out independently. In turn, the field analyses of structures and mechanisms at some sites provide a key for interpreting similar geomorphic features in other areas. Examples are the conjugate pattern of strike-slip faults within the Central Range and the oblique fold-and-thrust pattern of the Coastal Range. Furthermore, neotectonic and morphological analyses (drainage and erosional surfaces) have been combined in order to obtain a more comprehensive description and interpretation of neotectonic features in Taiwan, such as for the Longitudinal Valley Fault. Next, at a more general scale, numerical processing of digital elevation models, resulting in average topography, summit level or base level maps, allows identification of major features related to the dynamics of uplift and erosion and estimates of erosion balance. Finally, a preliminary morphotectonic sketch map of Taiwan, combining information from all the sources listed above, is presented.
Microscopic medical image classification framework via deep learning and shearlet transform.
Rezaeilouyeh, Hadi; Mollahosseini, Ali; Mahoor, Mohammad H
2016-10-01
Cancer is the second leading cause of death in the US after cardiovascular disease. Image-based computer-aided diagnosis can assist physicians to efficiently diagnose cancers in early stages. Existing computer-aided algorithms use hand-crafted features such as wavelet coefficients, co-occurrence matrix features, and, recently, histograms of shearlet coefficients for classification of cancerous tissues and cells in images. These hand-crafted features often lack generalizability since every cancerous tissue and cell has a specific texture, structure, and shape. An alternative approach is to use convolutional neural networks (CNNs) to learn the most appropriate feature abstractions directly from the data and so overcome the limitations of hand-crafted features. A framework for breast cancer detection and prostate Gleason grading using a CNN trained on images along with the magnitude and phase of shearlet coefficients is presented. In particular, we apply the shearlet transform to images and extract the magnitude and phase of the shearlet coefficients. Then we feed the shearlet features along with the original images to our CNN, which consists of multiple layers of convolution, max pooling, and fully connected layers. Our experiments show that using the magnitude and phase of shearlet coefficients as extra information for the network can improve the accuracy of detection and generalize better compared to state-of-the-art methods that rely on hand-crafted features. This study expands the application of deep neural networks into the field of medical image analysis, which is a difficult domain considering the limited medical data available for such analysis.
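The channel-stacking idea described here (raw image plus transform magnitude and phase) can be sketched in a few lines. The snippet below is a minimal stand-in that uses a naive 1-D DFT in place of the shearlet transform, which requires a dedicated library; only the magnitude/phase-as-extra-channels pattern mirrors the paper.

```python
import cmath

def dft(signal):
    # Naive discrete Fourier transform (a stand-in for the shearlet
    # transform used in the paper); returns complex coefficients.
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def magnitude_phase_channels(signal):
    coeffs = dft(signal)
    mags = [abs(c) for c in coeffs]
    phases = [cmath.phase(c) for c in coeffs]
    # Stack the raw signal with magnitude and phase as extra "channels",
    # mirroring how the framework feeds shearlet magnitude/phase
    # alongside the original images to the CNN.
    return [signal, mags, phases]
```

In the actual framework these channels would be 2-D and fed to a CNN; the sketch only shows how the extra inputs are formed.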
A Common Neural Code for Perceived and Inferred Emotion
Saxe, Rebecca
2014-01-01
Although the emotions of other people can often be perceived from overt reactions (e.g., facial or vocal expressions), they can also be inferred from situational information in the absence of observable expressions. How does the human brain make use of these diverse forms of evidence to generate a common representation of a target's emotional state? In the present research, we identify neural patterns that correspond to emotions inferred from contextual information and find that these patterns generalize across different cues from which an emotion can be attributed. Specifically, we use functional neuroimaging to measure neural responses to dynamic facial expressions with positive and negative valence and to short animations in which the valence of a character's emotion could be identified only from the situation. Using multivoxel pattern analysis, we test for regions that contain information about the target's emotional state, identifying representations specific to a single stimulus type and representations that generalize across stimulus types. In regions of medial prefrontal cortex (MPFC), a classifier trained to discriminate emotional valence for one stimulus (e.g., animated situations) could successfully discriminate valence for the remaining stimulus (e.g., facial expressions), indicating a representation of valence that abstracts away from perceptual features and generalizes across different forms of evidence. Moreover, in a subregion of MPFC, this neural representation generalized to trials involving subjectively experienced emotional events, suggesting partial overlap in neural responses to attributed and experienced emotions. These data provide a step toward understanding how the brain transforms stimulus-bound inputs into abstract representations of emotion. PMID:25429141
A common neural code for perceived and inferred emotion.
Skerry, Amy E; Saxe, Rebecca
2014-11-26
Although the emotions of other people can often be perceived from overt reactions (e.g., facial or vocal expressions), they can also be inferred from situational information in the absence of observable expressions. How does the human brain make use of these diverse forms of evidence to generate a common representation of a target's emotional state? In the present research, we identify neural patterns that correspond to emotions inferred from contextual information and find that these patterns generalize across different cues from which an emotion can be attributed. Specifically, we use functional neuroimaging to measure neural responses to dynamic facial expressions with positive and negative valence and to short animations in which the valence of a character's emotion could be identified only from the situation. Using multivoxel pattern analysis, we test for regions that contain information about the target's emotional state, identifying representations specific to a single stimulus type and representations that generalize across stimulus types. In regions of medial prefrontal cortex (MPFC), a classifier trained to discriminate emotional valence for one stimulus (e.g., animated situations) could successfully discriminate valence for the remaining stimulus (e.g., facial expressions), indicating a representation of valence that abstracts away from perceptual features and generalizes across different forms of evidence. Moreover, in a subregion of MPFC, this neural representation generalized to trials involving subjectively experienced emotional events, suggesting partial overlap in neural responses to attributed and experienced emotions. These data provide a step toward understanding how the brain transforms stimulus-bound inputs into abstract representations of emotion. Copyright © 2014 the authors 0270-6474/14/3315997-12$15.00/0.
Automated identification of diagnosis and co-morbidity in clinical records.
Cano, C; Blanco, A; Peshkin, L
2009-01-01
Automated understanding of clinical records is a challenging task involving various legal and technical difficulties. Clinical free text is inherently redundant, unstructured, and full of acronyms, abbreviations, and domain-specific language, which make it challenging to mine automatically. Much effort in the field is focused on creating specialized ontologies, lexicons, and heuristics based on expert knowledge of the domain. However, ad-hoc solutions generalize poorly across diseases or diagnoses. This paper presents a successful approach for rapid prototyping of a diagnosis classifier based on a popular computational linguistics platform. The corpus consists of several hundred full-length discharge summaries provided by Partners Healthcare. The goal is to identify a diagnosis and assign co-morbidity. Our approach is based on the rapid implementation of a logistic regression classifier using an existing toolkit: LingPipe (http://alias-i.com/lingpipe). We implement and compare three different classifiers. The baseline approach uses character 5-grams as features. The second approach uses a bag-of-words representation enriched with a small additional set of features. The third approach reduces the feature set to the most informative features according to their information content. The proposed systems achieve high performance (average F-micro 0.92) for the task. We discuss the relative merits of the three classifiers. Supplementary material with detailed results is available at http://decsai.ugr.es/~ccano/LR/supplementary_material/. We show that our methodology for rapid prototyping of a domain-unaware system is effective for building an accurate classifier for clinical records.
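The baseline character 5-gram features are straightforward to reproduce. A minimal sketch of the feature extraction (not the LingPipe implementation, just the same feature idea):

```python
from collections import Counter

def char_ngrams(text, n=5):
    # Character n-gram counts over the lowercased text -- the baseline
    # feature representation for the diagnosis classifier (n=5 here,
    # matching the abstract). Each overlapping window of n characters
    # becomes one feature.
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))
```

These counts would then feed a logistic regression classifier; the sparsity of the resulting feature space is what makes character n-grams robust to the abbreviations and misspellings typical of clinical free text.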
EO-1 analysis applicable to coastal characterization
NASA Astrophysics Data System (ADS)
Burke, Hsiao-hua K.; Misra, Bijoy; Hsu, Su May; Griffin, Michael K.; Upham, Carolyn; Farrar, Kris
2003-09-01
The EO-1 satellite is part of NASA's New Millennium Program (NMP). It carries three imaging sensors: the multi-spectral Advanced Land Imager (ALI), Hyperion, and the Atmospheric Corrector. Hyperion is a high-resolution hyperspectral imager capable of resolving 220 spectral bands (from 0.4 to 2.5 micron) at 30 m resolution; the instrument images a 7.5 km by 100 km land area per scene. Hyperion has been the only space-borne HSI data source since the launch of EO-1 in late 2000. The discussion begins with the unique capabilities of hyperspectral sensing for coastal characterization: (1) most ocean feature algorithms are semi-empirical retrievals, and HSI has all the spectral bands needed both to provide legacy with previous sensors and to explore new information; (2) coastal features are more complex than those of the deep ocean, so their coupled effects are best resolved with HSI; and (3) with contiguous spectral coverage, atmospheric compensation can be done with more accuracy and confidence, especially since atmospheric aerosol effects are most pronounced in the visible region where coastal features lie. EO-1 data acquired over Chesapeake Bay on 19 February 2002 are analyzed. In this presentation, it is first illustrated that hyperspectral data inherently provide more information for feature extraction than multispectral data, even though Hyperion has a lower SNR than ALI. Chlorophyll retrievals are also shown; the results compare favorably with data from other sources. The analysis illustrates the potential value of Hyperion (and HSI in general) data for coastal characterization. Future measurement requirements (airborne and spaceborne) are also discussed.
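Semi-empirical chlorophyll retrievals of the kind mentioned here are typically band-ratio polynomials (the OCx family): log-chlorophyll is modeled as a polynomial in the log of a blue/green reflectance ratio. The sketch below uses illustrative placeholder coefficients, not a calibrated set.

```python
import math

def band_ratio_chlorophyll(r_blue, r_green,
                           coeffs=(0.3, -2.9, 2.4, -1.0, -0.5)):
    # Semi-empirical ocean-color retrieval of the OCx family:
    # log10(chl) = sum_i a_i * x**i, with x = log10(blue/green ratio).
    # The coefficients here are hypothetical placeholders for
    # illustration; operational values come from sensor calibration.
    x = math.log10(r_blue / r_green)
    log_chl = sum(a * x ** i for i, a in enumerate(coeffs))
    return 10 ** log_chl  # chlorophyll concentration (mg/m^3)
```

With contiguous hyperspectral coverage, the band ratio can be formed from whichever channels best match a legacy sensor, which is the "legacy with previous sensors" advantage the abstract points to.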
Effects of ensemble and summary displays on interpretations of geospatial uncertainty data.
Padilla, Lace M; Ruginski, Ian T; Creem-Regehr, Sarah H
2017-01-01
Ensemble and summary displays are two widely used methods to represent visual-spatial uncertainty; however, there is disagreement about which is the most effective technique to communicate uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, it is more common in public presentations to use visualizations of summary displays, which scientists create by plotting statistical parameters of the ensemble members. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear what components of the displays lead to diverging judgments. This study aims to compare the salience of visual features - or visual elements that attract bottom-up attention - as one possible source of diverging judgments made with ensemble and summary displays in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participant judgment. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misunderstood as displaying size information. Further, salient features of ensemble displays evoke judgments that are indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context but that decisions about visualization methods should be informed by the viewer's task.
Extensions of algebraic image operators: An approach to model-based vision
NASA Technical Reports Server (NTRS)
Lerner, Bao-Ting; Morelli, Michael V.
1990-01-01
Researchers extend their previous research on a highly structured and compact algebraic representation of grey-level images, which can be viewed as fuzzy sets. Addition and multiplication are defined for the set of all grey-level images, which can then be described as polynomials of two variables. Utilizing this new algebraic structure, researchers devised an innovative, efficient edge detection scheme. An accurate method for deriving gradient component information from this edge detector is presented. Based upon this new edge detection system, researchers developed a robust method for linear feature extraction by combining the techniques of a Hough transform and a line follower. The major advantage of this feature extractor is its general, object-independent nature. Target attributes, such as line segment lengths, intersections, angles of intersection, and endpoints, are derived by the feature extraction algorithm and employed during model matching. The algebraic operators are global operations which are easily reconfigured to operate on any size or shape of region. This provides a natural platform from which to pursue dynamic scene analysis. A method for optimizing the linear feature extractor which capitalizes on the spatially reconfigurable nature of the edge detector/gradient component operator is discussed.
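The Hough-transform stage of such a linear feature extractor can be sketched as a voting accumulator over (rho, theta) line parameters. This is a generic illustration of the technique, not the authors' implementation:

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0):
    # Each edge point (x, y) votes for every (rho, theta) pair satisfying
    # rho = x*cos(theta) + y*sin(theta). Collinear points pile votes into
    # the same accumulator cell, so the best-supported line wins.
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_res)
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return max(acc, key=acc.get)  # (rho, theta-index) with the most votes
```

In the paper's pipeline a line follower would then walk the image along each detected line to recover endpoints and segment lengths for model matching.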
Guo, Hao; Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie
2017-01-01
High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%.
Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie
2017-01-01
High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%. PMID:29387141
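The minimum-spanning-tree construction at the core of the method described in the two records above is standard graph machinery. A sketch using Kruskal's algorithm, assuming edge weights have already been derived from inter-regional correlations (e.g., 1 - |r|, so that strong connections get small weights):

```python
def minimum_spanning_tree(weights):
    # Kruskal's algorithm on a symmetric weight matrix. The MST keeps the
    # strongest connections while guaranteeing a sparse, loop-free
    # backbone of exactly n-1 edges -- the property that keeps the
    # network scale tractable for subsequent analysis.
    n = len(weights)
    edges = sorted((weights[i][j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:           # adding this edge creates no cycle
            parent[ri] = rj
            tree.append((i, j, w))
    return tree
```

The fixed edge count (n-1 for n regions) is what makes MST-based networks comparable across subjects without threshold tuning.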
Jian, Wenjuan; Chen, Minyou; McFarland, Dennis J
2017-04-01
Phase-locking value (PLV) is a well-known feature in sensorimotor rhythm (SMR)-based BCI. Zero-phase PLV has not been explored because it is generally regarded as the result of volume conduction. Because spatial filters are often used to enhance the amplitude (square root of band power (BP)) feature and attenuate volume conduction, they are frequently applied as pre-processing methods when computing PLV. However, the effects of spatial filtering on PLV are ambiguous. Therefore, this article aims to explore whether zero-phase PLV is meaningful and how it is influenced by spatial filtering. Based on archival EEG data from left- and right-hand movement tasks for 32 subjects, we compared the BP and PLV features using data with and without pre-processing by a large Laplacian. Results showed that, using ear-referenced data, zero-phase PLV provided unique information independent of BP for task prediction, which was not explained by volume conduction and was significantly decreased when a large Laplacian was applied. In other words, the large Laplacian eliminated the useful information in zero-phase PLV for task prediction, suggesting that it contains effects of both amplitude and phase. Therefore, zero-phase PLV may have functional significance beyond volume conduction. The interpretation of spatial filtering may be complicated by effects of phase. Copyright © 2017 Elsevier Inc. All rights reserved.
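The PLV feature itself has a compact definition: the magnitude of the mean unit phasor of the phase difference between two channels. A minimal sketch, assuming instantaneous phases have already been extracted (e.g., via a Hilbert transform):

```python
import cmath

def phase_locking_value(phases_a, phases_b):
    # PLV = |mean over samples of exp(i * (phi_a - phi_b))|.
    # A value of 1.0 means a perfectly constant phase difference (as with
    # pure zero-lag volume conduction); values near 0 mean the phase
    # difference is uniformly scattered.
    diffs = [cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)]
    return abs(sum(diffs)) / len(diffs)
```

Note that zero-phase PLV (constant phase difference of 0) and a genuine zero-lag coupling are indistinguishable to this statistic, which is exactly the ambiguity the article investigates.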
A Query Expansion Framework in Image Retrieval Domain Based on Local and Global Analysis
Rahman, M. M.; Antani, S. K.; Thoma, G. R.
2011-01-01
We present an image retrieval framework based on automatic query expansion in a concept feature space by generalizing the vector space model of information retrieval. In this framework, images are represented by vectors of weighted concepts similar to the keyword-based representation used in text retrieval. To generate the concept vocabularies, a statistical model is built by utilizing Support Vector Machine (SVM)-based classification techniques. The images are represented as “bag of concepts” that comprise perceptually and/or semantically distinguishable color and texture patches from local image regions in a multi-dimensional feature space. To explore the correlation between the concepts and overcome the assumption of feature independence in this model, we propose query expansion techniques in the image domain from a new perspective based on both local and global analysis. For the local analysis, the correlations between the concepts based on the co-occurrence pattern, and the metrical constraints based on the neighborhood proximity between the concepts in encoded images, are analyzed by considering local feedback information. We also analyze the concept similarities in the collection as a whole in the form of a similarity thesaurus and propose an efficient query expansion based on the global analysis. The experimental results on a photographic collection of natural scenes and a biomedical database of different imaging modalities demonstrate the effectiveness of the proposed framework in terms of precision and recall. PMID:21822350
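The global-analysis expansion via a similarity thesaurus can be sketched as follows. The concept names and vectors are hypothetical, and the top-k/threshold policy is an assumption for illustration, not the paper's exact procedure:

```python
import math

def cosine(u, v):
    # Cosine similarity between two concept feature vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u)) *
           math.sqrt(sum(b * b for b in v)))
    return num / den if den else 0.0

def expand_query(query_concepts, concept_vectors, top_k=1, threshold=0.5):
    # Global analysis: for each concept in the query, consult a
    # precomputed similarity thesaurus (here, cosine over concept
    # vectors) and add the most similar concepts to the query.
    expanded = set(query_concepts)
    for q in query_concepts:
        sims = sorted(((cosine(concept_vectors[q], vec), c)
                       for c, vec in concept_vectors.items() if c != q),
                      reverse=True)
        for s, c in sims[:top_k]:
            if s >= threshold:
                expanded.add(c)
    return expanded
```

The expanded concept set is then matched against the "bag of concepts" image representations, relaxing the feature-independence assumption of the plain vector space model.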
Generalized uncertainty principle: implications for black hole complementarity
NASA Astrophysics Data System (ADS)
Chen, Pisin; Ong, Yen Chin; Yeom, Dong-han
2014-12-01
At the heart of the black hole information loss paradox and the firewall controversy lies the conflict between quantum mechanics and general relativity. Much has been said about quantum corrections to general relativity, but much less in the opposite direction. It is therefore crucial to examine possible corrections to quantum mechanics due to gravity. Indeed, the Heisenberg Uncertainty Principle is one profound feature of quantum mechanics, which nevertheless may receive corrections when gravitational effects become important. Such a generalized uncertainty principle (GUP) has been motivated not only by quite general considerations of quantum mechanics and gravity, but also by string-theoretic arguments. We examine the role of GUP in the context of black hole complementarity. We find that while complementarity can be violated by large-N rescaling if one assumes only the Heisenberg Uncertainty Principle, the application of GUP may save complementarity, but only if a certain N-dependence is also assumed. This raises two important questions beyond the scope of this work, i.e., whether GUP really has the proposed form of N-dependence, and whether black hole complementarity is indeed correct.
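A commonly quoted form of the GUP (a sketch; sign and coefficient conventions vary by author) is

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,\frac{\ell_p^{2}\,(\Delta p)^{2}}{\hbar^{2}}\right),
```

where \(\ell_p\) is the Planck length and \(\beta\) a dimensionless parameter of order unity. Minimizing the right-hand side over \(\Delta p\) implies a minimal resolvable length of order \(\sqrt{\beta}\,\ell_p\), the kind of gravitational correction to quantum mechanics that the abstract invokes.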
NASA Astrophysics Data System (ADS)
Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.
2016-06-01
Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.
Hayhoe, Mary M; Matthis, Jonathan Samir
2018-08-06
The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.
MINIS: Multipurpose Interactive NASA Information System
NASA Technical Reports Server (NTRS)
1976-01-01
The Multipurpose Interactive NASA Information System (MINIS) was developed in response to the need for a data management system capable of operating on several different minicomputer systems. The desired system had to be capable of performing the functions of a LANDSAT photo descriptive data retrieval system while remaining general in terms of other acceptable user-definable data bases. The system also had to be capable of performing data base updates and providing user-formatted output reports. The resulting MINI System provides all of these capabilities and several other features to complement the data management system. The MINI System is currently implemented on two minicomputer systems and is in the process of being installed on another minicomputer system. MINIS is operational on four different data bases.
C++, object-oriented programming, and astronomical data models
NASA Technical Reports Server (NTRS)
Farris, A.
1992-01-01
Contemporary astronomy is characterized by increasingly complex instruments and observational techniques, higher data collection rates, and large data archives, placing severe stress on software analysis systems. The object-oriented paradigm represents a significant new approach to software design and implementation that holds great promise for dealing with this increased complexity. The basic concepts of this approach will be characterized in contrast to more traditional procedure-oriented approaches. The fundamental features of object-oriented programming will be discussed from a C++ programming language perspective, using examples familiar to astronomers. This discussion will focus on objects, classes and their relevance to the data type system; the principle of information hiding; and the use of inheritance to implement generalization/specialization relationships. Drawing on the object-oriented approach, features of a new database model to support astronomical data analysis will be presented.
Online Communities: The Case of Immigrants in Greece
NASA Astrophysics Data System (ADS)
Panaretou, Ioannis; Karousos, Nikos; Kostopoulos, Ioannis; Foteinou, Georgia-Barbara; Pavlidis, Giorgos
Immigrants in Greece are an increasing population, very often threatened by poverty and social exclusion. At the same time, the Greek government has no formal policy concerning their assimilation into Greek society, and this situation generates multiple problems for both immigrants and the native population. In this work we suggest that new technology can alleviate these effects, and we present specific tools and methodologies adopted by ANCE in order to support online communities, and specifically immigrant communities, in Greece. This approach has the potential to support immigrant communities in terms of the organization of personal data, communication, and the provision of a working space for dedicated use. The Information System's operational features are also presented, along with other characteristics and state-of-the-art features, in order to propose a general direction for the design of online community mechanisms.
Natural scene logo recognition by joint boosting feature selection in salient regions
NASA Astrophysics Data System (ADS)
Fan, Wei; Sun, Jun; Naoi, Satoshi; Minagawa, Akihiro; Hotta, Yoshinobu
2011-01-01
Logos are considered valuable intellectual properties and a key component of the goodwill of a business. In this paper, we propose a natural scene logo recognition method which is segmentation-free and capable of processing images extremely rapidly while achieving high recognition rates. The classifiers for each logo are trained jointly, rather than independently. In this way, common features can be shared across multiple classes for better generalization. To deal with the large range of aspect ratios of different logos, a set of salient regions of interest (ROIs) is extracted to describe each class. We ensure that the selected ROIs are both individually informative and pairwise weakly dependent via a Class Conditional Entropy Maximization criterion. Experimental results on a large logo database demonstrate the effectiveness and efficiency of our proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, George; Wang, Le Yi; Zhang, Hongwei
2014-12-10
Stochastic approximation methods have found extensive and diversified applications. The recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey of some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.
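The classic template behind these methods is the Robbins-Monro iteration with diminishing gains. A minimal sketch on a hypothetical scalar root-finding problem (not the networked-vehicle setting of the paper):

```python
import random

def robbins_monro(noisy_grad, theta0, steps=2000, seed=0):
    # Classic stochastic approximation: theta_{n+1} = theta_n - a_n * Y_n,
    # where Y_n is an unbiased but noisy observation of the quantity whose
    # root we seek, and the gains a_n = 1/(n+1) satisfy the standard
    # conditions sum a_n = inf, sum a_n^2 < inf.
    rng = random.Random(seed)
    theta = theta0
    for n in range(steps):
        a_n = 1.0 / (n + 1)
        theta -= a_n * noisy_grad(theta, rng)
    return theta

# Hypothetical example: find the root of f(theta) = theta - 3
# from measurements corrupted by Gaussian noise.
root = robbins_monro(lambda t, rng: (t - 3) + rng.gauss(0, 0.1), theta0=0.0)
```

The diminishing gain averages out the observation noise, which is the same mechanism the surveyed algorithms rely on under switching topologies and delays.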
Informal settlement classification using point-cloud and image-based features from UAV data
NASA Astrophysics Data System (ADS)
Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.
2017-03-01
Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Furthermore, it is of interest to analyse which fundamental attributes are suitable for describing these objects in different geographic locations. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. UAV datasets from informal settlements in two different countries are compared in order to identify salient features for specific objects in heterogeneous urban environments. Findings show that the integration of 2D and 3D features leads to an overall accuracy of 91.6% and 95.2% respectively for informal settlements in Kigali, Rwanda and Maldonado, Uruguay.
NASA Astrophysics Data System (ADS)
Shi, Bibo; Grimm, Lars J.; Mazurowski, Maciej A.; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.
2017-03-01
Reducing the overdiagnosis and overtreatment associated with ductal carcinoma in situ (DCIS) requires accurate prediction of the invasive potential at cancer screening. In this work, we investigated the utility of pre-operative histologic and mammographic features to predict upstaging of DCIS. The goal was to provide an intentionally conservative baseline performance using readily available data from radiologists and pathologists and only linear models. We conducted a retrospective analysis of 99 patients with DCIS. Of those, 25 were upstaged to invasive cancer at the time of definitive surgery. Pre-operative factors, including both the histologic features extracted from stereotactic core needle biopsy (SCNB) reports and the mammographic features annotated by an expert breast radiologist, were investigated with statistical analysis. Furthermore, we built classification models based on those features in an attempt to predict the presence of an occult invasive component in DCIS, with generalization performance assessed by receiver operating characteristic (ROC) curve analysis. Histologic features, including nuclear grade and DCIS subtype, did not show statistically significant differences between cases with pure DCIS and cases with DCIS plus invasive disease. However, three mammographic features, i.e., the major axis length of the DCIS lesion, the BI-RADS level of suspicion, and the radiologist's assessment, did achieve statistical significance. Using those three statistically significant features as input, a linear discriminant model was able to distinguish patients with DCIS plus invasive disease from those with pure DCIS, with an AUC-ROC equal to 0.62. Overall, mammograms used for breast screening contain useful information that can be perceived by radiologists and help predict occult invasive components in DCIS.
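The AUC-ROC figure of merit used here has a simple rank interpretation: the probability that a randomly chosen upstaged case scores higher than a randomly chosen pure-DCIS case. A minimal, generic sketch with toy scores (not the study's data):

```python
def auc_roc(scores_pos, scores_neg):
    # Mann-Whitney form of the area under the ROC curve: the fraction of
    # positive/negative pairs where the positive case scores higher,
    # counting ties as half. An AUC of 0.5 is chance; 1.0 is perfect
    # separation. The paper's linear discriminant reached 0.62.
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

For the study, `scores_pos` would be the discriminant outputs for the 25 upstaged cases and `scores_neg` those for the 74 pure-DCIS cases.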
ERIC Educational Resources Information Center
Cobos, Pedro L.; Gutiérrez-Cobo, María J.; Morís, Joaquín; Luque, David
2017-01-01
In our study, we tested the hypothesis that feature-based and rule-based generalization involve different types of processes that may affect each other producing different results depending on time constraints and on how generalization is measured. For this purpose, participants in our experiments learned cue-outcome relationships that followed…
Dawn Mission E/PO Use of NASA Archived Images
NASA Astrophysics Data System (ADS)
Wise, J.
2004-12-01
The Dawn Mission is a mission in time to the very origins of the solar system. We will orbit both Vesta and Ceres for extended periods of time, collecting data that we hope will answer fundamental questions about the formation of planet Earth and the solar system in general. Because of the length of this mission, our EPO plan has a unique opportunity to involve students, teachers, parents, and the general public in the anticipation and excitement of the cruise, arrival, and exploration of these asteroids. This presentation focuses on the Clickworkers activity of the Dawn EPO because of its extensive repurposing of NASA images as EPO resources. Clickworkers was designed by Bob Kanefsky at NASA Ames. Currently, it engages the public in counting and classifying craters using NASA images of Mars. The Dawn mission is developing and extending the curricular material within the existing Clickworkers activity as well as adding images of Eros and, eventually, Vesta and Ceres. Our plan is to use the Clickworkers activity and accompanying curricular material to inform and educate the general public in preparation for the first images from Vesta and then Ceres: for example, what can be learned from counting and classifying craters. We are also informing people about the scientific process by using images from several of NASA's missions to demonstrate the accumulation of facts and information that constitutes the process of science. We will present and discuss our difficulties: (1) preparing appropriate information about cratering for a general audience; scientists have developed an understanding of crater counting, classification, and analysis over years of study and research, so how do we scaffold enough information to make the activity meaningful and a learning experience for our clients? (2) communicating key concepts in terms that are accessible to space science neophytes; the scaffolding may be correct, but not in terms that the general public can relate to, and it is important that people do not go away from this activity with misconceptions; (3) deciding where the program resides, managing input from thousands of participants, and keeping the activity meaningful for them (the Eros pictures, for example, had not been "resized" to square the pixels); (4) data access, whose needs are very simple but proved to be difficult: the Clickworkers software only needs a URL for each image to be accessed, which sounds easy in principle, but getting the URLs and the images into the correct format has taken over a year. And our expected outcomes: (1) we hope to demonstrate that people using Clickworkers gain an appreciation for what surface features can tell us about planetary objects; (2) we hope to show that people increase their anticipation for Dawn's arrival at Vesta; (3) we hope to increase people's expectation of Dawn's arrival by informing them of theories that will be tested by studying these asteroids' surface features; and (4) we hope to improve people's understanding of how scientific findings build to produce theories and "scientific fact."
Heath, Richard C.; Conover, Clyde Stuart
1981-01-01
This first edition is a ready reference source of information on various facts and features about water in Florida. It is aimed primarily at helping busy politicians, writers, agency officials, water managers, planners, consultants, educators, hydrologists, engineers, scientists, and the general public answer questions that arise on comparative and statistical aspects of the hydrology of Florida. It contains statistical comparative data, much of which was especially prepared for the almanac, a glossary of technical terms, tabular material, and conversion factors. Also included is a selective bibliography of 174 reports on water in Florida. (USGS)
Music Structure Analysis from Acoustic Signals
NASA Astrophysics Data System (ADS)
Dannenberg, Roger B.; Goto, Masataka
Music is full of structure, including sections, sequences of distinct musical textures, and the repetition of phrases or entire sections. The analysis of music audio relies upon feature vectors that convey information about music texture or pitch content. Texture generally refers to the average spectral shape and statistical fluctuation, often reflecting the set of sounding instruments, e.g., strings, vocal, or drums. Pitch content reflects melody and harmony, which is often independent of texture. Structure is found in several ways. Segment boundaries can be detected by observing marked changes in locally averaged texture.
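The boundary-detection idea above (marked changes in locally averaged texture) can be sketched on synthetic feature vectors; the window size, feature dimension, and the two artificial "textures" below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic texture feature vectors: two sections with different means,
# standing in for spectral-shape features of real audio frames.
section_a = rng.normal(0.0, 0.3, size=(50, 4))
section_b = rng.normal(1.0, 0.3, size=(50, 4))
features = np.vstack([section_a, section_b])

def boundary_scores(feats, w=10):
    """Distance between the mean texture of the windows before and
    after each frame; peaks mark candidate segment boundaries."""
    scores = np.zeros(len(feats))
    for t in range(w, len(feats) - w):
        left = feats[t - w:t].mean(axis=0)
        right = feats[t:t + w].mean(axis=0)
        scores[t] = np.linalg.norm(right - left)
    return scores

scores = boundary_scores(features)
print("detected boundary near frame", int(np.argmax(scores)))
```

The peak lands near frame 50, where the synthetic texture changes; real systems add peak-picking thresholds and pitch-content features for repetition detection.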
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatta, Yoshitaka; Xiao, Bo-Wen; Yuan, Feng
We present a full evaluation of the deeply virtual Compton scattering cross section in the dipole framework in the small-x region. The result features the cosφ and cos2φ azimuthal angular correlations, which have been missing in previous studies based on the dipole model. In particular, the cos2φ term is generated by the elliptic gluon Wigner distribution, whose measurement at the planned electron-ion collider would provide important information about gluon tomography at small x. We also show consistency with the standard collinear factorization approach based on the quark and gluon generalized parton distributions.
Gain-Compensating Circuit For NDE and Ultrasonics
NASA Technical Reports Server (NTRS)
Kushnick, Peter W.
1987-01-01
High-frequency gain-compensating circuit designed for general use in nondestructive evaluation and ultrasonic measurements. Controls gain of ultrasonic receiver as function of time to aid in measuring attenuation of samples with high losses; for example, human skin and graphite/epoxy composites. Features high signal-to-noise ratio, large signal bandwidth and large dynamic range. Control bandwidth of 5 MHz ensures accuracy of control signal. Currently being used for retrieval of more information from ultrasonic signals sent through composite materials that have high losses, and to measure skin-burn depth in humans.
A survey of Applied Psychological Services' models of the human operator
NASA Technical Reports Server (NTRS)
Siegel, A. I.; Wolf, J. J.
1979-01-01
A historical perspective is presented in terms of the major features and status of two families of computer simulation models in which the human operator plays the primary role. Both task-oriented and message-oriented models are included. Two other recent efforts, which deal with visual information processing, are also summarized; they involve not whole-model development but a family of subroutines customized to add human aspects to existing models. A global diagram of the generalized model development/validation process is presented and related to 15 criteria for model evaluation.
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
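The abstract's claim that the intrinsic dimensionality of the discriminant feature space is bounded by the number of classes can be checked numerically: the between-class scatter matrix has rank at most C - 1. A small sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Three classes in a 10-dimensional raw-measurement space (synthetic).
C, d, n = 3, 10, 60
X = rng.normal(size=(n, d))
y = rng.integers(0, C, size=n)

# Between-class scatter S_b = sum_c n_c (mu_c - mu)(mu_c - mu)^T.
# Because the weighted class-mean deviations sum to zero, its rank is
# at most C - 1, which bounds the dimensionality of any discriminant
# feature space (linear LDA or a GerDA-style nonlinear front end).
mu = X.mean(0)
S_b = sum(len(X[y == c]) * np.outer(X[y == c].mean(0) - mu,
                                    X[y == c].mean(0) - mu)
          for c in range(C))
rank = np.linalg.matrix_rank(S_b)
print("rank of between-class scatter:", rank)  # at most C - 1 = 2
```

This is why GerDA's DNN can compress truly high-dimensional inputs to a representation no larger than C - 1 without losing linear discriminability.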
Contributions to Pursuit-Evasion Game Theory
NASA Astrophysics Data System (ADS)
Oyler, Dave Wilson
This dissertation studies adversarial conflicts among a group of agents moving in the plane, possibly among obstacles, where some agents are pursuers and others are evaders. The goal of the pursuers is to capture the evaders, where capture requires a pursuer to be either co-located with an evader, or in close proximity. The goal of the evaders is to avoid capture. These scenarios, where different groups compete to accomplish conflicting goals, are referred to as pursuit-evasion games, and the agents are called players. Games featuring one pursuer and one evader are analyzed using dominance, where a point in the plane is said to be dominated by a player if that player is able to reach the point before the opposing players, regardless of the opposing players' actions. Two generalizations of the Apollonius circle are provided. One solves games with environments containing obstacles, and the other provides an alternative solution method for the Homicidal Chauffeur game. Optimal pursuit and evasion strategies based on dominance are provided. One benefit of dominance analysis is that it extends to games with many players. Two foundational games are studied; one features multiple pursuers against a single evader, and the other features a single pursuer against multiple evaders. Both are solved using dominance through a reduction to single pursuer, single evader games. Another game featuring competing teams of pursuers is introduced, where an evader cooperates with friendly pursuers to rendezvous before being captured by adversaries. Next, the assumption of complete and perfect information is relaxed, and uncertainties in player speeds, player positions, obstacle locations, and cost functions are studied. The sensitivity of the dominance boundary to perturbations in parameters is provided, and probabilistic dominance is introduced. The effect of information is studied by comparing solutions of games with perfect information to games with uncertainty. 
Finally, a pursuit law is developed that requires minimal information and highlights a limitation of dominance regions. These contributions extend pursuit-evasion game theory to a number of games that have not previously been solved, and in some cases, the solutions presented are more amenable to implementation than previous methods.
Database on unstable rock slopes in Norway
NASA Astrophysics Data System (ADS)
Oppikofer, Thierry; Nordahl, Bo; Bunkholt, Halvor; Nicolaisen, Magnus; Hermanns, Reginald L.; Böhme, Martina; Yugsi Molina, Freddy X.
2014-05-01
Several large rockslides have occurred in historic times in Norway causing many casualties. Most of these casualties are due to displacement waves triggered by a rock avalanche and affecting coast lines of entire lakes and fjords. The Geological Survey of Norway performs systematic mapping of unstable rock slopes in Norway and has detected up to now more than 230 unstable slopes with significant postglacial deformation. This systematic mapping aims to detect future rock avalanches before they occur. The registered unstable rock slopes are stored in a database on unstable rock slopes developed and maintained by the Geological Survey of Norway. The main aims of this database are (1) to serve as a national archive for unstable rock slopes in Norway; (2) to serve for data collection and storage during field mapping; (3) to provide decision-makers with hazard zones and other necessary information on unstable rock slopes for land-use planning and mitigation; and (4) to inform the public through an online map service. The database is organized hierarchically with a main point for each unstable rock slope to which several feature classes and tables are linked. This main point feature class includes several general attributes of the unstable rock slopes, such as site name, general and geological descriptions, executed works, recommendations, technical parameters (volume, lithology, mechanism and others), displacement rates, possible consequences, hazard and risk classification and so on. Feature classes and tables linked to the main feature class include the run-out area, the area affected by secondary effects, the hazard and risk classification, subareas and scenarios of an unstable rock slope, field observation points, displacement measurement stations, URL links for further documentation and references. The database on unstable rock slopes in Norway will be publicly consultable through the online map service on www.skrednett.no in 2014.
Only publicly relevant parts of the database will be shown in the online map service (e.g. processed results of displacement measurements), while more detailed data will not (e.g. raw data of displacement measurements). Factsheets with key information on unstable rock slopes can be automatically generated and downloaded for each site, a municipality, a county or the entire country. Selected data will also be downloadable free of charge. The present database on unstable rock slopes in Norway will further evolve in the coming years as the systematic mapping conducted by the Geological Survey of Norway progresses and as available techniques and tools evolve.
VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images.
Chen, Hao; Dou, Qi; Yu, Lequan; Qin, Jing; Heng, Pheng-Ann
2018-04-15
Segmentation of key brain tissues from 3D medical images is of great significance for brain disease diagnosis, progression assessment and monitoring of neurologic conditions. While manual segmentation is time-consuming, laborious, and subjective, automated segmentation is quite challenging due to the complicated anatomical environment of brain and the large variations of brain tissues. We propose a novel voxelwise residual network (VoxResNet) with a set of effective training schemes to cope with this challenging problem. The main merit of residual learning is that it can alleviate the degradation problem when training a deep network so that the performance gains achieved by increasing the network depth can be fully leveraged. With this technique, our VoxResNet is built with 25 layers, and hence can generate more representative features to deal with the large variations of brain tissues than its rivals using hand-crafted features or shallower networks. In order to effectively train such a deep network with limited training data for brain segmentation, we seamlessly integrate multi-modality and multi-level contextual information into our network, so that the complementary information of different modalities can be harnessed and features of different scales can be exploited. Furthermore, an auto-context version of the VoxResNet is proposed by combining the low-level image appearance features, implicit shape information, and high-level context together for further improving the segmentation performance. Extensive experiments on the well-known benchmark (i.e., MRBrainS) of brain segmentation from 3D magnetic resonance (MR) images corroborated the efficacy of the proposed VoxResNet. Our method achieved the first place in the challenge out of 37 competitors including several state-of-the-art brain segmentation methods. 
Our method is inherently general and can be readily applied as a powerful tool to many brain-related studies, where accurate segmentation of brain structures is critical. Copyright © 2017 Elsevier Inc. All rights reserved.
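The residual-learning merit the VoxResNet abstract invokes (identity shortcuts alleviating degradation in deep stacks) can be illustrated with a toy numpy residual block; the 25-layer depth loosely echoes the abstract, but the dimensions and the small shared weights are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def residual_block(x, W1, W2):
    """y = x + F(x): the identity shortcut lets each layer learn only a
    residual correction, easing optimization of very deep stacks."""
    h = np.maximum(0.0, x @ W1)   # ReLU nonlinearity
    return x + h @ W2

d = 16
x = rng.normal(size=(1, d))
# Small weights, reused across layers purely for brevity of the sketch.
W1 = rng.normal(scale=0.01, size=(d, d))
W2 = rng.normal(scale=0.01, size=(d, d))

y = x
for _ in range(25):               # a 25-layer residual stack
    y = residual_block(y, W1, W2)

# With near-zero residual branches the stack stays close to the identity,
# so depth does not degrade the signal the way plain deep stacks can.
print(float(np.linalg.norm(y - x) / np.linalg.norm(x)))
```

A real voxelwise network would use learned 3-D convolutions in place of the dense matrices, but the shortcut arithmetic is the same.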
Diffusion Tensor Image Registration Using Hybrid Connectivity and Tensor Features
Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang
2014-01-01
Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. PMID:24293159
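The connectivity features described above (geodesic distances from a set of anchors to every voxel) can be sketched with Dijkstra's algorithm on a toy 2-D grid; the grid size, per-voxel costs, and anchor positions are illustrative stand-ins for the tensor-derived metric:

```python
import heapq
import numpy as np

rng = np.random.default_rng(3)
H, W = 8, 8
# Per-voxel traversal cost, standing in for the local tensor-derived metric.
cost = rng.uniform(0.5, 1.5, size=(H, W))

def geodesic_from(anchor):
    """Dijkstra over the 4-connected grid; edge weight is the mean of the
    two endpoint costs, so distances respect the local metric."""
    dist = np.full((H, W), np.inf)
    dist[anchor] = 0.0
    pq = [(0.0, anchor)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W:
                nd = d + 0.5 * (cost[i, j] + cost[ni, nj])
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return dist

anchors = [(0, 0), (7, 7), (0, 7)]
# Each voxel's connectivity feature vector: its distances to all anchors.
conn_features = np.stack([geodesic_from(a) for a in anchors], axis=-1)
print(conn_features.shape)  # (8, 8, 3)
```

In the registration setting these per-voxel vectors are concatenated with local tensor statistics before landmark matching.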
ERIC Educational Resources Information Center
Nyachwaya, James M.; Gillaspie, Merry
2016-01-01
The goals of this study were (1) determine the prevalence of various features of representations in five general chemistry textbooks used in the United States, and (2) use cognitive load theory to draw implications of the various features of analyzed representations. We adapted the Graphical Analysis Protocol (GAP) (Slough et al., 2010) to look at…
ERIC Educational Resources Information Center
Kelly, Debbie M.; Bischof, Walter F.
2008-01-01
We investigated how human adults orient in enclosed virtual environments, when discrete landmark information is not available and participants have to rely on geometric and featural information on the environmental surfaces. In contrast to earlier studies, where, for women, the featural information from discrete landmarks overshadowed the encoding…
Improved disparity map analysis through the fusion of monocular image segmentations
NASA Technical Reports Server (NTRS)
Perlant, Frederic P.; Mckeown, David M.
1991-01-01
The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is exploited by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. The technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.
Features of standardized nursing terminology sets in Japan.
Sagara, Kaoru; Abe, Akinori; Ozaku, Hiromi Itoh; Kuwahara, Noriaki; Kogure, Kiyoshi
2006-01-01
This paper reports the features of, and relationships between, standardized nursing terminology sets used in Japan. First, we analyzed the common parts of five standardized nursing terminology sets: the Japan Nursing Practice Standard Master (JNPSM), which includes the names of nursing activities and is built by the Medical Information System Development Center (MEDIS-DC); the labels of the Japan Classification of Nursing Practice (JCNP), built by the term advisory committee of the Japan Academy of Nursing Science; the labels of the International Classification for Nursing Practice (ICNP) translated into Japanese; the labels, domain names, and class names of the North American Nursing Diagnosis Association (NANDA) Nursing Diagnoses 2003-2004 translated into Japanese; and the terms included in the labels of the Nursing Interventions Classification (NIC) translated into Japanese. Then we compared them with terms in a thesaurus, the Bunrui Goihyo, which contains general Japanese words and is built by the National Institute for Japanese Language. We found that: (1) the level of interchangeability between four standardized nursing terminology sets is quite low; (2) abbreviations and katakana words are frequently used to express nursing activities; and (3) general Japanese words are usually used to express the status or situation of patients.
How do general practitioners use 'safety netting' in acutely ill children?
Bertheloot, Karen; Deraeve, Pieterjan; Vermandere, Mieke; Aertgeerts, Bert; Lemiengre, Marieke; De Sutter, An; Buntinx, Frank; Verbakel, Jan Y
2016-01-01
'Safety netting' advice allows general practitioners (GPs) to cope with diagnostic uncertainty in primary care. It informs patients on 'red flag' features and when and how to seek further help. There is, however, insufficient evidence to support useful choices regarding 'safety netting' procedures. To explore how GPs apply 'safety netting' in acutely ill children in Flanders. We designed a qualitative study consisting of semi-structured interviews with 37 GPs across Flanders. Two researchers performed qualitative analysis based on grounded theory components. Although unfamiliar with the term, GPs perform 'safety netting' in every acutely ill child, guided by their intuition without the use of specific guidelines. They communicate 'red flag' features, expected time course of illness and how and when to re-consult and try to tailor their advice to the context, patient and specific illness. Overall, GPs perceive 'safety netting' as an important element of the consultation, acknowledging personal and parental limitations, such as parents' interpretation of their advice. GPs do not feel a need for any form of support in the near future. GPs apply 'safety netting' intuitively and tailor the content. Further research should focus on the impact of 'safety netting' on morbidity and how the advice is conveyed to parents.
Vowel bias in Danish word-learning: processing biases are language-specific.
Højen, Anders; Nazzi, Thierry
2016-01-01
The present study explored whether the phonological bias favoring consonants found in French-learning infants and children when learning new words (Havy & Nazzi, 2009; Nazzi, 2005) is language-general, as proposed by Nespor, Peña and Mehler (2003), or varies across languages, perhaps as a function of the phonological or lexical properties of the language in acquisition. To do so, we used the interactive word-learning task set up by Havy and Nazzi (2009), teaching Danish-learning 20-month-olds pairs of phonetically similar words that contrasted either on one of their consonants or one of their vowels, by either one or two phonological features. Danish was chosen because it has more vowels than consonants, and is characterized by extensive consonant lenition. Both phenomena could disfavor a consonant bias. Evidence of word-learning was found only for vocalic information, irrespective of whether one or two phonological features were changed. The implication of these findings is that the phonological biases found in early lexical processing are not language-general but develop during language acquisition, depending on the phonological or lexical properties of the native language. © 2015 John Wiley & Sons Ltd.
Improving Water Resources System Operation by Direct Use of Hydroclimatic Information
NASA Astrophysics Data System (ADS)
Castelletti, A.; Pianosi, F.
2011-12-01
It is generally agreed that more information translates into better decisions. For instance, the availability of inflow predictions can improve reservoir operation; soil moisture data can be exploited to increase irrigation efficiency; etc. However, beyond this general statement, many theoretical and practical questions remain open. Provided that not all information sources are equally relevant, how does their value depend on the physical features of the water system and on the purposes of the system operation? What is the minimum lead time needed for anticipatory management to be effective? How does uncertainty in the information propagate through the modelling chain, from hydroclimatic data through descriptive and decision models, and finally affect the decision? Is the data-predictions-decision paradigm truly effective, or would it be better to directly use hydroclimatic data to take optimal decisions, skipping the intermediate step of hydrological forecasting? In this work we investigate these issues by application to the management of a complex water system in Northern Vietnam, characterized by multiple, conflicting objectives including hydropower production, flood control and water supply. First, we quantify the value of hydroclimatic information as the improvement in system performance that could be attained under the (ideal) assumption of perfect knowledge of all future meteorological and hydrological input. Then, we assess and compare the relevance of different candidate information (meteorological or hydrological observations; ground or remote data; etc.) for the purpose of system operation by novel Input Variable Selection techniques. Finally, we evaluate the performance improvement made possible by the use of such information in re-designing the system operation.
Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei
2016-01-13
An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.
A Methodology to Separate and Analyze a Seismic Wide Angle Profile
NASA Astrophysics Data System (ADS)
Weinzierl, Wolfgang; Kopp, Heidrun
2010-05-01
General solutions of inverse problems can often be obtained through the introduction of probability distributions to sample the model space. We present a simple approach of defining an a priori space in a tomographic study and retrieve the velocity-depth posterior distribution by a Monte Carlo method. Utilizing a fitting routine designed for very low statistics to set up and analyze the obtained tomography results, it is possible to statistically separate the velocity-depth model space derived from the inversion of seismic refraction data. An example of a profile acquired in the Lesser Antilles subduction zone reveals the effectiveness of this approach. The resolution analysis of the structural heterogeneity includes a divergence analysis which proves to be capable of dissecting long wide-angle profiles for deep crust and upper mantle studies. The complete information of any parameterised physical system is contained in the a posteriori distribution. Methods for analyzing and displaying key properties of the a posteriori distributions of highly nonlinear inverse problems are therefore essential in the scope of any interpretation. From this study we infer several conclusions concerning the interpretation of the tomographic approach. By calculating global as well as singular misfits of velocities we are able to map different geological units along a profile. Comparing velocity distributions with the result of a tomographic inversion along the profile we can mimic the subsurface structures in their extent and composition. The possibility of gaining a priori information for seismic refraction analysis by a simple solution to an inverse problem and subsequent resolution of structural heterogeneities through a divergence analysis is a new and simple way of defining a priori space and estimating the a posteriori mean and covariance in singular and general form.
The major advantage of a Monte Carlo based approach in our case study is the obtained knowledge of velocity depth distributions. Certainly the decision of where to extract velocity information on the profile for setting up a Monte Carlo ensemble is limiting the a priori space. However, the general conclusion of analyzing the velocity field according to distinct reference distributions gives us the possibility to define the covariance according to any geological unit if we have a priori information on the velocity depth distributions. Using the wide angle data recorded across the Lesser Antilles arc, we are able to resolve a shallow feature like the backstop by a robust and simple divergence analysis. We demonstrate the effectiveness of the new methodology to extract some key features and properties from the inversion results by including information concerning the confidence level of results.
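The sampling of an a priori model space and estimation of the a posteriori mean and covariance can be sketched with a toy rejection-style Monte Carlo on a three-layer travel-time problem; the forward model, velocity bounds, and tolerance are invented for illustration and far simpler than a real refraction tomography:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy forward problem: vertical travel time through 3 layers,
# t = sum(thickness / velocity).
thickness = np.array([1.0, 2.0, 3.0])        # km, assumed known
v_true = np.array([2.0, 4.0, 6.0])           # km/s, the "true" model
t_obs = (thickness / v_true).sum()

# A priori space: uniform velocity bounds per layer.
lo = np.array([1.0, 2.0, 4.0])
hi = np.array([3.0, 6.0, 8.0])
samples = rng.uniform(lo, hi, size=(200_000, 3))
t_pred = (thickness / samples).sum(axis=1)

# Keep only models whose misfit lies within an assumed data tolerance;
# the accepted ensemble approximates the a posteriori distribution.
accepted = samples[np.abs(t_pred - t_obs) < 0.01]
post_mean = accepted.mean(axis=0)
post_cov = np.cov(accepted.T)
print(len(accepted), post_mean.round(2))
```

The spread of `post_cov` per layer mirrors the abstract's point that the ensemble, not a single best-fit model, carries the velocity-depth uncertainty.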
A Dimensionally Aligned Signal Projection for Classification of Unintended Radiated Emissions
Vann, Jason Michael; Karnowski, Thomas P.; Kerekes, Ryan; ...
2017-04-24
Characterization of unintended radiated emissions (URE) from electronic devices plays an important role in many research areas, from electromagnetic interference to nonintrusive load monitoring to information system security. URE can provide insights for applications ranging from load disaggregation and energy efficiency to condition-based maintenance of equipment based upon detected fault conditions. URE characterization often requires subject matter expertise to tailor transforms and feature extractors for the specific electrical devices of interest. We present a novel approach, named dimensionally aligned signal projection (DASP), for projecting aligned signal characteristics that are inherent to the physical implementation of many commercial electronic devices. These projections minimize the need for an intimate understanding of the underlying physical circuitry and significantly reduce the number of features required for signal classification. We present three possible DASP algorithms that leverage frequency harmonics, modulation alignments, and frequency peak spacings, along with a two-dimensional image manipulation method for statistical feature extraction. To demonstrate the ability of DASP to generate relevant features from URE, we measured the conducted URE from 14 residential electronic devices using a 2 MS/s collection system. A linear discriminant analysis classifier was trained using DASP-generated features and blind tested, resulting in a greater than 90% classification accuracy for each of the DASP algorithms and an accuracy of 99.1% when DASP features are used in combination. Furthermore, we show that a rank-reduced feature set of the combined DASP algorithms provides a 98.9% classification accuracy with only three features and outperforms a set of spectral features in terms of general classification as well as applicability across a broad number of devices.
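The harmonic-alignment idea behind one of the DASP algorithms can be sketched by stacking a spectrum into rows one fundamental wide, so that harmonics line up in the same column; the signal, fundamental frequency, and sample rate below are invented, not taken from the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 200_000                     # samples over one second (1 Hz bins)
t = np.arange(fs) / fs
f0 = 120.0                       # Hz, assumed fundamental of the emission
# Synthetic emission: decaying harmonics of f0 buried in broadband noise.
sig = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 6))
sig = sig + rng.normal(0.0, 1.0, size=fs)

spec = np.abs(np.fft.rfft(sig))  # bin spacing is 1 Hz for this setup

# Dimensionally aligned projection: reshape the spectrum into rows
# exactly one fundamental wide, so every harmonic falls in column 0.
bin_f0 = int(round(f0))
rows = len(spec) // bin_f0
aligned = spec[:rows * bin_f0].reshape(rows, bin_f0)
profile = aligned.mean(axis=0)   # column-wise mean concentrates harmonics
print(int(np.argmax(profile)))   # harmonic energy collects in column 0
```

Statistical features taken from the 2-D `aligned` image (or its column profile) then feed a standard classifier, with no device-specific circuit knowledge required.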
A Dimensionally Aligned Signal Projection for Classification of Unintended Radiated Emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vann, Jason Michael; Karnowski, Thomas P.; Kerekes, Ryan
Characterization of unintended radiated emissions (URE) from electronic devices plays an important role in many research areas from electromagnetic interference to nonintrusive load monitoring to information system security. URE can provide insights for applications ranging from load disaggregation and energy efficiency to condition-based maintenance of equipment-based upon detected fault conditions. URE characterization often requires subject matter expertise to tailor transforms and feature extractors for the specific electrical devices of interest. We present a novel approach, named dimensionally aligned signal projection (DASP), for projecting aligned signal characteristics that are inherent to the physical implementation of many commercial electronic devices. These projectionsmore » minimize the need for an intimate understanding of the underlying physical circuitry and significantly reduce the number of features required for signal classification. We present three possible DASP algorithms that leverage frequency harmonics, modulation alignments, and frequency peak spacings, along with a two-dimensional image manipulation method for statistical feature extraction. To demonstrate the ability of DASP to generate relevant features from URE, we measured the conducted URE from 14 residential electronic devices using a 2 MS/s collection system. Furthermore, a linear discriminant analysis classifier was trained using DASP generated features and was blind tested resulting in a greater than 90% classification accuracy for each of the DASP algorithms and an accuracy of 99.1% when DASP features are used in combination. Furthermore, we show that a rank reduced feature set of the combined DASP algorithms provides a 98.9% classification accuracy with only three features and outperforms a set of spectral features in terms of general classification as well as applicability across a broad number of devices.« less
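The harmonic-alignment idea behind DASP can be illustrated with a minimal sketch: slice a magnitude spectrum into bands centered on successive harmonics of a fundamental, stack them as rows of a 2-D matrix so harmonically locked energy lines up column-wise, and then collapse the matrix into a few statistical features. The helper names and the toy square-wave signal below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def harmonic_aligned_projection(spectrum, fundamental_bin, n_harmonics):
    """Stack the spectrum into a 2-D matrix whose rows are successive
    harmonic bands; energy locked to the fundamental aligns column-wise."""
    rows = []
    for k in range(1, n_harmonics + 1):
        start = k * fundamental_bin - fundamental_bin // 2
        rows.append(spectrum[start:start + fundamental_bin])
    return np.vstack(rows)

def statistical_features(matrix):
    """Collapse the aligned image into a small feature vector."""
    col_profile = matrix.mean(axis=0)  # harmonic-aligned energy profile
    return np.array([col_profile.mean(),
                     col_profile.std(),
                     matrix.std(axis=1).mean()])

# Toy signal: a 60 Hz square-ish wave sampled at 6 kHz; with 600 samples
# the bin spacing is 10 Hz, so the fundamental falls in bin 6.
fs, n = 6000, 600
t = np.arange(n) / fs
sig = np.sign(np.sin(2 * np.pi * 60 * t))
spec = np.abs(np.fft.rfft(sig))

img = harmonic_aligned_projection(spec, fundamental_bin=6, n_harmonics=8)
feats = statistical_features(img)
```

Because every row is centered on a harmonic, the aligned column (index 3 here) concentrates the periodic energy, which is the property the DASP image-manipulation step exploits before classification.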
Deep graphs—A general framework to represent and analyze heterogeneous complex systems across scales
NASA Astrophysics Data System (ADS)
Traxl, Dominik; Boers, Niklas; Kurths, Jürgen
2016-06-01
Network theory has proven to be a powerful tool in describing and analyzing systems by modelling the relations between their constituent objects. Particularly in recent years, great progress has been made by augmenting "traditional" network theory in order to account for the multiplex nature of many networks, multiple types of connections between objects, the time-evolution of networks, networks of networks and other intricacies. However, existing network representations still lack crucial features in order to serve as a general data analysis tool. These include, most importantly, an explicit association of information with possibly heterogeneous types of objects and relations, and a conclusive representation of the properties of groups of nodes as well as the interactions between such groups on different scales. In this paper, we introduce a collection of definitions resulting in a framework that, on the one hand, entails and unifies existing network representations (e.g., network of networks and multilayer networks), and on the other hand, generalizes and extends them by incorporating the above features. To implement these features, we first specify the nodes and edges of a finite graph as sets of properties (which are permitted to be arbitrary mathematical objects). Second, the mathematical concept of partition lattices is transferred to network theory in order to demonstrate how partitioning the node and edge set of a graph into supernodes and superedges allows us to aggregate, compute, and allocate information on and between arbitrary groups of nodes. The derived partition lattice of a graph, which we denote by deep graph, constitutes a concise, yet comprehensive representation that enables the expression and analysis of heterogeneous properties, relations, and interactions on all scales of a complex system in a self-contained manner.
Furthermore, to be able to utilize existing network-based methods and models, we derive different representations of multilayer networks from our framework and demonstrate the advantages of our representation. On the basis of the formal framework described here, we provide a rich, fully scalable (and self-explanatory) software package that integrates into the PyData ecosystem and offers interfaces to popular network packages, making it a powerful, general-purpose data analysis toolkit. We exemplify an application of deep graphs using a real-world dataset, comprising 16 years of satellite-derived global precipitation measurements. We deduce a deep graph representation of these measurements in order to track and investigate local formations of spatio-temporal clusters of extreme precipitation events.
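The core supernode/superedge operation can be sketched with plain pandas (the paper's software package offers this natively; the toy station/precipitation tables and column names below are illustrative assumptions, not the authors' data or API): nodes carrying heterogeneous properties are partitioned by a grouping property into supernodes, and edges between individual nodes are aggregated into superedges between those groups.

```python
import pandas as pd

# Toy node table: each node is a set of properties.
nodes = pd.DataFrame({
    "node":    [0, 1, 2, 3, 4, 5],
    "station": ["A", "A", "B", "B", "C", "C"],
    "precip":  [1.0, 3.0, 0.5, 2.5, 4.0, 0.0],
})

# Toy edge table between individual nodes.
edges = pd.DataFrame({
    "src":    [0, 1, 2, 4],
    "dst":    [2, 3, 5, 5],
    "weight": [1.0, 2.0, 0.5, 1.5],
})

# Partition the node set into supernodes, aggregating node properties.
supernodes = nodes.groupby("station").agg(
    n_nodes=("node", "size"),
    total_precip=("precip", "sum"),
)

# Map every edge endpoint to its supernode and aggregate into superedges.
station_of = nodes.set_index("node")["station"]
superedges = (
    edges.assign(src_g=edges["src"].map(station_of),
                 dst_g=edges["dst"].map(station_of))
         .groupby(["src_g", "dst_g"])["weight"].sum()
         .reset_index()
)
```

Repeating the grouping at coarser or finer properties yields the lattice of partitions the abstract describes, with information aggregated on, and allocated between, arbitrary groups of nodes.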