Sample records for multiple input feature

  1. Multiple feature extraction by using simultaneous wavelet transforms

    NASA Astrophysics Data System (ADS)

    Mazzaferri, Javier; Ledesma, Silvia; Iemmi, Claudio

    2003-07-01

    We propose here a method to optically perform multiple feature extraction by using wavelet transforms. The method is based on obtaining the optical correlation by means of a Vander Lugt architecture, where the scene and the filter are displayed on spatial light modulators (SLMs). Multiple phase filters containing the information about the features that we are interested in extracting are designed and then displayed on an SLM working in phase-mostly mode. We have designed filters to simultaneously detect edges and corners or different characteristic frequencies contained in the input scene. Simulated and experimental results are shown.
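
    As a minimal digital sketch of this idea, the correlation of a scene with several feature-selective filters can be computed in the Fourier domain; the numerical kernels below are illustrative assumptions, not the paper's optical wavelet filters.

    ```python
    # Sketch: digital analogue of multi-feature extraction by correlation filters.
    # The kernels are hypothetical stand-ins for the optically displayed phase filters.
    import numpy as np

    def correlate_fft(scene, kernel):
        """Correlate scene with kernel via the FFT (cyclic boundary conditions)."""
        K = np.fft.fft2(kernel, s=scene.shape)
        return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(K)))

    scene = np.zeros((128, 128))
    scene[40:90, 40:90] = 1.0                      # a bright square: edges and corners

    edge_kernel = np.array([[-1.0, 0.0, 1.0]])     # crude horizontal-edge detector
    corner_kernel = np.array([[ 1.0, -1.0],
                              [-1.0,  1.0]])       # crude corner/checker detector

    features = {name: correlate_fft(scene, k)
                for name, k in [("edges", edge_kernel), ("corners", corner_kernel)]}
    for name, response in features.items():
        print(name, "max response:", round(float(np.abs(response).max()), 2))
    ```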

  2. A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.

    PubMed

    Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip

    2014-11-01

    This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve more reliable and robust segmentation performance for humanoid robots. Pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filters and used as inputs to the MFMK-SVM model, providing multiple features per sample for easier implementation and more efficient computation. A new clustering method, the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion into the clustering optimization process, improving the robustness and reliability of the clustering results through iterative optimization. Furthermore, clustering validity is employed to select the training samples for learning the MFMK-SVM model. The MFMK-SVM scene segmentation method takes full advantage of the multiple features of the scene image and the power of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of the proposed method.
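
    A minimal sketch of the multiple-feature, multiple-kernel idea: one kernel per feature block, summed with equal weights into a precomputed kernel for an SVM. The feature blocks are synthetic placeholders, and the FV-IT2FCM sample-selection step is omitted.

    ```python
    # Sketch: multiple features, multiple kernels -> one precomputed SVM kernel.
    # Equal kernel weights are assumed; the paper's FV-IT2FCM training-sample
    # selection and real image features are not reproduced here.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 200
    intensity = rng.normal(size=(n, 8))      # stand-ins for the intensity,
    gradient = rng.normal(size=(n, 12))      # gradient, and C1 SMF feature blocks
    c1_smf = rng.normal(size=(n, 16))
    y = rng.integers(0, 2, size=n)

    def combined_kernel(blocks_a, blocks_b, gammas):
        """Sum one RBF kernel per feature block (equal weights)."""
        return sum(rbf_kernel(a, b, gamma=g)
                   for a, b, g in zip(blocks_a, blocks_b, gammas))

    blocks, gammas = [intensity, gradient, c1_smf], [0.1, 0.05, 0.02]
    K = combined_kernel(blocks, blocks, gammas)

    clf = SVC(kernel="precomputed").fit(K, y)
    print("training accuracy:", clf.score(K, y))
    ```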

  3. An interactive framework for acquiring vision models of 3-D objects from 2-D images.

    PubMed

    Motai, Yuichi; Kak, Avinash

    2004-02-01

    This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human-marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. As another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We show results on both polygonal objects and objects containing curved features.

  4. Achromatic Optical Correlator

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Liu, Hua-Kuang

    1989-01-01

    Signal-to-noise ratio exceeds that of monochromatic correlator. Achromatic optical correlator uses multiple-pinhole diffraction of dispersed white light to form superposed multiple correlations of input and reference images in output plane. Set of matched spatial filters made by multiple-exposure holographic process, each exposure using suitably-scaled input image and suitable angle of reference beam. Recording-aperture mask translated to appropriate horizontal position for each exposure. Noncoherent illumination suitable for applications involving recognition of color and determination of scale. When fully developed, achromatic correlators will be useful for recognition of patterns; for example, in industrial inspection and search for selected features in aerial photographs.

  5. Multiple channel programmable coincidence counter

    DOEpatents

    Arnone, Gaetano J.

    1990-01-01

    A programmable digital coincidence counter having multiple channels and featuring minimal dead time. Neutron detectors supply electrical pulses to a synchronizing circuit, which in turn inputs derandomized pulses to an adding circuit. A random access memory circuit connected as a programmable-length shift register receives and shifts the sum of the pulses, and outputs to a serializer. A counter is upcounted by the adding circuit and downcounted by the serializer, one pulse at a time. The decoded contents of the counter after each decrement are output to scalers.

  6. Electrosensory Midbrain Neurons Display Feature Invariant Responses to Natural Communication Stimuli.

    PubMed

    Aumentado-Armstrong, Tristan; Metzen, Michael G; Sproule, Michael K J; Chacron, Maurice J

    2015-10-01

    Neurons that respond selectively but in an invariant manner to a given feature of natural stimuli have been observed across species and systems. Such responses emerge in higher brain areas, thereby suggesting that they occur by integrating afferent input. However, the mechanisms by which such integration occurs are poorly understood. Here we show that midbrain electrosensory neurons can respond selectively and in an invariant manner to heterogeneity in behaviorally relevant stimulus waveforms. Such invariant responses were not seen in hindbrain electrosensory neurons providing afferent input to these midbrain neurons, suggesting that response invariance results from nonlinear integration of such input. To test this hypothesis, we built a model based on the Hodgkin-Huxley formalism that received realistic afferent input. We found that multiple combinations of parameter values could give rise to invariant responses matching those seen experimentally. Our model thus shows that there are multiple solutions towards achieving invariant responses and reveals how subthreshold membrane conductances help promote robust and invariant firing in response to heterogeneous stimulus waveforms associated with behaviorally relevant stimuli. We discuss the implications of our findings for the electrosensory and other systems.

  7. MULGRES: a computer program for stepwise multiple regression analysis

    Treesearch

    A. Jeff Martin

    1971-01-01

    MULGRES is a computer program source deck designed for multiple regression analysis employing the technique of stepwise deletion in the search for the most significant variables. The features of the program, along with inputs and outputs, are briefly described, with a note on machine compatibility.
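
    A minimal sketch of stepwise deletion for multiple regression: fit, drop the least significant variable, and repeat until every remaining variable clears a significance threshold. This illustrates the technique only, not the MULGRES program; the data and the 0.05 threshold are assumptions.

    ```python
    # Sketch: backward elimination (stepwise deletion) for multiple regression.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 5))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=100)

    keep = list(range(X.shape[1]))
    while keep:
        fit = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
        pvals = fit.pvalues[1:]             # skip the intercept
        worst = int(np.argmax(pvals))
        if pvals[worst] < 0.05:             # every remaining variable is significant
            break
        keep.pop(worst)                     # delete the least significant variable

    print("retained variables:", keep)      # typically [0, 2] for this toy data
    ```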

  8. Slow feature analysis: unsupervised learning of invariances.

    PubMed

    Wiskott, Laurenz; Sejnowski, Terrence J

    2002-04-01

    Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
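
    A minimal sketch of the core SFA computation: expand the input nonlinearly, whiten the expansion, and take the directions along which its time derivative has the smallest variance. The toy signal and the cross-term-free quadratic expansion are simplifying assumptions.

    ```python
    # Sketch of slow feature analysis: nonlinear expansion + PCA/whitening,
    # then the smallest-variance directions of the time-differenced signal.
    import numpy as np

    def sfa(x, n_slow=2):
        expanded = np.hstack([x, x ** 2])          # degree-2 monomials (no cross terms)
        expanded -= expanded.mean(axis=0)
        u, s, vt = np.linalg.svd(expanded, full_matrices=False)
        white = expanded @ vt.T / s * np.sqrt(len(expanded))   # whitened expansion
        _, _, dirs = np.linalg.svd(np.diff(white, axis=0), full_matrices=False)
        return white @ dirs[::-1][:n_slow].T       # slowest features first

    t = np.linspace(0, 4 * np.pi, 2000)
    x = np.c_[np.sin(t) + 0.1 * np.sin(40 * t),    # slow drift + fast carrier
              np.cos(t) * np.sin(40 * t)]
    print(sfa(x).shape)                            # (2000, 2) slowly varying outputs
    ```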

  9. Explosive hazard detection using MIMO forward-looking ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Shaw, Darren; Ho, K. C.; Stone, Kevin; Keller, James M.; Popescu, Mihail; Anderson, Derek T.; Luke, Robert H.; Burns, Brian

    2015-05-01

    This paper proposes a machine learning algorithm for subsurface object detection with multiple-input-multiple-output (MIMO) forward-looking ground-penetrating radar (FLGPR). By detecting hazards with FLGPR, standoff distances of up to tens of meters can be achieved, but at the cost of degraded performance due to high false alarm rates. The proposed system utilizes an anomaly detection prescreener to identify potential object locations. At each alarm location, multiple one-dimensional (1D) spectral features, two-dimensional (2D) spectral features, and log-Gabor statistic features are extracted. The ability of these features to reduce the number of false alarms and increase the probability of detection is evaluated for both co-polarizations present in the Akela MIMO array. Classification is performed by a Support Vector Machine (SVM) with lane-based cross-validation for training and testing. Class imbalance and optimized SVM kernel parameters are considered during classifier training.
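
    A minimal sketch of lane-based cross-validation with a class-weighted SVM, using scikit-learn's GroupKFold as a stand-in for lane splits; the alarm features here are synthetic, not the FLGPR spectral or log-Gabor features.

    ```python
    # Sketch: SVM with lane-based (grouped) cross-validation and class weighting.
    import numpy as np
    from sklearn.model_selection import GroupKFold, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_alarms = 300
    X = rng.normal(size=(n_alarms, 20))              # stand-in alarm features
    y = (rng.random(n_alarms) < 0.15).astype(int)    # imbalanced: few true targets
    lanes = rng.integers(0, 6, size=n_alarms)        # lane each alarm came from

    clf = SVC(kernel="rbf", C=10.0, gamma="scale", class_weight="balanced")
    scores = cross_val_score(clf, X, y, groups=lanes,
                             cv=GroupKFold(n_splits=6), scoring="roc_auc")
    print("per-lane-fold ROC AUC:", np.round(scores, 2))
    ```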

  10. Adaptive Backstepping-Based Neural Tracking Control for MIMO Nonlinear Switched Systems Subject to Input Delays.

    PubMed

    Niu, Ben; Li, Lu

    2018-06-01

    This brief proposes a new neural-network (NN)-based adaptive output tracking control scheme for a class of disturbed multiple-input multiple-output uncertain nonlinear switched systems with input delays. By combining the universal approximation ability of radial basis function NNs and adaptive backstepping recursive design with an improved multiple Lyapunov function (MLF) scheme, a novel adaptive neural output tracking controller design method is presented for the switched system. The feature of the developed design is that different coordinate transformations are adopted to overcome the conservativeness caused by adopting a common coordinate transformation for all subsystems. It is shown that all the variables of the resulting closed-loop system are semiglobally uniformly ultimately bounded under a class of switching signals in the presence of MLF and that the system output can follow the desired reference signal. To demonstrate the practicability of the obtained result, an adaptive neural output tracking controller is designed for a mass-spring-damper system.

  11. Dynamics of feature categorization.

    PubMed

    Martí, Daniel; Rinzel, John

    2013-01-01

    In visual and auditory scenes, we are able to identify shared features among sensory objects and group them according to their similarity. This grouping is preattentive and fast and is thought of as an elementary form of categorization by which objects sharing similar features are clustered in some abstract perceptual space. It is unclear what neuronal mechanisms underlie this fast categorization. Here we propose a neuromechanistic model of fast feature categorization based on the framework of continuous attractor networks. The mechanism for category formation does not rely on learning and is based on biologically plausible assumptions, for example, the existence of populations of neurons tuned to feature values, feature-specific interactions, and subthreshold-evoked responses upon the presentation of single objects. When the network is presented with a sequence of stimuli characterized by some feature, the network sums the evoked responses and provides a running estimate of the distribution of features in the input stream. If the distribution of features is structured into different components or peaks (i.e., is multimodal), recurrent excitation amplifies the response of activated neurons, and categories are singled out as emerging localized patterns of elevated neuronal activity (bumps), centered at the centroid of each cluster. The emergence of bump states through sequential, subthreshold activation and the dependence on input statistics is a novel application of attractor networks. We show that the extraction and representation of multiple categories are facilitated by the rich attractor structure of the network, which can sustain multiple stable activity patterns for a robust range of connectivity parameters compatible with cortical physiology.

  12. Extraction of texture features with a multiresolution neural network

    NASA Astrophysics Data System (ADS)

    Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.

    1992-09-01

    Texture is an important surface characteristic. Many industrial materials such as wood, textile, or paper are best characterized by their texture. Detection of defects occurring on such materials or classification for quality control and matching can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears differently depending on the spatial scale at which it is observed. A complete description of a texture thus implies an analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped into a layer of the neural network. A full resolution texture image is input at the base of the pyramid and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the neural network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.
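
    A minimal sketch of the dominant-orientation feature described above: Sobel gradient responses are combined at four angles (multiples of π/4) and the strongest response selects the per-pixel orientation. The pyramidal neural network itself is not reproduced.

    ```python
    # Sketch: per-pixel dominant orientation from Sobel responses at four angles.
    import numpy as np
    from scipy.ndimage import sobel

    image = np.random.default_rng(0).random((64, 64))   # stand-in texture image

    gx = sobel(image, axis=1)                 # horizontal gradient
    gy = sobel(image, axis=0)                 # vertical gradient
    angles = np.array([0, np.pi / 4, np.pi / 2, 3 * np.pi / 4])

    # response of an edge detector oriented at each of the four angles
    responses = np.stack([gx * np.cos(a) + gy * np.sin(a) for a in angles])
    dominant = angles[np.abs(responses).argmax(axis=0)]  # per-pixel dominant angle
    print(dominant.shape, np.unique(dominant))
    ```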

  13. Deep learning of support vector machines with class probability output networks.

    PubMed

    Kim, Sangwook; Yu, Zhibin; Kil, Rhee Man; Lee, Minho

    2015-04-01

    Deep learning methods endeavor to learn features automatically at multiple levels and allow systems to learn complex functions mapping from the input space to the output space for the given data. The ability to learn powerful features automatically is increasingly important as the volume of data and range of applications of machine learning methods continues to grow. This paper proposes a new deep architecture that uses support vector machines (SVMs) with class probability output networks (CPONs) to provide better generalization power for pattern classification problems. As a result, deep features are extracted without additional feature engineering steps, using multiple layers of the SVM classifiers with CPONs. The proposed structure closely approaches the ideal Bayes classifier as the number of layers increases. Using a simulation of classification problems, the effectiveness of the proposed method is demonstrated. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  15. Prediction of beta-turns and beta-turn types by a novel bidirectional Elman-type recurrent neural network with multiple output layers (MOLEBRNN).

    PubMed

    Kirschner, Andreas; Frishman, Dmitrij

    2008-10-01

    Prediction of beta-turns from amino acid sequences has long been recognized as an important problem in structural bioinformatics due to their frequent occurrence as well as their structural and functional significance. Because various structural features of proteins are intercorrelated, secondary structure information has often been employed as an additional input for machine learning algorithms while predicting beta-turns. Here we present a novel bidirectional Elman-type recurrent neural network with multiple output layers (MOLEBRNN) capable of predicting multiple mutually dependent structural motifs and demonstrate its efficiency in recognizing three aspects of protein structure: beta-turns, beta-turn types, and secondary structure. The advantage of our method compared to other predictors is that it does not require any external input except for sequence profiles because interdependencies between different structural features are taken into account implicitly during the learning process. In a sevenfold cross-validation experiment on a standard test dataset our method achieves a total prediction accuracy of 77.9% and a Matthews correlation coefficient of 0.45, the highest performance reported so far. It also outperforms other known methods in delineating individual turn types. We demonstrate how simultaneous prediction of multiple targets influences prediction performance on single targets. The MOLEBRNN presented here is a generic method applicable in a variety of research fields where multiple mutually dependent target classes need to be predicted. http://webclu.bio.wzw.tum.de/predator-web/.

  16. Context-based automated defect classification system using multiple morphological masks

    DOEpatents

    Gleason, Shaun S.; Hunt, Martin A.; Sari-Sarraf, Hamed

    2002-01-01

    Detection of defects during the fabrication of semiconductor wafers is largely automated, but the classification of those defects is still performed manually by technicians. This invention includes novel digital image analysis techniques that generate unique feature vector descriptions of semiconductor defects as well as classifiers that use these descriptions to automatically categorize the defects into one of a set of pre-defined classes. Feature extraction techniques based on multiple-focus images, multiple-defect mask images, and segmented semiconductor wafer images are used to create unique feature-based descriptions of the semiconductor defects. These feature-based defect descriptions are subsequently classified by a defect classifier into categories that depend on defect characteristics and defect contextual information, that is, the semiconductor process layer(s) with which the defect comes in contact. At the heart of the system is a knowledge database that stores and distributes historical semiconductor wafer and defect data to guide the feature extraction and classification processes. In summary, this invention takes as its input a set of images containing semiconductor defect information, and generates as its output a classification for the defect that describes not only the defect itself, but also the location of that defect with respect to the semiconductor process layers.

  17. Random Deep Belief Networks for Recognizing Emotions from Speech Signals.

    PubMed

    Wen, Guihua; Li, Huihui; Huang, Jubing; Li, Danyang; Xun, Eryang

    2017-01-01

    Human emotions can now be recognized from speech signals using machine learning methods; however, these methods suffer from low recognition accuracy in real applications because they lack rich representation ability. Deep belief networks (DBN) can automatically discover multiple levels of representation in speech signals. To make full use of this advantage, this paper presents an ensemble of random deep belief networks (RDBN) for speech emotion recognition. It first extracts low-level features from the input speech signal and then uses them to construct many random subspaces. Each random subspace is fed to a DBN to yield higher-level features, which serve as the input of a classifier that outputs an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN achieves better accuracy than the compared methods for speech emotion recognition.

  18. Random Deep Belief Networks for Recognizing Emotions from Speech Signals

    PubMed Central

    Li, Huihui; Huang, Jubing; Li, Danyang; Xun, Eryang

    2017-01-01

    Human emotions can now be recognized from speech signals using machine learning methods; however, these methods suffer from low recognition accuracy in real applications because they lack rich representation ability. Deep belief networks (DBN) can automatically discover multiple levels of representation in speech signals. To make full use of this advantage, this paper presents an ensemble of random deep belief networks (RDBN) for speech emotion recognition. It first extracts low-level features from the input speech signal and then uses them to construct many random subspaces. Each random subspace is fed to a DBN to yield higher-level features, which serve as the input of a classifier that outputs an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN achieves better accuracy than the compared methods for speech emotion recognition. PMID:28356908

  19. Classifying magnetic resonance image modalities with convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis

    2018-02-01

    Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)-based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre- vs. post-contrast T1, and (3) identify pre- vs. post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.
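
    A minimal sketch of a 3D CNN contrast classifier in PyTorch; the layer sizes, input shape, and global pooling are illustrative assumptions, not the architecture reported in the paper.

    ```python
    # Sketch: a small 3D CNN mapping an MR volume to one of three contrast labels.
    import torch
    import torch.nn as nn

    class ContrastClassifier(nn.Module):
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.AdaptiveAvgPool3d(1),        # global pooling handles any volume size
            )
            self.classify = nn.Linear(16, n_classes)

        def forward(self, volume):              # volume: (batch, 1, D, H, W)
            return self.classify(self.features(volume).flatten(1))

    model = ContrastClassifier()
    dummy = torch.randn(2, 1, 32, 64, 64)       # two fake image volumes
    print(model(dummy).shape)                   # torch.Size([2, 3]) class scores
    ```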

  20. Preserving information in neural transmission.

    PubMed

    Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O

    2009-05-13

    Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.

  1. Sensory noise predicts divisive reshaping of receptive fields

    PubMed Central

    Deneve, Sophie; Gutkin, Boris

    2017-01-01

    In order to respond reliably to specific features of their environment, sensory neurons need to integrate multiple incoming noisy signals. Crucially, they also need to compete for the interpretation of those signals with other neurons representing similar features. The form that this competition should take depends critically on the noise corrupting these signals. In this study we show that for the type of noise commonly observed in sensory systems, whose variance scales with the mean signal, sensory neurons should selectively divide their input signals by their predictions, suppressing ambiguous cues while amplifying others. Any change in the stimulus context alters which inputs are suppressed, leading to a deep dynamic reshaping of neural receptive fields going far beyond simple surround suppression. Paradoxically, these highly variable receptive fields go alongside and are in fact required for an invariant representation of external sensory features. In addition to offering a normative account of context-dependent changes in sensory responses, perceptual inference in the presence of signal-dependent noise accounts for ubiquitous features of sensory neurons such as divisive normalization, gain control and contrast dependent temporal dynamics. PMID:28622330

  2. Sensory noise predicts divisive reshaping of receptive fields.

    PubMed

    Chalk, Matthew; Masset, Paul; Deneve, Sophie; Gutkin, Boris

    2017-06-01

    In order to respond reliably to specific features of their environment, sensory neurons need to integrate multiple incoming noisy signals. Crucially, they also need to compete for the interpretation of those signals with other neurons representing similar features. The form that this competition should take depends critically on the noise corrupting these signals. In this study we show that for the type of noise commonly observed in sensory systems, whose variance scales with the mean signal, sensory neurons should selectively divide their input signals by their predictions, suppressing ambiguous cues while amplifying others. Any change in the stimulus context alters which inputs are suppressed, leading to a deep dynamic reshaping of neural receptive fields going far beyond simple surround suppression. Paradoxically, these highly variable receptive fields go alongside and are in fact required for an invariant representation of external sensory features. In addition to offering a normative account of context-dependent changes in sensory responses, perceptual inference in the presence of signal-dependent noise accounts for ubiquitous features of sensory neurons such as divisive normalization, gain control and contrast dependent temporal dynamics.

  3. The impact of attentional, linguistic, and visual features during object naming

    PubMed Central

    Clarke, Alasdair D. F.; Coco, Moreno I.; Keller, Frank

    2013-01-01

    Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently. PMID:24379792

  4. Evolution of Bow-Tie Architectures in Biology

    PubMed Central

    Friedlander, Tamar; Mayo, Avraham E.; Tlusty, Tsvi; Alon, Uri

    2015-01-01

    Bow-tie or hourglass structure is a common architectural feature found in many biological systems. A bow-tie in a multi-layered structure occurs when intermediate layers have much fewer components than the input and output layers. Examples include metabolism where a handful of building blocks mediate between multiple input nutrients and multiple output biomass components, and signaling networks where information from numerous receptor types passes through a small set of signaling pathways to regulate multiple output genes. Little is known, however, about how bow-tie architectures evolve. Here, we address the evolution of bow-tie architectures using simulations of multi-layered systems evolving to fulfill a given input-output goal. We find that bow-ties spontaneously evolve when the information in the evolutionary goal can be compressed. Mathematically speaking, bow-ties evolve when the rank of the input-output matrix describing the evolutionary goal is deficient. The maximal compression possible (the rank of the goal) determines the size of the narrowest part of the network—that is the bow-tie. A further requirement is that a process is active to reduce the number of links in the network, such as product-rule mutations, otherwise a non-bow-tie solution is found in the evolutionary simulations. This offers a mechanism to understand a common architectural principle of biological systems, and a way to quantitate the effective rank of the goals under which they evolved. PMID:25798588
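
    A short worked example of the rank argument: a rank-deficient input-output goal matrix factors exactly through a "waist" containing only as many nodes as its rank. The matrix sizes are arbitrary.

    ```python
    # Worked example: a rank-2 goal matrix passes exactly through a 2-node waist.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out, r = 8, 8, 2
    goal = rng.normal(size=(n_out, r)) @ rng.normal(size=(r, n_in))   # rank-2 goal

    print("rank of goal matrix:", np.linalg.matrix_rank(goal))        # 2

    # Truncated SVD gives explicit layer weights through the 2-node bow-tie waist.
    u, s, vt = np.linalg.svd(goal)
    W_in = np.diag(s[:r]) @ vt[:r]         # inputs -> 2 intermediate nodes
    W_out = u[:, :r]                       # 2 intermediate nodes -> outputs
    print("goal realized exactly:", np.allclose(W_out @ W_in, goal))  # True
    ```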

  5. Entorhinal-CA3 Dual-Input Control of Spike Timing in the Hippocampus by Theta-Gamma Coupling.

    PubMed

    Fernández-Ruiz, Antonio; Oliva, Azahara; Nagy, Gergő A; Maurer, Andrew P; Berényi, Antal; Buzsáki, György

    2017-03-08

    Theta-gamma phase coupling and spike timing within theta oscillations are prominent features of the hippocampus and are often related to navigation and memory. However, the mechanisms that give rise to these relationships are not well understood. Using high spatial resolution electrophysiology, we investigated the influence of CA3 and entorhinal inputs on the timing of CA1 neurons. The theta-phase preference and excitatory strength of the afferent CA3 and entorhinal inputs effectively timed the principal neuron activity, as well as regulated distinct CA1 interneuron populations in multiple tasks and behavioral states. Feedback potentiation of distal dendritic inhibition by CA1 place cells attenuated the excitatory entorhinal input at place field entry, coupled with feedback depression of proximal dendritic and perisomatic inhibition, allowing the CA3 input to gain control toward the exit. Thus, upstream inputs interact with local mechanisms to determine theta-phase timing of hippocampal neurons to support memory and spatial navigation. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Stability versus neuronal specialization for STDP: long-tail weight distributions solve the dilemma.

    PubMed

    Gilson, Matthieu; Fukai, Tomoki

    2011-01-01

    Spike-timing-dependent plasticity (STDP) modifies the weight (or strength) of synaptic connections between neurons and is considered to be crucial for generating network structure. It has been observed in physiology that, in addition to spike timing, the weight update also depends on the current value of the weight. The functional implications of this feature are still largely unclear. Additive STDP gives rise to strong competition among synapses, but due to the absence of weight dependence, it requires hard boundaries to secure the stability of weight dynamics. Multiplicative STDP with linear weight dependence for depression ensures stability, but it lacks sufficiently strong competition required to obtain a clear synaptic specialization. A solution to this stability-versus-function dilemma can be found with an intermediate parametrization between additive and multiplicative STDP. Here we propose a novel solution to the dilemma, named log-STDP, whose key feature is a sublinear weight dependence for depression. Due to its specific weight dependence, this new model can produce significantly broad weight distributions with no hard upper bound, similar to those recently observed in experiments. Log-STDP induces graded competition between synapses, such that synapses receiving stronger input correlations are pushed further in the tail of (very) large weights. Strong weights are functionally important to enhance the neuronal response to synchronous spike volleys. Depending on the input configuration, multiple groups of correlated synaptic inputs exhibit either winner-share-all or winner-take-all behavior. When the configuration of input correlations changes, individual synapses quickly and robustly readapt to represent the new configuration. We also demonstrate the advantages of log-STDP for generating a stable structure of strong weights in a recurrently connected network. These properties of log-STDP are compared with those of previous models. Through long-tail weight distributions, log-STDP achieves both stable dynamics for and robust competition of synapses, which are crucial for spike-based information processing.

  7. Getting the Gist of Events: Recognition of Two-Participant Actions from Brief Displays

    PubMed Central

    Hafri, Alon; Papafragou, Anna; Trueswell, John C.

    2013-01-01

    Unlike rapid scene and object recognition from brief displays, little is known about recognition of event categories and event roles from minimal visual information. In three experiments, we displayed naturalistic photographs of a wide range of two-participant event scenes for 37 ms and 73 ms followed by a mask, and found that event categories (the event gist, e.g., ‘kicking’, ‘pushing’, etc.) and event roles (i.e., Agent and Patient) can be recognized rapidly, even with various actor pairs and backgrounds. Norming ratings from a subsequent experiment revealed that certain physical features (e.g., outstretched extremities) that correlate with Agent-hood could have contributed to rapid role recognition. In a final experiment, using identical twin actors, we then varied these features in two sets of stimuli, in which Patients had Agent-like features or not. Subjects recognized the roles of event participants less accurately when Patients possessed Agent-like features, with this difference being eliminated with two-second durations. Thus, given minimal visual input, typical Agent-like physical features are used in role recognition but, with sufficient input from multiple fixations, people categorically determine the relationship between event participants. PMID:22984951

  8. A Higher-Order Neural Network Design for Improving Segmentation Performance in Medical Image Series

    NASA Astrophysics Data System (ADS)

    Selvi, Eşref; Selver, M. Alper; Güzeliş, Cüneyt; Dicle, Oǧuz

    2014-03-01

    Segmentation of anatomical structures from medical image series is an ongoing field of research. Although organs of interest are three-dimensional in nature, slice-by-slice approaches are widely used in clinical applications because of their ease of integration with the current manual segmentation scheme. To use slice-by-slice techniques effectively, adjacent-slice information, which represents the likelihood of a region being the structure of interest, plays a critical role. Recent studies focus on using the distance transform directly as a feature or to increase the feature values in the vicinity of the search area. This study presents a novel approach by constructing a higher-order neural network, the input layer of which receives features together with their multiplications with the distance transform. This allows higher-order interactions between features through the non-linearity introduced by the multiplication. The application of the proposed method to 9 CT datasets for segmentation of the liver shows higher performance than well-known higher-order classification neural networks.
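
    A minimal sketch of the input construction described above: each pixel's feature vector is augmented with its products with the distance-transform value, so the first network layer sees feature-by-distance interaction terms directly. The feature counts and data are placeholders.

    ```python
    # Sketch: augment per-pixel features with feature x distance-transform products.
    import numpy as np

    def augment_with_distance(features, distance):
        """features: (n_pixels, n_features); distance: (n_pixels,) distance transform."""
        interactions = features * distance[:, None]    # feature * distance products
        return np.hstack([features, interactions])

    rng = np.random.default_rng(0)
    feats = rng.random((1000, 4))      # e.g. intensity/texture features per pixel
    dist = rng.random(1000)            # distance-transform value per pixel
    x = augment_with_distance(feats, dist)
    print(x.shape)                     # (1000, 8): original features + interactions
    ```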

  9. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Sano, Hikomaro

    This report outlines the “Repoir” (Report information retrieval) system of Toyota Central R & D Laboratories, Inc. as an example of an in-house information retrieval system. The online system was designed to process in-house technical reports with the aid of a mainframe computer and has been in operation since 1979. Its features are multiple use of the information for technical and managerial purposes and simplicity in indexing and data input. The total number of descriptors, specially selected for the system, was minimized for ease of indexing. The report also describes the input items, processing flow and typical outputs in kanji letters.

  10. Integrative Data Analysis of Multi-Platform Cancer Data with a Multimodal Deep Learning Approach.

    PubMed

    Liang, Muxuan; Li, Zhizhong; Chen, Ting; Zeng, Jianyang

    2015-01-01

    Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for personalized cancer therapy.
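
    A minimal sketch of one contrastive-divergence (CD-1) update for a Bernoulli restricted Boltzmann machine, the building block stacked into a DBN. Biases, the multimodal fusion layer, and real multi-platform data are omitted, and all values are synthetic.

    ```python
    # Sketch: a single CD-1 weight update for a Bernoulli RBM (biases omitted).
    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    n_visible, n_hidden, lr = 20, 8, 0.05
    W = 0.01 * rng.normal(size=(n_visible, n_hidden))
    v0 = (rng.random((64, n_visible)) < 0.3).astype(float)   # batch of binary data

    # positive phase: sample hidden units given the data
    h_prob0 = sigmoid(v0 @ W)
    h0 = (rng.random(h_prob0.shape) < h_prob0).astype(float)
    # negative phase: one Gibbs step back to the visible layer and up again
    v_prob1 = sigmoid(h0 @ W.T)
    h_prob1 = sigmoid(v_prob1 @ W)
    # CD-1 update from the difference of data and reconstruction correlations
    W += lr * (v0.T @ h_prob0 - v_prob1.T @ h_prob1) / len(v0)
    print("updated weight matrix:", W.shape)
    ```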

  11. Full-order optimal compensators for flow control: the multiple inputs case

    NASA Astrophysics Data System (ADS)

    Semeraro, Onofrio; Pralits, Jan O.

    2018-03-01

    Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H_{∞} methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, by comparing it with analogous techniques. We find an excellent scalability with the number of inputs (actuators), making the method a viable way for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.

  12. 3D fluid-structure modelling and vibration analysis for fault diagnosis of Francis turbine using multiple ANN and multiple ANFIS

    NASA Astrophysics Data System (ADS)

    Saeed, R. A.; Galybin, A. N.; Popov, V.

    2013-01-01

    This paper discusses condition monitoring and fault diagnosis in a Francis turbine based on the integration of numerical modelling with several different artificial intelligence (AI) techniques. In this study, a numerical approach for fluid-structure (turbine runner) analysis is presented. The results of the numerical analysis provide frequency response function (FRF) data sets along the x-, y- and z-directions under different operating loads and different positions and sizes of faults in the structure. To extract features and reduce the dimensionality of the obtained FRF data, principal component analysis (PCA) has been applied. Subsequently, the extracted features are formulated and fed into multiple artificial neural networks (ANN) and multiple adaptive neuro-fuzzy inference systems (ANFIS) in order to identify the size and position of the damage in the runner and estimate the turbine operating conditions. The results demonstrate the effectiveness of this approach and provide satisfactory accuracy even when the input data are corrupted with a certain level of noise.
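
    A minimal sketch of the PCA-plus-neural-network stage: high-dimensional FRF-like signatures are reduced with PCA and the extracted features are fed to a neural network. Synthetic data stand in for the simulated FRFs, and a single MLP stands in for the multiple ANN/ANFIS models.

    ```python
    # Sketch: PCA feature extraction on FRF-like data followed by an MLP classifier.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    frf = rng.normal(size=(240, 500))            # 240 cases x 500 FRF samples (synthetic)
    fault_size = rng.integers(0, 3, size=240)    # three hypothetical fault-size classes

    model = make_pipeline(PCA(n_components=10),
                          MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                        random_state=0))
    model.fit(frf, fault_size)
    print("training accuracy:", round(model.score(frf, fault_size), 2))
    ```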

  13. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimations (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.

  14. Multiple transient memories in sheared suspensions: Robustness, structure, and routes to plasticity

    NASA Astrophysics Data System (ADS)

    Keim, Nathan C.; Paulsen, Joseph D.; Nagel, Sidney R.

    2013-09-01

    Multiple transient memories, originally discovered in charge-density-wave conductors, are a remarkable and initially counterintuitive example of how a system can store information about its driving. In this class of memories, a system can learn multiple driving inputs, nearly all of which are eventually forgotten despite their continual input. If sufficient noise is present, the system regains plasticity so that it can continue to learn new memories indefinitely. Recently, Keim and Nagel [Phys. Rev. Lett. 107, 010603 (2011)] showed how multiple transient memories could be generalized to a generic driven disordered system with noise, giving as an example simulations of a simple model of a sheared non-Brownian suspension. Here, we further explore simulation models of suspensions under cyclic shear, focusing on three main themes: robustness, structure, and overdriving. We show that multiple transient memories are a robust feature independent of many details of the model. The steady-state spatial distribution of the particles is sensitive to the driving algorithm; nonetheless, the memory formation is independent of such a change in particle correlations. Finally, we demonstrate that overdriving provides another means for controlling memory formation and retention.

  15. POLO2: a user's guide to multiple Probit Or LOgit analysis

    Treesearch

    Robert M. Russell; N. E. Savin; Jacqueline L. Robertson

    1981-01-01

    This guide provides instructions for the use of POLO2, a computer program for multivariate probit or logit analysis of quantal response data. As many as 3000 test subjects may be included in a single analysis. Including the constant term, up to nine explanatory variables may be used. Examples illustrating input, output, and uses of the program's special features...

  16. Poisson process stimulation of an excitable membrane cable model.

    PubMed Central

    Goldfinger, M D

    1986-01-01

    The convergence of multiple inputs within a single-neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed multiple input convergence upon an axon by applying Poisson process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods yielded in the axonal output a random but non-Poisson process. While smaller amplitude stimuli elicited a type of short-interval conditioning, larger amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse shape variability, the post-impulse conditional probability for impulse initiation in the steady-state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505
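
    A minimal sketch of Poisson-process stimulation with an absolute refractory period imposed on the output; the Hodgkin-Huxley cable is replaced here by a much simpler "fire unless refractory" rule, purely to illustrate how refractoriness thins a Poisson input train.

    ```python
    # Sketch: Poisson input pulses thinned by an absolute refractory period.
    import numpy as np

    rng = np.random.default_rng(0)
    rate_hz, duration_s, refractory_s = 200.0, 10.0, 0.003

    # Poisson input: exponential inter-pulse intervals
    intervals = rng.exponential(1.0 / rate_hz, size=int(rate_hz * duration_s * 2))
    input_times = np.cumsum(intervals)
    input_times = input_times[input_times < duration_s]

    output_times, last = [], -np.inf
    for t in input_times:
        if t - last >= refractory_s:       # pulse falls outside the refractory period
            output_times.append(t)
            last = t

    print(f"input pulses: {len(input_times)}, output spikes: {len(output_times)}")
    ```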

  17. Feature Interactions Enable Decoding of Sensorimotor Transformations for Goal-Directed Movement

    PubMed Central

    Barany, Deborah A.; Della-Maggiore, Valeria; Viswanathan, Shivakumar; Cieslak, Matthew

    2014-01-01

    Neurophysiology and neuroimaging evidence shows that the brain represents multiple environmental and body-related features to compute transformations from sensory input to motor output. However, it is unclear how these features interact during goal-directed movement. To investigate this issue, we examined the representations of sensory and motor features of human hand movements within the left-hemisphere motor network. In a rapid event-related fMRI design, we measured cortical activity as participants performed right-handed movements at the wrist, with either of two postures and two amplitudes, to move a cursor to targets at different locations. Using a multivoxel analysis technique with rigorous generalization tests, we reliably distinguished representations of task-related features (primarily target location, movement direction, and posture) in multiple regions. In particular, we identified an interaction between target location and movement direction in the superior parietal lobule, which may underlie a transformation from the location of the target in space to a movement vector. In addition, we found an influence of posture on primary motor, premotor, and parietal regions. Together, these results reveal the complex interactions between different sensory and motor features that drive the computation of sensorimotor transformations. PMID:24828640

  18. Multitasking: Effects of processing multiple auditory feature patterns

    PubMed Central

    Miller, Tova; Chen, Sufen; Lee, Wei Wei; Sussman, Elyse S.

    2016-01-01

    ERPs and behavioral responses were measured to assess how task-irrelevant sounds interact with task processing demands and affect the ability to monitor and track multiple sound events. Participants listened to four-tone sequential frequency patterns, and responded to frequency pattern deviants (reversals of the pattern). Irrelevant tone feature patterns (duration and intensity) and respective pattern deviants were presented together with frequency patterns and frequency pattern deviants in separate conditions. Responses to task-relevant and task-irrelevant feature pattern deviants were used to test processing demands for irrelevant sound input. Behavioral performance was significantly better when there were no distracting feature patterns. Errors primarily occurred in response to the to-be-ignored feature pattern deviants. Task-irrelevant elicitation of ERP components was consistent with the error analysis, indicating a level of processing for the irrelevant features. Task-relevant elicitation of ERP components was consistent with behavioral performance, demonstrating a “cost” of performance when there were two feature patterns presented simultaneously. These results provide evidence that the brain tracked the irrelevant duration and intensity feature patterns, affecting behavioral performance. Overall, our results demonstrate that irrelevant informational streams are processed at a cost, which may be considered a type of multitasking that is an ongoing, automatic processing of task-irrelevant sensory events. PMID:25939456

  19. Assessing mental stress from the photoplethysmogram: a numerical study

    PubMed Central

    Charlton, Peter H; Celka, Patrick; Farukh, Bushra; Chowienczyk, Phil; Alastruey, Jordi

    2018-01-01

    Objective: Mental stress is detrimental to cardiovascular health, being a risk factor for coronary heart disease and a trigger for cardiac events. However, it is not currently routinely assessed. The aim of this study was to identify features of the photoplethysmogram (PPG) pulse wave which are indicative of mental stress. Approach: A numerical model of pulse wave propagation was used to simulate blood pressure signals, from which simulated PPG pulse waves were estimated using a transfer function. Pulse waves were simulated at six levels of stress by changing the model input parameters both simultaneously and individually, in accordance with haemodynamic changes associated with stress. Thirty-two feature measurements were extracted from pulse waves at three measurement sites: the brachial, radial and temporal arteries. Features which changed significantly with stress were identified using the Mann–Kendall monotonic trend test. Main results: Seventeen features exhibited significant trends with stress in measurements from at least one site. Three features showed significant trends at all three sites: the time from pulse onset to peak, the time from the dicrotic notch to pulse end, and the pulse rate. More features showed significant trends at the radial artery (15) than the brachial (8) or temporal (7) arteries. Most features were influenced by multiple input parameters. Significance: The features identified in this study could be used to monitor stress in healthcare and consumer devices. Measurements at the radial artery may provide superior performance compared with the brachial or temporal arteries. In vivo studies are required to confirm these observations. PMID:29658894
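
    A minimal sketch of extracting two of the stress-sensitive features named above, the pulse rate and the time from pulse onset to peak, from a toy sinusoidal surrogate for the PPG pulse wave (not the model-generated waveforms).

    ```python
    # Sketch: pulse rate and onset-to-peak time from a toy PPG-like waveform.
    import numpy as np
    from scipy.signal import find_peaks

    fs = 250                                          # sampling frequency, Hz
    t = np.arange(0, 10, 1.0 / fs)
    ppg = 1.0 + np.sin(2 * np.pi * 1.2 * t)           # toy pulse wave, ~72 beats/min

    peaks, _ = find_peaks(ppg, distance=fs // 2)      # systolic peaks
    feet, _ = find_peaks(-ppg, distance=fs // 2)      # pulse onsets (waveform feet)

    pulse_rate = 60.0 * fs / np.mean(np.diff(peaks))
    onset_to_peak = np.mean([(p - feet[feet < p][-1]) / fs
                             for p in peaks if np.any(feet < p)])
    print(f"pulse rate ~ {pulse_rate:.0f} bpm, onset-to-peak ~ {onset_to_peak * 1000:.0f} ms")
    ```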

  20. A comparison of supervised classification methods for the prediction of substrate type using multibeam acoustic and legacy grain-size data.

    PubMed

    Stephens, David; Diesing, Markus

    2014-01-01

    Detailed seabed substrate maps are increasingly in demand for effective planning and management of marine ecosystems and resources. It has become common to use remotely sensed multibeam echosounder data in the form of bathymetry and acoustic backscatter in conjunction with ground-truth sampling data to inform the mapping of seabed substrates. While such data sets have until recently been classified mainly by expert interpretation, more objective, faster and repeatable methods of seabed classification are now required. This study compares the performances of a range of supervised classification techniques for predicting substrate type from multibeam echosounder data. The study area is located in the North Sea, off the north-east coast of England. A total of 258 ground-truth samples were classified into four substrate classes. Multibeam bathymetry and backscatter data, and a range of secondary features derived from these datasets were used in this study. Six supervised classification techniques were tested: Classification Trees, Support Vector Machines, k-Nearest Neighbour, Neural Networks, Random Forest and Naive Bayes. Each classifier was trained multiple times using different input features, including i) the two primary features of bathymetry and backscatter, ii) a subset of the features chosen by a feature selection process and iii) all of the input features. The predictive performances of the models were validated using a separate test set of ground-truth samples. The statistical significance of model performance relative to a simple baseline model (Nearest Neighbour predictions on bathymetry and backscatter) was tested to assess the benefits of using more sophisticated approaches. The best performing models were tree-based methods and Naive Bayes, which achieved accuracies of around 0.8 and kappa coefficients of up to 0.5 on the test set. The models that used all input features did not generally perform well, highlighting the need for some means of feature selection.
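
    As a rough illustration of the kind of classifier comparison described above, the following hedged sketch trains a few scikit-learn classifiers on synthetic stand-ins for the bathymetry, backscatter and derived features; the data, class labels and parameter choices are placeholders, not values from the study.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for bathymetry, backscatter and derived secondary features
    X = rng.normal(size=(258, 6))
    y = rng.integers(0, 4, size=258)          # four substrate classes

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    classifiers = {
        "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "Naive Bayes": GaussianNB(),
        "k-Nearest Neighbour": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(kernel="rbf"),
    }
    for name, clf in classifiers.items():
        model = make_pipeline(StandardScaler(), clf)
        model.fit(X_train, y_train)
        print(name, "test accuracy:", round(model.score(X_test, y_test), 3))
    ```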

  1. Establishing a learning foundation in a dynamically changing world: Insights from artificial language work

    NASA Astrophysics Data System (ADS)

    Gonzales, Kalim

    It is argued that infants build a foundation for learning about the world through their incidental acquisition of the spatial and temporal regularities surrounding them. A challenge is that learning occurs across multiple contexts whose statistics can greatly differ. Two artificial language studies with 12-month-olds demonstrate that infants come prepared to parse statistics across contexts using the temporal and perceptual features that distinguish one context from another. These results suggest that infants can organize their statistical input with a wider range of features than is typically considered. Possible attention, decision making, and memory mechanisms are discussed.

  2. Prediction of rat protein subcellular localization with pseudo amino acid composition based on multiple sequential features.

    PubMed

    Shi, Ruijia; Xu, Cunshuan

    2011-06-01

    The study of rat proteins is an indispensable task in experimental medicine and drug development. The function of a rat protein is closely related to its subcellular location. Based on the above concept, we construct a benchmark rat proteins dataset and develop a combined approach for predicting the subcellular localization of rat proteins. From the protein primary sequence, multiple sequential features are obtained by using discrete Fourier analysis, a position conservation scoring function and increment of diversity, and these sequential features are selected as input parameters of the support vector machine. By the jackknife test, the overall success rate of prediction is 95.6% on the rat proteins dataset. Our method was also applied to the apoptosis proteins dataset and the Gram-negative bacterial proteins dataset with the jackknife test; the overall success rates were 89.9% and 96.4%, respectively. The above results indicate that our proposed method is quite promising and may play a complementary role to the existing predictors in this area.
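
    A jackknife (leave-one-out) evaluation of an SVM on sequence-derived feature vectors, as used above, can be sketched as follows. The feature matrix and labels are synthetic placeholders; the actual Fourier, conservation-score and increment-of-diversity feature extraction is not reproduced.

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    # Synthetic stand-ins for sequence-derived features (Fourier coefficients,
    # conservation scores, increment of diversity); 4 hypothetical location classes.
    X = rng.normal(size=(120, 40))
    y = rng.integers(0, 4, size=120)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())   # jackknife test
    print("jackknife success rate:", round(scores.mean(), 3))
    ```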

  3. A judicious multiple hypothesis tracker with interacting feature extraction

    NASA Astrophysics Data System (ADS)

    McAnanama, James G.; Kirubarajan, T.

    2009-05-01

    The multiple hypotheses tracker (mht) is recognized as an optimal tracking method due to the enumeration of all possible measurement-to-track associations, which does not involve any approximation in its original formulation. However, its practical implementation is limited by the NP-hard nature of this enumeration. As a result, a number of maintenance techniques such as pruning and merging have been proposed to bound the computational complexity. It is possible to improve the performance of a tracker, mht or not, using feature information (e.g., signal strength, size, type) in addition to kinematic data. However, in most tracking systems, the extraction of features from the raw sensor data is typically independent of the subsequent association and filtering stages. In this paper, a new approach, called the Judicious Multi Hypotheses Tracker (jmht), whereby there is an interaction between feature extraction and the mht, is presented. The measure of the quality of feature extraction is input into measurement-to-track association while the prediction step feeds back the parameters to be used in the next round of feature extraction. The motivation for this forward and backward interaction between feature extraction and tracking is to improve the performance in both steps. This approach allows for a more rational partitioning of the feature space and removes unlikely features from the assignment problem. Simulation results demonstrate the benefits of the proposed approach.
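
    One way to picture how a feature-extraction quality measure can feed into measurement-to-track association is the toy assignment below. It is only a hedged sketch: the positions, quality scores and penalty weighting are hypothetical, and it does not reproduce the hypothesis enumeration or the feedback loop of the jmht.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical predicted track positions and received measurements (2-D).
    tracks = np.array([[0.0, 0.0], [10.0, 5.0], [3.0, 8.0]])
    measurements = np.array([[0.4, -0.2], [9.7, 5.3], [2.8, 8.4], [20.0, 20.0]])
    # Hypothetical per-measurement feature-extraction quality scores in [0, 1]
    feature_quality = np.array([0.9, 0.8, 0.95, 0.2])

    # Kinematic cost: squared distance between each track and each measurement
    kinematic = ((tracks[:, None, :] - measurements[None, :, :]) ** 2).sum(axis=-1)
    # Penalise associations whose supporting features were poorly extracted
    cost = kinematic + 5.0 * (1.0 - feature_quality)[None, :]

    rows, cols = linear_sum_assignment(cost)
    for t, m in zip(rows, cols):
        print(f"track {t} <- measurement {m}  (cost {cost[t, m]:.2f})")
    ```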

  4. Effective seat-to-head transmissibility in whole-body vibration: Effects of posture and arm position

    NASA Astrophysics Data System (ADS)

    Rahmatalla, Salam; DeShaw, Jonathan

    2011-12-01

    Seat-to-head transmissibility is a biomechanical measure that has been widely used for many decades to evaluate seat dynamics and human response to vibration. Traditionally, transmissibility has been used to correlate single-input or multiple-input with single-output motion; it has not been effectively used for multiple-input and multiple-output scenarios due to the complexity of dealing with the coupled motions caused by the cross-axis effect. This work presents a novel approach to use transmissibility effectively for single- and multiple-input and multiple-output whole-body vibrations. In this regard, the full transmissibility matrix is transformed into a single graph, such as those for single-input and single-output motions. Singular value decomposition and maximum distortion energy theory were used to achieve the latter goal. Seat-to-head transmissibility matrices for single-input/multiple-output in the fore-aft direction, single-input/multiple-output in the vertical direction, and multiple-input/multiple-output directions are investigated in this work. A total of ten subjects participated in this study. Discrete frequencies of 0.5-16 Hz were used for the fore-aft direction using supported and unsupported back postures. Random ride files from a dozer machine were used for the vertical and multiple-axis scenarios considering two arm postures: using the armrests or grasping the steering wheel. For single-input/multiple-output, the results showed that the proposed method was very effective in showing the frequencies where the transmissibility is mostly sensitive for the two sitting postures and two arm positions. For multiple-input/multiple-output, the results showed that the proposed effective transmissibility indicated higher values for the armrest-supported posture than for the steering-wheel-supported posture.
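
    A minimal sketch of collapsing a frequency-dependent MIMO transmissibility matrix into a single effective curve via singular value decomposition is given below. The matrices are random placeholders, and taking the largest singular value at each frequency is an illustrative simplification of the SVD plus maximum-distortion-energy combination used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    freqs = np.linspace(0.5, 16.0, 32)          # Hz

    # Hypothetical 3x3 complex seat-to-head transmissibility matrix at each
    # frequency (inputs: seat x/y/z, outputs: head x/y/z) -- synthetic values.
    H = rng.normal(size=(len(freqs), 3, 3)) + 1j * rng.normal(size=(len(freqs), 3, 3))

    # Collapse the MIMO matrix into one curve by taking the largest singular
    # value at each frequency (an assumption made for illustration only).
    effective = np.array([np.linalg.svd(H[k], compute_uv=False)[0]
                          for k in range(len(freqs))])

    for f, t in zip(freqs[:5], effective[:5]):
        print(f"{f:5.2f} Hz  effective transmissibility {t:.3f}")
    ```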

  5. Quantum theory of multiple-input-multiple-output Markovian feedback with diffusive measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chia, A.; Wiseman, H. M.

    2011-07-15

    Feedback control engineers have been interested in multiple-input-multiple-output (MIMO) extensions of single-input-single-output (SISO) results of various kinds due to their rich mathematical structure and practical applications. An outstanding problem in quantum feedback control is the extension of the SISO theory of Markovian feedback by Wiseman and Milburn [Phys. Rev. Lett. 70, 548 (1993)] to multiple inputs and multiple outputs. Here we generalize the SISO homodyne-mediated feedback theory to allow for multiple inputs, multiple outputs, and arbitrary diffusive quantum measurements. We thus obtain a MIMO framework which resembles the SISO theory and whose additional mathematical structure is highlighted by the extensive use of vector-operator algebra.

  6. Spatial-area selective retrieval of multiple object-place associations in a hierarchical cognitive map formed by theta phase coding.

    PubMed

    Sato, Naoyuki; Yamaguchi, Yoko

    2009-06-01

    The human cognitive map is known to be hierarchically organized consisting of a set of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, while the neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate the memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, according to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that the hierarchical cognitive maps have computational advantages on a spatial-area selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.

  7. A support vector machine approach for classification of welding defects from ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming

    2014-07-01

    Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energy of the wavelet coefficients at different frequency channels is used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
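
    A hedged sketch of wavelet-packet energy features feeding an SVM might look like the following; the echo signals and labels are synthetic, the wavelet and decomposition level are guesses, and the bees-algorithm feature selection and parameter optimisation are omitted.

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wavelet_packet_energies(signal, wavelet="db4", level=3):
        """Energy of the wavelet-packet coefficients in each frequency channel."""
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="freq")
        return np.array([np.sum(node.data ** 2) for node in nodes])

    rng = np.random.default_rng(3)
    # Synthetic stand-ins for ultrasonic defect echo signals of two defect classes
    signals = rng.normal(size=(60, 1024))
    labels = rng.integers(0, 2, size=60)

    X = np.array([wavelet_packet_energies(s) for s in signals])
    X /= X.sum(axis=1, keepdims=True)        # normalised energy feature vectors

    clf = SVC(kernel="rbf").fit(X[:40], labels[:40])
    print("test accuracy:", clf.score(X[40:], labels[40:]))
    ```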

  8. Object activation in semantic memory from visual multimodal feature input.

    PubMed

    Kraut, Michael A; Kremen, Sarah; Moo, Lauren R; Segal, Jessica B; Calhoun, Vincent; Hart, John

    2002-01-01

    The human brain's representation of objects has been proposed to exist as a network of coactivated neural regions present in multiple cognitive systems. However, it is not known if there is a region specific to the process of activating an integrated object representation in semantic memory from multimodal feature stimuli (e.g., picture-word). A previous study using word-word feature pairs as stimulus input showed that the left thalamus is integrally involved in object activation (Kraut, Kremen, Segal, et al., this issue). In the present study, participants were presented picture-word pairs that are features of objects, with the task being to decide if together they "activated" an object not explicitly presented (e.g., picture of a candle and the word "icing" activate the internal representation of a "cake"). For picture-word pairs that combine to elicit an object, signal change was detected in the ventral temporo-occipital regions, pre-SMA, left primary somatomotor cortex, both caudate nuclei, and the dorsal thalami bilaterally. These findings suggest that the left thalamus is engaged for either picture or word stimuli, but the right thalamus appears to be involved when picture stimuli are also presented with words in semantic object activation tasks. The somatomotor signal changes are likely secondary to activation of the semantic object representations from multimodal visual stimuli.

  9. Application of lifting wavelet and random forest in compound fault diagnosis of gearbox

    NASA Astrophysics Data System (ADS)

    Chen, Tang; Cui, Yulian; Feng, Fuzhou; Wu, Chunzhi

    2018-03-01

    Because the compound fault characteristic signals of an armored vehicle gearbox are weak and the fault types are difficult to identify, a fault diagnosis method based on the lifting wavelet and random forest is proposed. First, this method uses the lifting wavelet transform to decompose the original vibration signal into multiple layers and reconstructs the low-frequency and high-frequency components obtained from the decomposition to get multiple component signals. Then the time-domain feature parameters are obtained for each component signal to form multiple feature vectors, which are input into the random forest pattern recognition classifier to determine the compound fault type. Finally, the method is verified on a variety of compound fault data from a gearbox fault simulation test platform; the results show that the recognition accuracy of the fault diagnosis method combining the lifting wavelet and the random forest is up to 99.99%.
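
    The time-domain feature extraction and random forest classification stage can be sketched as below. The component signals and fault labels are synthetic placeholders, and the lifting wavelet decomposition itself is not reproduced.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def time_domain_features(x):
        """A few common time-domain indicators of a vibration component signal."""
        rms = np.sqrt(np.mean(x ** 2))
        return np.array([
            rms,
            np.max(np.abs(x)),                            # peak value
            np.max(np.abs(x)) / rms,                      # crest factor
            ((x - x.mean()) ** 4).mean() / x.var() ** 2,  # kurtosis
            ((x - x.mean()) ** 3).mean() / x.std() ** 3,  # skewness
        ])

    rng = np.random.default_rng(4)
    # Synthetic stand-ins for reconstructed component signals of three fault types;
    # in the paper these components come from a lifting wavelet decomposition.
    signals = rng.normal(size=(90, 2048))
    fault_type = rng.integers(0, 3, size=90)

    X = np.array([time_domain_features(s) for s in signals])
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[:60], fault_type[:60])
    print("test accuracy:", clf.score(X[60:], fault_type[60:]))
    ```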

  10. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    Scale Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and not prone to scene changes over time, which constitutes a first approach to the automation of processes in mapping applications such as geometric correction, creation of orthophotos and generation of 3D models. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored across different images and parameter values, and the resulting optimized values are corroborated using separate validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
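
    For orientation, five tunable SIFT parameters correspond to the arguments of OpenCV's SIFT constructor. The sketch below is illustrative only: the parameter values are placeholders rather than the optimized ones reported, and the images are synthetic stand-ins for two acquisition epochs.

    ```python
    import cv2
    import numpy as np

    rng = np.random.default_rng(10)
    # Synthetic textured image and a translated copy (stand-ins for two epochs)
    img1 = (rng.random((400, 400)) * 255).astype(np.uint8)
    img1 = cv2.GaussianBlur(img1, (0, 0), sigmaX=2)
    img2 = np.roll(img1, shift=(15, -10), axis=(0, 1))

    # The five tunable SIFT parameters map onto the arguments below; these
    # values are placeholders, not the optimised ones reported.
    sift = cv2.SIFT_create(nfeatures=0, nOctaveLayers=4, contrastThreshold=0.02,
                           edgeThreshold=15, sigma=1.6)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching keeps only reliable tie-point candidates
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2) if des1 is not None and des2 is not None else []
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    print(f"{len(good)} candidate tie-points")
    ```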

  11. Gateway design specification for fiber optic local area networks

    NASA Technical Reports Server (NTRS)

    1985-01-01

    This is a Design Specification for a gateway to interconnect fiber optic local area networks (LAN's). The internetworking protocols for a gateway device that will interconnect multiple local area networks are defined. This specification serves as input for preparation of detailed design specifications for the hardware and software of a gateway device. General characteristics to be incorporated in the gateway such as node address mapping, packet fragmentation, and gateway routing features are described.

  12. SINDA/FLUINT Stratified Tank Modeling for Cryogenic Propellant Tanks

    NASA Technical Reports Server (NTRS)

    Sakowski, Barbara

    2014-01-01

    A general purpose SINDA/FLUINT (S/F) stratified tank model was created to simulate self-pressurization and axial jet TVS. Stratified layers in the vapor and liquid are modeled using S/F lumps. The stratified tank model was constructed to permit incorporating the following additional features: multiple or singular lumps in the liquid and vapor regions of the tank; real gases (including mixtures) and compressible liquids; venting, pressurizing, and draining; condensation and evaporation/boiling; wall heat transfer; and elliptical, cylindrical, and spherical tank geometries. Extensive user logic is used to allow detailed tailoring, so the model does not have to be rebuilt from scratch for each case. Most code input for a specific case is entered through the Registers Data Block. Lump volumes are determined from user input of the geometric tank dimensions (height, width, etc.), and the liquid level can be input either as a volume percentage of fill level or as an actual liquid level height.

  13. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy

    PubMed Central

    Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.

    2016-01-01

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models. PMID:27876821

  14. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy.

    PubMed

    Knijnenburg, Theo A; Klau, Gunnar W; Iorio, Francesco; Garnett, Mathew J; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F A

    2016-11-23

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present 'Logic Optimization for Binary Input to Continuous Output' (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models.
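
    A brute-force toy version of searching for a two-gene logic predictor of a continuous response is sketched below; it is not the LOBICO integer-programming formulation, and the mutation matrix and response values are synthetic.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(5)
    n_lines, n_genes = 200, 8
    mutations = rng.integers(0, 2, size=(n_lines, n_genes))   # binary input features
    # Synthetic continuous drug response: lines with gene 1 OR gene 4 mutated are sensitive
    response = -1.5 * np.logical_or(mutations[:, 1], mutations[:, 4]) \
               + rng.normal(scale=0.5, size=n_lines)

    def separation(predictor):
        """Mean response gap between predicted-resistant and predicted-sensitive lines."""
        sel = predictor.astype(bool)
        if sel.sum() in (0, len(sel)):
            return -np.inf
        return response[~sel].mean() - response[sel].mean()

    best = max(
        ((separation(op(mutations[:, i], mutations[:, j])), f"g{i} {name} g{j}")
         for i, j in combinations(range(n_genes), 2)
         for name, op in (("AND", np.logical_and), ("OR", np.logical_or))),
        key=lambda t: t[0],
    )
    best_single = max((separation(mutations[:, i]), f"g{i}") for i in range(n_genes))
    print("best single-gene predictor:", best_single)
    print("best two-gene logic model: ", best)
    ```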

  15. Combination of High-density Microelectrode Array and Patch Clamp Recordings to Enable Studies of Multisynaptic Integration.

    PubMed

    Jäckel, David; Bakkum, Douglas J; Russell, Thomas L; Müller, Jan; Radivojevic, Milos; Frey, Urs; Franke, Felix; Hierlemann, Andreas

    2017-04-20

    We present a novel, all-electric approach to record and to precisely control the activity of tens of individual presynaptic neurons. The method allows for parallel mapping of the efficacy of multiple synapses and of the resulting dynamics of postsynaptic neurons in a cortical culture. For the measurements, we combine an extracellular high-density microelectrode array, featuring 11'000 electrodes for extracellular recording and stimulation, with intracellular patch-clamp recording. We are able to identify the contributions of individual presynaptic neurons - including inhibitory and excitatory synaptic inputs - to postsynaptic potentials, which enables us to study dendritic integration. Since the electrical stimuli can be controlled at microsecond resolution, our method makes it possible to evoke action potentials at tens of presynaptic cells in precisely orchestrated sequences of high reliability and minimum jitter. We demonstrate the potential of this method by evoking short- and long-term synaptic plasticity through manipulation of multiple synaptic inputs to a specific neuron.

  16. Using features of a Creole language to reconstruct population history and cultural evolution: tracing the English origins of Sranan.

    PubMed

    Sherriah, André C; Devonish, Hubert; Thomas, Ewart A C; Creanza, Nicole

    2018-04-05

    Creole languages are formed in conditions where speakers from distinct languages are brought together without a shared first language, typically under the domination of speakers from one of the languages and particularly in the context of the transatlantic slave trade and European colonialism. One such Creole in Suriname, Sranan, developed around the mid-seventeenth century, primarily out of contact between varieties of English from England, spoken by the dominant group, and multiple West African languages. The vast majority of the basic words in Sranan come from the language of the dominant group, English. Here, we compare linguistic features of modern-day Sranan with those of English as spoken in 313 localities across England. By way of testing proposed hypotheses for the origin of English words in Sranan, we find that 80% of the studied features of Sranan can be explained by similarity to regional dialect features at two distinct input locations within England, a cluster of locations near the port of Bristol and another cluster near Essex in eastern England. Our new hypothesis is supported by the geographical distribution of specific regional dialect features, such as post-vocalic rhoticity and word-initial 'h', and by phylogenetic analysis of these features, which shows evidence favouring input from at least two English dialects in the formation of Sranan. In addition to explicating the dialect features most prominent in the linguistic evolution of Sranan, our historical analyses also provide supporting evidence for two distinct hypotheses about the likely geographical origins of the English speakers whose language was an input to Sranan. The emergence as a likely input to Sranan of the speech forms of a cluster near Bristol is consistent with historical records, indicating that most of the indentured servants going to the Americas between 1654 and 1666 were from Bristol and nearby counties, and that of the cluster near Essex is consistent with documents showing that many of the governors and important planters came from the southeast of England (including London) (Smith 1987 The Genesis of the Creole Languages of Surinam; Smith 2009 In The handbook of pidgin and creole studies, pp. 98-129). This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'. © 2018 The Author(s).

  17. Evolution of late stage differentiates in the Palisades Sill, New York and New Jersey

    NASA Astrophysics Data System (ADS)

    Block, Karin A.; Steiner, Jeffrey C.; Puffer, John H.; Jones, Kevin M.; Goldstein, Steven L.

    2015-08-01

    The Palisades Sill at Upper Nyack, NY contains evolved rocks that crystallized as ferrodiabase and ferrogranophyre and occupy 50% to 60% of the local thickness. 143Nd/144Nd isotope values for rocks representing Palisades diversity range between 0.512320 and 0.512331, and indicate a homogeneous source for the Palisades and little or no contamination from shallow crustal sediments. Petrographic analysis of ferrodiabase suggests that strong iron enrichment was the result of prolonged quiescence in cycles of magmatic input. Ferrogranophyres in the updip northern Palisades at Upper Nyack are members of a suite of cogenetic rocks with similar composition to 'sandwich horizon' rocks of the southern Palisades at Fort Lee, NJ, but display distinct mineralogical and textural features. Differences in textural and mineralogical features are attributed to a) updip (lateral) migration of residual liquid as the sill propagated closer to the surface; b) deformation caused by tectonic shifts; and c) crystallization in the presence of deuteric hydrothermal fluids resulting in varying degrees of alteration. A model connecting multiple magmatic pulses, compaction and mobilization of residual liquid by compositional convection, closed-system differentiation, synchronous with tapping of the sill for extrusion of coeval basaltic subaerial flows is presented. The persistence of a low-temperature mushy layer, represented by ferrogranophyres, supports the possibility of a long-lived conduit subject to reopening after periods of quiescence in magmatic input, leading to the extrusion of the multiple flows of the Orange Mountain Basalt and perhaps even subsequent Preakness Basalt flows, depending on solidification conditions. A sub-Newark Basin network of sills subjected to similar protracted input of pulses as hypothesized for the Palisades was likely responsible for 600 ka of magmatic activity required to emplace a third set of Watchung flood basalts, the Hook Mountain Basalt.

  18. Support Vector Machine-Based Prediction of Local Tumor Control After Stereotactic Body Radiation Therapy for Early-Stage Non-Small Cell Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klement, Rainer J., E-mail: rainer_klement@gmx.de; Department of Radiotherapy and Radiation Oncology, Leopoldina Hospital, Schweinfurt; Allgäuer, Michael

    2014-03-01

    Background: Several prognostic factors for local tumor control probability (TCP) after stereotactic body radiation therapy (SBRT) for early stage non-small cell lung cancer (NSCLC) have been described, but no attempts have been undertaken to explore whether a nonlinear combination of potential factors might synergistically improve the prediction of local control. Methods and Materials: We investigated a support vector machine (SVM) for predicting TCP in a cohort of 399 patients treated at 13 German and Austrian institutions. Among 7 potential input features for the SVM we selected those most important on the basis of forward feature selection, thereby evaluating classifier performance by using 10-fold cross-validation and computing the area under the ROC curve (AUC). The final SVM classifier was built by repeating the feature selection 10 times with different splitting of the data for cross-validation and finally choosing only those features that were selected at least 5 out of 10 times. It was compared with a multivariate logistic model that was built by forward feature selection. Results: Local failure occurred in 12% of patients. Biologically effective dose (BED) at the isocenter (BED(ISO)) was the strongest predictor of TCP in the logistic model and also the most frequently selected input feature for the SVM. A bivariate logistic function of BED(ISO) and the pulmonary function indicator forced expiratory volume in 1 second (FEV1) yielded the best description of the data but resulted in a significantly smaller AUC than the final SVM classifier with the input features BED(ISO), age, baseline Karnofsky index, and FEV1 (0.696 ± 0.040 vs 0.789 ± 0.001, P<.03). The final SVM resulted in sensitivity and specificity of 67.0% ± 0.5% and 78.7% ± 0.3%, respectively. Conclusions: These results confirm that machine learning techniques like SVMs can be successfully applied to predict treatment outcome after SBRT. Improvements over traditional TCP modeling are expected through a nonlinear combination of multiple features, eventually helping in the task of personalized treatment planning.

  19. Support vector machine-based prediction of local tumor control after stereotactic body radiation therapy for early-stage non-small cell lung cancer.

    PubMed

    Klement, Rainer J; Allgäuer, Michael; Appold, Steffen; Dieckmann, Karin; Ernst, Iris; Ganswindt, Ute; Holy, Richard; Nestle, Ursula; Nevinny-Stickel, Meinhard; Semrau, Sabine; Sterzing, Florian; Wittig, Andrea; Andratschke, Nicolaus; Guckenberger, Matthias

    2014-03-01

    Several prognostic factors for local tumor control probability (TCP) after stereotactic body radiation therapy (SBRT) for early stage non-small cell lung cancer (NSCLC) have been described, but no attempts have been undertaken to explore whether a nonlinear combination of potential factors might synergistically improve the prediction of local control. We investigated a support vector machine (SVM) for predicting TCP in a cohort of 399 patients treated at 13 German and Austrian institutions. Among 7 potential input features for the SVM we selected those most important on the basis of forward feature selection, thereby evaluating classifier performance by using 10-fold cross-validation and computing the area under the ROC curve (AUC). The final SVM classifier was built by repeating the feature selection 10 times with different splitting of the data for cross-validation and finally choosing only those features that were selected at least 5 out of 10 times. It was compared with a multivariate logistic model that was built by forward feature selection. Local failure occurred in 12% of patients. Biologically effective dose (BED) at the isocenter (BED(ISO)) was the strongest predictor of TCP in the logistic model and also the most frequently selected input feature for the SVM. A bivariate logistic function of BED(ISO) and the pulmonary function indicator forced expiratory volume in 1 second (FEV1) yielded the best description of the data but resulted in a significantly smaller AUC than the final SVM classifier with the input features BED(ISO), age, baseline Karnofsky index, and FEV1 (0.696 ± 0.040 vs 0.789 ± 0.001, P<.03). The final SVM resulted in sensitivity and specificity of 67.0% ± 0.5% and 78.7% ± 0.3%, respectively. These results confirm that machine learning techniques like SVMs can be successfully applied to predict treatment outcome after SBRT. Improvements over traditional TCP modeling are expected through a nonlinear combination of multiple features, eventually helping in the task of personalized treatment planning. Copyright © 2014 Elsevier Inc. All rights reserved.
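
    A single pass of forward feature selection with an SVM, scored by 10-fold cross-validated AUC, can be sketched with scikit-learn as follows; the predictors and outcomes are synthetic stand-ins, and the repeated selection over ten data splits used in the study is not reproduced.

    ```python
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)
    # Synthetic stand-ins for the 7 candidate predictors (e.g. BED(ISO), age, FEV1, ...)
    X = rng.normal(size=(399, 7))
    # Synthetic binary outcome: local control (1) vs local failure (0), ~12% failures
    y = (rng.random(399) > 0.12).astype(int)

    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    selector = SequentialFeatureSelector(svm, n_features_to_select=4,
                                         direction="forward", cv=10, scoring="roc_auc")
    selector.fit(X, y)
    print("selected feature indices:", np.flatnonzero(selector.get_support()))

    auc = cross_val_score(svm, X[:, selector.get_support()], y, cv=10, scoring="roc_auc")
    print("10-fold cross-validated AUC:", auc.mean().round(3))
    ```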

  20. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The probability of correct classification of various populations in the data was defined as the primary performance index. Since the multispectral data were also of a multiclass nature, a Bayes error estimation procedure that depended on a set of class statistics alone was required. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear shift-invariant multiple-port system in which the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial (and hence spectral) correlation matrices through the system, was developed.
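
    A Bayes error estimate that depends only on class statistics can be approximated by Monte Carlo sampling instead of evaluating the N-dimensional integral directly. The sketch below assumes Gaussian class-conditional densities with hypothetical means, covariances and priors.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(7)

    # Hypothetical class statistics (means, covariances, priors) in an
    # N-dimensional feature space; three classes, N = 4 spectral bands.
    means = [rng.normal(size=4) * 2 for _ in range(3)]
    covs = [np.eye(4) * s for s in (1.0, 1.5, 0.8)]
    priors = np.array([0.5, 0.3, 0.2])

    def bayes_error_mc(n_samples=200_000):
        """Monte Carlo estimate of the multiclass Bayes error."""
        errors = 0
        counts = rng.multinomial(n_samples, priors)
        for k, n_k in enumerate(counts):
            x = rng.multivariate_normal(means[k], covs[k], size=n_k)
            # Posterior-proportional scores p(x|c) * P(c) for every class
            scores = np.stack([priors[c] * multivariate_normal.pdf(x, means[c], covs[c])
                               for c in range(3)], axis=1)
            errors += np.sum(scores.argmax(axis=1) != k)
        return errors / n_samples

    print("estimated Bayes error:", round(bayes_error_mc(), 4))
    ```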

  1. Unsupervised Deep Learning Applied to Breast Density Segmentation and Mammographic Risk Scoring.

    PubMed

    Kallenberg, Michiel; Petersen, Kersten; Nielsen, Mads; Ng, Andrew Y; Pengfei Diao; Igel, Christian; Vachon, Celine M; Holland, Katharina; Winkel, Rikke Rass; Karssemeijer, Nico; Lillholm, Martin

    2016-05-01

    Mammographic risk scoring has commonly been automated by extracting a set of handcrafted features from mammograms, and relating the responses directly or indirectly to breast cancer risk. We present a method that learns a feature hierarchy from unlabeled data. When the learned features are used as the input to a simple classifier, two different tasks can be addressed: i) breast density segmentation, and ii) scoring of mammographic texture. The proposed model learns features at multiple scales. To control the model's capacity, a novel sparsity regularizer is introduced that incorporates both lifetime and population sparsity. We evaluated our method on three different clinical datasets. Our state-of-the-art results show that the learned breast density scores have a very strong positive relationship with manual ones, and that the learned texture scores are predictive of breast cancer. The model is easy to apply and generalizes to many other segmentation and scoring problems.

  2. Spectral-spatial classification of hyperspectral image using three-dimensional convolution network

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Yu, Xuchu; Zhang, Pengqiang; Tan, Xiong; Wang, Ruirui; Zhi, Lu

    2018-01-01

    Recently, hyperspectral image (HSI) classification has become a focus of research. However, the complex structure of an HSI makes feature extraction difficult to achieve. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. The design of an improved 3-D convolutional neural network (3D-CNN) model for HSI classification is described. This model extracts features from both the spectral and spatial dimensions through the application of 3-D convolutions, thereby capturing the important discrimination information encoded in multiple adjacent bands. The designed model views the HSI cube data altogether without relying on any pre- or postprocessing. In addition, the model is trained in an end-to-end fashion without any handcrafted features. The designed model was applied to three widely used HSI datasets. The experimental results demonstrate that the 3D-CNN-based method outperforms conventional methods even with limited labeled training samples.
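
    A minimal 3-D convolutional network of the general kind described above can be sketched in PyTorch as follows; the layer sizes, band count and patch size are illustrative assumptions, not the architecture reported in the paper.

    ```python
    import torch
    import torch.nn as nn

    class HSI3DCNN(nn.Module):
        """Minimal 3-D CNN for patch-wise hyperspectral classification.

        Input: (batch, 1, bands, height, width) cubes; sizes are illustrative guesses.
        """
        def __init__(self, n_classes=9, n_bands=103, patch=7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
                nn.MaxPool3d(kernel_size=(2, 1, 1)),
            )
            self.classifier = nn.Linear(16 * (n_bands // 2) * patch * patch, n_classes)

        def forward(self, x):
            x = self.features(x)          # 3-D convolutions over bands and space
            return self.classifier(x.flatten(1))

    # Dummy batch of 4 patches, 103 bands, 7x7 spatial window
    x = torch.randn(4, 1, 103, 7, 7)
    print(HSI3DCNN()(x).shape)   # -> torch.Size([4, 9])
    ```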

  3. Sampled-data synchronisation of coupled harmonic oscillators with communication and input delays subject to controller failure

    NASA Astrophysics Data System (ADS)

    Zhao, Liyun; Zhou, Jin; Wu, Quanjun

    2016-01-01

    This paper considers the sampled-data synchronisation problems of coupled harmonic oscillators with communication and input delays subject to controller failure. A synchronisation protocol is proposed for such oscillator systems over a directed network topology, and then some general algebraic criteria on exponential convergence for the proposed protocol are established. The main features of the present investigation include: (1) both communication and input delays are addressed simultaneously, and the directed network topology is considered for the first time; and (2) the effects of time delays on synchronisation performance are theoretically and numerically investigated. It is shown that in the absence of communication delays, coupled harmonic oscillators can achieve synchronised oscillatory motion. If, however, communication delays are nonzero at infinitely many sampled-data instants, the synchronisation (or consensus) state is zero. This conclusion can be used as an effective control strategy to stabilise coupled harmonic oscillators in practical applications. Furthermore, it is interesting to find that increasing either communication or input delays will enhance the synchronisation performance of coupled harmonic oscillators. Subsequently, numerical examples illustrate and visualise the theoretical results.

  4. [Research on spectra recognition method for cabbages and weeds based on PCA and SIMCA].

    PubMed

    Zu, Qin; Deng, Wei; Wang, Xiu; Zhao, Chun-Jiang

    2013-10-01

    In order to improve the accuracy and efficiency of weed identification, the difference in spectral reflectance was employed to distinguish between crops and weeds. First, different combinations of the Savitzky-Golay (SG) convolutional derivative and the multiplicative scattering correction (MSC) method were applied to preprocess the raw spectral data. Then a clustering analysis of the various types of plants was completed by using the principal component analysis (PCA) method, and the feature wavelengths that were sensitive for classifying the various types of plants were extracted according to the corresponding loading plots of the optimal principal components in the PCA results. Finally, setting the feature wavelengths as the input variables, the soft independent modeling of class analogy (SIMCA) classification method was used to identify the various types of plants. The experimental results of classifying cabbages and weeds showed that, on the basis of the optimal pretreatment obtained by a combined application of MSC and SG convolutional derivation with the SG parameters set to a first-order derivative, a third-degree polynomial and 51 smoothing points, 23 feature wavelengths were extracted in accordance with the top three principal components in the PCA results. When the SIMCA method was used for classification with the previously selected 23 feature wavelengths set as the input variables, the classification rates of the modeling set and the prediction set were up to 98.6% and 100%, respectively.

  5. MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.

    PubMed

    Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel

    2016-01-01

    We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments on both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
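
    The selection of mutually consistent cluster correspondences via a maximum weighted clique can be illustrated with networkx; the graph below is a tiny hypothetical example, with node weights standing in for inlier counts.

    ```python
    import networkx as nx

    # Hypothetical correspondence graph: nodes are tentative cluster-to-cluster
    # matches, node weights are inlier counts, and edges connect matches that
    # are mutually geometrically consistent.
    G = nx.Graph()
    candidate_matches = {"A1-B2": 14, "A2-B1": 9, "A3-B3": 11, "A1-B3": 4}
    for node, inliers in candidate_matches.items():
        G.add_node(node, weight=inliers)          # integer weights, as networkx requires
    G.add_edges_from([("A1-B2", "A2-B1"), ("A1-B2", "A3-B3"), ("A2-B1", "A3-B3")])

    clique, weight = nx.max_weight_clique(G, weight="weight")
    print("selected consistent matches:", clique, "total inliers:", weight)
    ```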

  6. Using input feature information to improve ultraviolet retrieval in neural networks

    NASA Astrophysics Data System (ADS)

    Sun, Zhibin; Chang, Ni-Bin; Gao, Wei; Chen, Maosi; Zempila, Melina

    2017-09-01

    In neural networks, the training/predicting accuracy and algorithm efficiency can be improved significantly via accurate input feature extraction. In this study, spatial features of several important factors in retrieving surface ultraviolet (UV) are extracted. An extreme learning machine (ELM) is used to retrieve the surface UV of 2014 in the continental United States, using the extracted features. The results indicate that more input weights can improve the learning capacity of neural networks.
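
    For reference, a basic extreme learning machine is only a few lines: random, fixed hidden-layer weights plus a least-squares solve for the output weights. The sketch below uses synthetic stand-ins for the extracted spatial features and the surface UV target.

    ```python
    import numpy as np

    def elm_train(X, y, n_hidden=200, seed=8):
        """Train a basic extreme learning machine for regression.

        Hidden-layer weights are random and fixed; only the output weights are
        solved for with a least-squares fit (pseudo-inverse).
        """
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                    # random hidden-layer features
        beta = np.linalg.pinv(H) @ y              # output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    rng = np.random.default_rng(9)
    # Synthetic stand-ins for extracted spatial input features and the surface UV target
    X = rng.normal(size=(500, 12))
    y = X[:, 0] * 0.5 - X[:, 3] ** 2 + rng.normal(scale=0.1, size=500)

    W, b, beta = elm_train(X[:400], y[:400])
    pred = elm_predict(X[400:], W, b, beta)
    print("test RMSE:", np.sqrt(np.mean((pred - y[400:]) ** 2)).round(3))
    ```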

  7. Three-input majority logic gate and multiple input logic circuit based on DNA strand displacement.

    PubMed

    Li, Wei; Yang, Yang; Yan, Hao; Liu, Yan

    2013-06-12

    In biomolecular programming, the properties of biomolecules such as proteins and nucleic acids are harnessed for computational purposes. The field has gained considerable attention due to the possibility of exploiting the massive parallelism that is inherent in natural systems to solve computational problems. DNA has already been used to build complex molecular circuits, where the basic building blocks are logic gates that produce single outputs from one or more logical inputs. We designed and experimentally realized a three-input majority gate based on DNA strand displacement. One of the key features of a three-input majority gate is that the three inputs have equal priority, and the output will be true if at least two of the inputs are true. Our design consists of a central, circular DNA strand with three unique domains between which are identical joint sequences. Before inputs are introduced to the system, each domain and half of each joint is protected by one complementary ssDNA that displays a toehold for subsequent displacement by the corresponding input. With this design the relationship between any two domains is analogous to the relationship between inputs in a majority gate. Displacing two or more of the protection strands will expose at least one complete joint and return a true output; displacing none or only one of the protection strands will not expose a complete joint and will return a false output. Further, we designed and realized a complex five-input logic gate based on the majority gate described here. By controlling two of the five inputs the complex gate can realize every combination of OR and AND gates of the other three inputs.

  8. Digital Beamforming Synthetic Aperture Radar Developments at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Rincon, Rafael; Fatoyinbo, Temilola; Osmanoglu, Batuhan; Lee, Seung Kuk; Du Toit, Cornelis F.; Perrine, Martin; Ranson, K. Jon; Sun, Guoqing; Deshpande, Manohar; Beck, Jaclyn

    2016-01-01

    Advanced Digital Beamforming (DBF) Synthetic Aperture Radar (SAR) technology is an area of research and development pursued at the NASA Goddard Space Flight Center (GSFC). Advanced SAR architectures enhance radar performance and open a new set of capabilities in radar remote sensing. DBSAR-2 and EcoSAR are two state-of-the-art radar systems recently developed and tested. These new instruments employ multiple input-multiple output (MIMO) architectures characterized by multi-mode operation, software defined waveform generation, digital beamforming, and configurable radar parameters. The instruments have been developed to support several disciplines in Earth and Planetary sciences. This paper describes the radars' advanced features and reports on the latest SAR processing and calibration efforts.

  9. Electrometer Amplifier With Overload Protection

    NASA Technical Reports Server (NTRS)

    Woeller, F. H.; Alexander, R.

    1986-01-01

    Circuit features low noise, low input offset, and high linearity. Input preamplifier includes input-overload protection and a nulling circuit to subtract dc offset from output. Prototype dc amplifier designed for use with ion detector has features desirable in general laboratory and field instrumentation.

  10. Clinical application of a light-pen computer system for quantitative angiography

    NASA Technical Reports Server (NTRS)

    Alderman, E. L.

    1975-01-01

    The important features in a clinical system for quantitative angiography were examined. The human interface for data input, whether an electrostatic pen, sonic pen, or light-pen must be engineered to optimize the quality of margin definition. The computer programs which the technician uses for data entry and computation of ventriculographic measurements must be convenient to use on a routine basis in a laboratory performing multiple studies per day. The method used for magnification correction must be continuously monitored.

  11. Hierarchical representation of shapes in visual cortex—from localized features to figural shape segregation

    PubMed Central

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. PMID:25157228

  12. Hierarchical representation of shapes in visual cortex-from localized features to figural shape segregation.

    PubMed

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  13. Segmentation and classification of brain images using firefly and hybrid kernel-based support vector machine

    NASA Astrophysics Data System (ADS)

    Selva Bhuvaneswari, K.; Geetha, P.

    2017-05-01

    Magnetic resonance imaging segmentation refers to a process of assigning labels to a set of pixels or multiple regions. It plays a major role in the field of biomedical applications, as it is widely used by radiologists to segment input medical images into meaningful regions. In recent years, various brain tumour detection techniques have been presented in the literature. The entire segmentation process of our proposed work comprises three phases: a threshold generation with dynamic modified region growing phase, a texture feature generation phase and a region merging phase. In the first phase, dynamic modified region growing is performed on the given input image by dynamically changing two thresholds, with the firefly optimisation algorithm used to optimise these two thresholds. After obtaining the region-growth segmented image using modified region growing, the edges are detected with an edge detection algorithm. In the second phase, texture features are extracted from the input image using an entropy-based operation. In the region merging phase, the results obtained from the texture feature generation phase are combined with the results of the dynamic modified region growing phase, and similar regions are merged using a distance comparison between regions. After identifying the abnormal tissues, classification is done by a hybrid kernel-based SVM (Support Vector Machine). The performance analysis of the proposed method is carried out by the k-fold cross-validation method. The proposed method is implemented in MATLAB with various images.

  14. Image processing tool for automatic feature recognition and quantification

    DOEpatents

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
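
    The edge-detection-then-Hough-transform pipeline mentioned above can be sketched with OpenCV as follows; the image is a synthetic placeholder, and the thresholds and Hough parameters are illustrative rather than values from the patented system.

    ```python
    import cv2
    import numpy as np

    # Synthetic test image with one line and one circle (stands in for a real input image)
    img = np.zeros((200, 200), dtype=np.uint8)
    cv2.line(img, (20, 30), (180, 60), color=255, thickness=2)
    cv2.circle(img, (100, 140), 30, color=255, thickness=2)

    # Step 1: edge detection
    edges = cv2.Canny(img, threshold1=50, threshold2=150)

    # Step 2: feature identification with Hough transforms
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=120, param2=30, minRadius=10, maxRadius=60)

    print("line segments found:", 0 if lines is None else len(lines))
    print("circles found:", 0 if circles is None else circles.shape[1])
    ```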

  15. Computing all hybridization networks for multiple binary phylogenetic input trees.

    PubMed

    Albrecht, Benjamin

    2015-07-30

    The computation of phylogenetic trees on the same set of species that are based on different orthologous genes can lead to incongruent trees. One possible explanation for this behavior are interspecific hybridization events recombining genes of different species. An important approach to analyze such events is the computation of hybridization networks. This work presents the first algorithm computing the hybridization number as well as a set of representative hybridization networks for multiple binary phylogenetic input trees on the same set of taxa. To improve its practical runtime, we show how this algorithm can be parallelized. Moreover, we demonstrate the efficiency of the software Hybroscale, containing an implementation of our algorithm, by comparing it to PIRNv2.0, which is so far the best available software computing the exact hybridization number for multiple binary phylogenetic trees on the same set of taxa. The algorithm is part of the software Hybroscale, which was developed specifically for the investigation of hybridization networks including their computation and visualization. Hybroscale is freely available(1) and runs on all three major operating systems. Our simulation study indicates that our approach is on average 100 times faster than PIRNv2.0. Moreover, we show how Hybroscale improves the interpretation of the reported hybridization networks by adding certain features to its graphical representation.

  16. Adaptive Water Sampling based on Unsupervised Clustering

    NASA Astrophysics Data System (ADS)

    Py, F.; Ryan, J.; Rajan, K.; Sherman, A.; Bird, L.; Fox, M.; Long, D.

    2007-12-01

    Autonomous Underwater Vehicles (AUVs) are widely used for oceanographic surveys, during which data is collected from a number of on-board sensors. Engineers and scientists at MBARI have extended this approach by developing a water sampler specially for the AUV, which can sample a specific patch of water at a specific time. The sampler, named the Gulper, captures 2 liters of seawater in less than 2 seconds on a 21" MBARI Odyssey AUV. Each sample chamber of the Gulper is filled with seawater through a one-way valve, which protrudes through the fairing of the AUV. This new kind of device raises a new problem: when to trigger the Gulper autonomously? For example, scientists interested in studying the mobilization and transport of shelf sediments would like to detect intermediate nepheloïd layers (INLs). To be able to detect this phenomenon we need to extract a model based on AUV sensors that can detect this feature in-situ. The formation of such a model is not obvious, as identification of this feature is generally based on data from multiple sensors. We have developed an unsupervised data clustering technique to extract the different features which will then be used for on-board classification and triggering of the Gulper. We use a three-phase approach: 1) use data from past missions to learn the different classes of data from sensor inputs; the clustering algorithm will then extract the set of features that can be distinguished within this large data set. 2) Scientists on shore then identify these features and point out which correspond to those of interest (e.g. nepheloïd layer, upwelling material etc). 3) Embed the corresponding classifier into the AUV control system to indicate the most probable feature of the water depending on sensory input. The triggering algorithm looks at this result and triggers the Gulper if the classifier indicates that we are within the feature of interest with a predetermined threshold of confidence. We have deployed this method of online classification and sampling based on AUV depth and HOBI Labs Hydroscat-2 sensor data. Using approximately 20,000 data samples, the clustering algorithm generated 14 clusters, with one identified as corresponding to a nepheloïd layer. We demonstrate that such a technique can be used to reliably and efficiently sample water based on multiple sources of data in real-time.

  17. 40 CFR 75.16 - Special provisions for monitoring emissions from common, bypass, and multiple stacks for SO2...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... emissions from common, bypass, and multiple stacks for SO2 emissions and heat input determinations. 75.16... emissions from common, bypass, and multiple stacks for SO2 emissions and heat input determinations. (a... by the Administrator, such that these emissions are not underestimated. (e) Heat input rate. The...

  18. 40 CFR 75.16 - Special provisions for monitoring emissions from common, bypass, and multiple stacks for SO 2...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... emissions from common, bypass, and multiple stacks for SO 2 emissions and heat input determinations. 75.16... emissions from common, bypass, and multiple stacks for SO 2 emissions and heat input determinations. (a... by the Administrator, such that these emissions are not underestimated. (e) Heat input rate. The...

  19. 40 CFR 75.16 - Special provisions for monitoring emissions from common, bypass, and multiple stacks for SO2...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... emissions from common, bypass, and multiple stacks for SO2 emissions and heat input determinations. 75.16... emissions from common, bypass, and multiple stacks for SO2 emissions and heat input determinations. (a... by the Administrator, such that these emissions are not underestimated. (e) Heat input rate. The...

  20. 40 CFR 75.16 - Special provisions for monitoring emissions from common, bypass, and multiple stacks for SO 2...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... emissions from common, bypass, and multiple stacks for SO 2 emissions and heat input determinations. 75.16... emissions from common, bypass, and multiple stacks for SO 2 emissions and heat input determinations. (a... by the Administrator, such that these emissions are not underestimated. (e) Heat input rate. The...

  1. CBM First-level Event Selector Input Interface Demonstrator

    NASA Astrophysics Data System (ADS)

    Hutter, Dirk; de Cuveland, Jan; Lindenstruth, Volker

    2017-10-01

    CBM is a heavy-ion experiment at the future FAIR facility in Darmstadt, Germany. Featuring self-triggered front-end electronics and free-streaming read-out, event selection will exclusively be done by the First Level Event Selector (FLES). Designed as an HPC cluster with several hundred nodes its task is an online analysis and selection of the physics data at a total input data rate exceeding 1 TByte/s. To allow efficient event selection, the FLES performs timeslice building, which combines the data from all given input links to self-contained, potentially overlapping processing intervals and distributes them to compute nodes. Partitioning the input data streams into specialized containers allows performing this task very efficiently. The FLES Input Interface defines the linkage between the FEE and the FLES data transport framework. A custom FPGA PCIe board, the FLES Interface Board (FLIB), is used to receive data via optical links and transfer them via DMA to the host’s memory. The current prototype of the FLIB features a Kintex-7 FPGA and provides up to eight 10 GBit/s optical links. A custom FPGA design has been developed for this board. DMA transfers and data structures are optimized for subsequent timeslice building. Index tables generated by the FPGA enable fast random access to the written data containers. In addition the DMA target buffers can directly serve as InfiniBand RDMA source buffers without copying the data. The usage of POSIX shared memory for these buffers allows data access from multiple processes. An accompanying HDL module has been developed to integrate the FLES link into the front-end FPGA designs. It implements the front-end logic interface as well as the link protocol. Prototypes of all Input Interface components have been implemented and integrated into the FLES test framework. This allows the implementation and evaluation of the foreseen CBM read-out chain.

  2. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so loses little information of the original feature space. RMEKLM then adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and needs less storage space, especially in testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; (3) this paper adopts Gauss Elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.
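
    The isomorphism argument above can be illustrated numerically. In the sketch below, an SVD is used as a stable stand-in for the Gauss Elimination step to obtain an orthonormal basis of the subspace spanned by the empirically kernel-mapped samples; the check at the end confirms that dot products are preserved in the reduced coordinates. The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Empirically kernel-mapped samples: 200 samples whose span has dimension <= 50.
X = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 80))

# Orthonormal basis of the subspace actually spanned by the mapped samples
# (SVD is used here as a stable stand-in for the Gauss Elimination step).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
rank = int((s > 1e-10 * s[0]).sum())
B = Vt[:rank]                              # (rank, 80) orthonormal basis vectors

Z = X @ B.T                                # reduced coordinates, shape (200, rank)

# Isomorphism check: all pairwise dot products are preserved in the reduced subspace.
print(np.allclose(X @ X.T, Z @ Z.T))       # True
```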

  3. Canonical multi-valued input Reed-Muller trees and forms

    NASA Technical Reports Server (NTRS)

    Perkowski, M. A.; Johnson, P. D.

    1991-01-01

    There has recently been increased interest in logic synthesis using EXOR gates. The paper introduces the fundamental concept of Orthogonal Expansion, which generalizes the ring form of the Shannon expansion to logic with multiple-valued (mv) inputs. Based on this concept we are able to define a family of canonical tree circuits. Such circuits can be considered for binary and multiple-valued input cases. They can be multi-level (trees and DAGs) or flattened to two-level AND-EXOR circuits. Input decoders similar to those used in Sum of Products (SOP) PLAs are used in realizations of multiple-valued input functions. In the case of binary logic, the family of flattened AND-EXOR circuits includes several forms discussed by Davio and Green. For the case of logic with multiple-valued inputs, the family of flattened mv AND-EXOR circuits includes three expansions known from the literature and two new expansions.
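
    For the binary special case mentioned above, the AND-EXOR (positive-polarity Reed-Muller) coefficients of a function can be read off its truth table with the standard butterfly transform. The sketch below covers only this binary case, not the paper's multiple-valued Orthogonal Expansion, and the example function is invented.

```python
def reed_muller_coefficients(truth_table):
    """AND-EXOR coefficients of a Boolean function given as a truth table,
    where truth_table[i] is f(x) for the input whose bits are the digits of i."""
    a = list(truth_table)
    n = len(a).bit_length() - 1
    assert len(a) == 1 << n, "truth table length must be a power of two"
    for i in range(n):                     # butterfly (binary Moebius) transform
        for j in range(len(a)):
            if j & (1 << i):
                a[j] ^= a[j ^ (1 << i)]    # XOR in the cofactor with bit i cleared
    return a                               # a[m] = 1 iff the product term with mask m appears

# Example: f(x2, x1, x0) = x0 XOR (x1 AND x2)
f = [(i & 1) ^ (((i >> 1) & 1) & ((i >> 2) & 1)) for i in range(8)]
print(reed_muller_coefficients(f))         # 1s at masks 0b001 (x0) and 0b110 (x1*x2)
```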

  4. Building intelligent communication systems for handicapped aphasiacs.

    PubMed

    Fu, Yu-Fen; Ho, Cheng-Seen

    2010-01-01

    This paper presents an intelligent system allowing handicapped aphasiacs to perform basic communication tasks. It has the following three key features: (1) A 6-sensor data glove measures the finger gestures of a patient in terms of the bending degrees of his fingers. (2) A finger language recognition subsystem recognizes language components from the finger gestures. It employs multiple regression analysis to automatically extract proper finger features so that the recognition model can be constructed quickly and correctly by a radial basis function neural network. (3) A coordinate-indexed virtual keyboard allows the users to directly access the letters on the keyboard at a practical speed. The system serves as a viable tool for natural and affordable communication for handicapped aphasiacs through continuous finger language input.

  5. Spatial attention improves the quality of population codes in human visual cortex.

    PubMed

    Saproo, Sameer; Serences, John T

    2010-08-01

    Selective attention enables sensory input from behaviorally relevant stimuli to be processed in greater detail, so that these stimuli can more accurately influence thoughts, actions, and future goals. Attention has been shown to modulate the spiking activity of single feature-selective neurons that encode basic stimulus properties (color, orientation, etc.). However, the combined output from many such neurons is required to form stable representations of relevant objects and little empirical work has formally investigated the relationship between attentional modulations on population responses and improvements in encoding precision. Here, we used functional MRI and voxel-based feature tuning functions to show that spatial attention induces a multiplicative scaling in orientation-selective population response profiles in early visual cortex. In turn, this multiplicative scaling correlates with an improvement in encoding precision, as evidenced by a concurrent increase in the mutual information between population responses and the orientation of attended stimuli. These data therefore demonstrate how multiplicative scaling of neural responses provides at least one mechanism by which spatial attention may improve the encoding precision of population codes. Increased encoding precision in early visual areas may then enhance the speed and accuracy of perceptual decisions computed by higher-order neural mechanisms.

  6. Classification of G-protein coupled receptors based on a rich generation of convolutional neural network, N-gram transformation and multiple sequence alignments.

    PubMed

    Li, Man; Ling, Cheng; Xu, Qi; Gao, Jingyang

    2018-02-01

    Sequence classification is crucial in predicting the function of newly discovered sequences. In recent years, prediction for the growing scale and diversity of sequences has relied heavily on machine-learning algorithms. To improve prediction accuracy, these algorithms must confront the key challenge of extracting valuable features. In this work, we propose a feature-enhanced protein classification approach that combines multiple sequence alignment algorithms, an N-gram probabilistic language model and a deep learning technique. The essence of the proposed method is that if each group of sequences can be represented by one feature sequence composed of homologous sites, then rebuilding that sequence should incur less loss when a more closely related sequence is added to the group. On this basis, predicting whether a query sequence belongs to a group of sequences reduces to calculating the probability that the new feature sequence evolved from the original one. The proposed work focuses on the hierarchical classification of G-protein Coupled Receptors (GPCRs), which begins by extracting the feature sequences from the multiple sequence alignment results of the GPCR sub-subfamilies. The N-gram model is then applied to construct the input vectors. Finally, these vectors are fed into a convolutional neural network to make a prediction. The experimental results show that the proposed method provides significant performance improvements. The classification error rate of the proposed method is reduced by at least 4.67% (family level I) and 5.75% (family level II) in comparison with current state-of-the-art methods. The implementation of the proposed work is freely available at: https://github.com/alanFchina/CNN .
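
    A much-simplified sketch of the N-gram feature construction is given below: character 3-gram counts feed a logistic-regression classifier, which stands in for the paper's convolutional neural network. The toy sequences and labels are invented, and the multiple sequence alignment step is not shown.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy sequences and sub-subfamily labels.
sequences = ["MKTIIALSYIFCLVFA", "MKAILVVLLYTFATAN", "GPLGSPEFRAGQWKKV", "GPLGSAEFRQGQWRKV"]
labels    = ["classA", "classA", "classB", "classB"]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 3)),   # overlapping character 3-grams
    LogisticRegression(max_iter=1000),                       # stand-in for the CNN classifier
)
model.fit(sequences, labels)
print(model.predict(["MKTILALSYIFCLVFA"]))
```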

  7. 40 CFR 75.82 - Monitoring of Hg mass emissions and heat input at common and multiple stacks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... heat input at common and multiple stacks. 75.82 Section 75.82 Protection of Environment ENVIRONMENTAL... Provisions § 75.82 Monitoring of Hg mass emissions and heat input at common and multiple stacks. (a) Unit... systems and perform the Hg emission testing described under § 75.81(b). If reporting of the unit heat...

  8. Predictive Modeling of Implantation Outcome in an In Vitro Fertilization Setting: An Application of Machine Learning Methods.

    PubMed

    Uyar, Asli; Bener, Ayse; Ciray, H Nadir

    2015-08-01

    Multiple embryo transfers in in vitro fertilization (IVF) treatment increase the number of successful pregnancies while elevating the risk of multiple gestations. IVF-associated multiple pregnancies exhibit significant financial, social, and medical implications. Clinicians need to decide the number of embryos to be transferred considering the tradeoff between successful outcomes and multiple pregnancies. To predict implantation outcome of individual embryos in an IVF cycle with the aim of providing decision support on the number of embryos transferred. Retrospective cohort study. Electronic health records of one of the largest IVF clinics in Turkey. The study data set included 2453 embryos transferred at day 2 or day 3 after intracytoplasmic sperm injection (ICSI). Each embryo was represented with 18 clinical features and a class label, +1 or -1, indicating positive and negative implantation outcomes, respectively. For each classifier tested, a model was developed using two-thirds of the data set, and prediction performance was evaluated on the remaining one-third of the samples using receiver operating characteristic (ROC) analysis. The training-testing procedure was repeated 10 times on randomly split (two-thirds to one-third) data. The relative predictive values of clinical input characteristics were assessed using information gain feature weighting and forward feature selection methods. The naïve Bayes model provided 80.4% accuracy, 63.7% sensitivity, and 17.6% false alarm rate in embryo-based implantation prediction. Multiple embryo implantations were predicted at a 63.8% sensitivity level. Predictions using the proposed model resulted in higher accuracy compared with expert judgment alone (on average, 75.7% and 60.1%, respectively). A machine learning-based decision support system would be useful in improving the success rates of IVF treatment. © The Author(s) 2014.
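
    The evaluation protocol described above (Gaussian naive Bayes, repeated two-thirds/one-third splits, ROC analysis) can be sketched as follows; the feature matrix and outcome below are synthetic stand-ins for the 18 clinical features and implantation labels.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.standard_normal((2453, 18))                        # placeholder embryo features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(2453) > 0.8).astype(int)

aucs = []
for seed in range(10):                                     # 10 random 2/3 : 1/3 splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=seed)
    clf = GaussianNB().fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print(f"mean AUC over 10 splits: {np.mean(aucs):.3f}")
```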

  9. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system, which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.

  10. Automated Detection of Stereotypical Motor Movements in Autism Spectrum Disorder Using Recurrence Quantification Analysis

    PubMed Central

    Großekathöfer, Ulf; Manyakov, Nikolay V.; Mihajlović, Vojkan; Pandina, Gahan; Skalkin, Andrew; Ness, Seth; Bangerter, Abigail; Goodwin, Matthew S.

    2017-01-01

    A number of recent studies using accelerometer features as input to machine learning classifiers show promising results for automatically detecting stereotypical motor movements (SMM) in individuals with Autism Spectrum Disorder (ASD). However, replicating these results across different types of accelerometers and their position on the body still remains a challenge. We introduce a new set of features in this domain based on recurrence plot and quantification analyses that are orientation invariant and able to capture non-linear dynamics of SMM. Applying these features to an existing published data set containing acceleration data, we achieve up to 9% average increase in accuracy compared to current state-of-the-art published results. Furthermore, we provide evidence that a single torso sensor can automatically detect multiple types of SMM in ASD, and that our approach allows recognition of SMM with high accuracy in individuals when using a person-independent classifier. PMID:28261082
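
    A minimal sketch of recurrence-based features for a single accelerometer window is shown below, assuming a simple threshold recurrence plot and two standard RQA measures (recurrence rate and determinism). The threshold, minimum line length and signal are illustrative, and the orientation-invariance step described by the authors is not reproduced.

```python
import numpy as np

def rqa_features(signal, eps=0.5, min_line=2):
    """Recurrence rate and determinism of a 1-D signal window."""
    d = np.abs(signal[:, None] - signal[None, :])      # pairwise distances
    R = (d <= eps).astype(int)                          # recurrence plot
    recurrence_rate = R.mean()

    # Determinism: fraction of recurrent points lying on diagonal lines >= min_line.
    n = len(signal)
    diag_points = 0
    for k in range(-(n - 1), n):
        line = np.diagonal(R, offset=k)
        run = 0
        for v in list(line) + [0]:                      # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= min_line:
                    diag_points += run
                run = 0
    determinism = diag_points / max(R.sum(), 1)
    return recurrence_rate, determinism

window = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * np.random.randn(128)
print(rqa_features(window))
```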

  11. Automated Detection of Stereotypical Motor Movements in Autism Spectrum Disorder Using Recurrence Quantification Analysis.

    PubMed

    Großekathöfer, Ulf; Manyakov, Nikolay V; Mihajlović, Vojkan; Pandina, Gahan; Skalkin, Andrew; Ness, Seth; Bangerter, Abigail; Goodwin, Matthew S

    2017-01-01

    A number of recent studies using accelerometer features as input to machine learning classifiers show promising results for automatically detecting stereotypical motor movements (SMM) in individuals with Autism Spectrum Disorder (ASD). However, replicating these results across different types of accelerometers and their position on the body still remains a challenge. We introduce a new set of features in this domain based on recurrence plot and quantification analyses that are orientation invariant and able to capture non-linear dynamics of SMM. Applying these features to an existing published data set containing acceleration data, we achieve up to 9% average increase in accuracy compared to current state-of-the-art published results. Furthermore, we provide evidence that a single torso sensor can automatically detect multiple types of SMM in ASD, and that our approach allows recognition of SMM with high accuracy in individuals when using a person-independent classifier.

  12. A neural network ActiveX based integrated image processing environment.

    PubMed

    Ciuca, I; Jitaru, E; Alaicescu, M; Moisil, I

    2000-01-01

    The paper outlines an integrated image processing environment that uses neural networks ActiveX technology for object recognition and classification. The image processing environment which is Windows based, encapsulates a Multiple-Document Interface (MDI) and is menu driven. Object (shape) parameter extraction is focused on features that are invariant in terms of translation, rotation and scale transformations. The neural network models that can be incorporated as ActiveX components into the environment allow both clustering and classification of objects from the analysed image. Mapping neural networks perform an input sensitivity analysis on the extracted feature measurements and thus facilitate the removal of irrelevant features and improvements in the degree of generalisation. The program has been used to evaluate the dimensions of the hydrocephalus in a study for calculating the Evans index and the angle of the frontal horns of the ventricular system modifications.

  13. Extended H2 synthesis for multiple degree-of-freedom controllers

    NASA Technical Reports Server (NTRS)

    Hampton, R. David; Knospe, Carl R.

    1992-01-01

    H2 synthesis techniques are developed for a general multiple-input-multiple-output (MIMO) system subject to both stochastic and deterministic disturbances. The H2 synthesis is extended by incorporating power-spectral-density information about anticipated disturbances into the controller-design process, as well as by frequency weighting of generalized coordinates and control inputs. The methodology is applied to a simple single-input-multiple-output (SIMO) problem, analogous to the type of vibration isolation problem anticipated in microgravity research experiments.

  14. A two-dimensional graphing program for the Tektronix 4050-series graphics computers

    USGS Publications Warehouse

    Kipp, K.L.

    1983-01-01

    A refined, two-dimensional graph-plotting program was developed for use on Tektronix 4050-series graphics computers. Important features of this program include: any combination of logarithmic and linear axes, optional automatic scaling and numbering of the axes, multiple-curve plots, character or drawn symbol-point plotting, optional cartridge-tape data input and plot-format storage, optional spline fitting for smooth curves, and built-in data-editing options. The program is run while the Tektronix is not connected to any large auxiliary computer, although data from files on an auxiliary computer easily can be transferred to data-cartridge for later plotting. The user is led through the plot-construction process by a series of questions and requests for data input. Five example plots are presented to illustrate program capability and the sequence of program operation. (USGS)

  15. On-Chip Power-Combining for High-Power Schottky Diode-Based Frequency Multipliers

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Goutam; Mehdi, Imran; Schlecht, Erich T.; Lee, Choonsup; Siles, Jose V.; Maestrini, Alain E.; Thomas, Bertrand; Jung, Cecile D.

    2013-01-01

    A 1.6-THz power-combined Schottky frequency tripler was designed to handle approximately 30 mW of input power. The design of Schottky-based triplers at this frequency range is mainly constrained by the shrinkage of the waveguide dimensions with frequency and the minimum diode mesa sizes, which limits the maximum number of diodes that can be placed on the chip to no more than two. Hence, multiple-chip power-combined schemes become necessary to increase the power-handling capabilities of high-frequency multipliers. The design presented here overcomes these difficulties by performing the power-combining directly on-chip. Four E-probes are located in a single input waveguide in order to equally pump four multiplying structures (featuring two diodes each). The produced output power is then recombined at the output using the same concept.

  16. A High Efficiency Multiple-Anode 260-340 GHz Frequency Tripler

    NASA Technical Reports Server (NTRS)

    Maestrini, Alain; Tripon-Canseliet, Charlotte; Ward, John S.; Gill, John J.; Mehdi, Imran

    2006-01-01

    We report on the fabrication at the Jet Propulsion Laboratory of a fixed-tuned split-block waveguide balanced frequency tripler working in the 260-340 GHz band. This tripler will be the first stage of a x3x3x3 multiplier chain to 2.7 THz (the last stages of which are being fabricated at JPL) and is therefore optimized for high-power operation. The multiplier features six GaAs Schottky planar diodes in a balanced configuration integrated on a GaAs membrane. Special attention was paid to splitting the input power as evenly as possible among the diodes in order to ensure that no diode is overdriven. Preliminary RF tests indicate that the multiplier covers the expected bandwidth and that the efficiency is in the range 1.5-7.5% with 100 mW of input power.

  17. Multiple Input Design for Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene

    2003-01-01

    A method for designing multiple inputs for real-time dynamic system identification in the frequency domain was developed and demonstrated. The designed inputs are mutually orthogonal in both the time and frequency domains, with reduced peak factors to provide good information content for relatively small amplitude excursions. The inputs are designed for selected frequency ranges, and therefore do not require a priori models. The experiment design approach was applied to identify linear dynamic models for the F-15 ACTIVE aircraft, which has multiple control effectors.
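
    One common way to realize such mutually orthogonal inputs is to assign each effector a disjoint set of harmonics of a shared base period and to choose phases that keep the peak factor low. The sketch below uses Schroeder phases for that purpose; the effector names, harmonic sets and period are illustrative and are not the values used for the F-15 ACTIVE.

```python
import numpy as np

T, fs = 10.0, 100.0                     # excitation period [s] and sample rate [Hz]
t = np.arange(0, T, 1 / fs)
harmonics = {                           # disjoint harmonic indices per control effector
    "stabilator": [2, 5, 8, 11],
    "canard":     [3, 6, 9, 12],
    "aileron":    [4, 7, 10, 13],
}

def multisine(ks):
    """Equal-amplitude multisine with Schroeder phases on harmonics ks of 1/T."""
    u = np.zeros_like(t)
    for j, k in enumerate(ks):
        phase = -np.pi * j * (j + 1) / len(ks)          # Schroeder phase schedule
        u += np.cos(2 * np.pi * k * t / T + phase)
    return u / np.max(np.abs(u))                        # normalize amplitude

inputs = {name: multisine(ks) for name, ks in harmonics.items()}
# Orthogonality check: distinct harmonics give (near-)zero cross-correlation over T.
print(np.dot(inputs["stabilator"], inputs["canard"]) / len(t))
```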

  18. Impaired visual search in rats reveals cholinergic contributions to feature binding in visuospatial attention.

    PubMed

    Botly, Leigh C P; De Rosa, Eve

    2012-10-01

    The visual search task established the feature integration theory of attention in humans and measures visuospatial attentional contributions to feature binding. We recently demonstrated that the neuromodulator acetylcholine (ACh), from the nucleus basalis magnocellularis (NBM), supports the attentional processes required for feature binding using a rat digging-based task. Additional research has demonstrated cholinergic contributions from the NBM to visuospatial attention in rats. Here, we combined these lines of evidence and employed visual search in rats to examine whether cortical cholinergic input supports visuospatial attention specifically for feature binding. We trained 18 male Long-Evans rats to perform visual search using touch screen-equipped operant chambers. Sessions comprised Feature Search (no feature binding required) and Conjunctive Search (feature binding required) trials using multiple stimulus set sizes. Following acquisition of visual search, 8 rats received bilateral NBM lesions using 192 IgG-saporin to selectively reduce cholinergic afferentation of the neocortex, which we hypothesized would selectively disrupt the visuospatial attentional processes needed for efficient conjunctive visual search. As expected, relative to sham-lesioned rats, ACh-NBM-lesioned rats took significantly longer to locate the target stimulus on Conjunctive Search, but not Feature Search trials, thus demonstrating that cholinergic contributions to visuospatial attention are important for feature binding in rats.

  19. Optimal input selection for neural machine interfaces predicting multiple non-explicit outputs.

    PubMed

    Krepkovich, Eileen T; Perreault, Eric J

    2008-01-01

    This study implemented a novel algorithm that optimally selects inputs for neural machine interface (NMI) devices intended to control multiple outputs and evaluated its performance on systems lacking explicit output. NMIs often incorporate signals from multiple physiological sources and provide predictions for multidimensional control, leading to multiple-input multiple-output systems. Further, NMIs often are used with subjects who have motor disabilities and thus lack explicit motor outputs. Our algorithm was tested on simulated multiple-input multiple-output systems and on electromyogram and kinematic data collected from healthy subjects performing arm reaches. Effects of output noise in simulated systems indicated that the algorithm could be useful for systems with poor estimates of the output states, as is true for systems lacking explicit motor output. To test efficacy on physiological data, selection was performed using inputs from one subject and outputs from a different subject. Selection was effective for these cases, again indicating that this algorithm will be useful for predictions where there is no motor output, as often is the case for disabled subjects. Further, prediction results generalized for different movement types not used for estimation. These results demonstrate the efficacy of this algorithm for the development of neural machine interfaces.

  20. The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex

    PubMed Central

    Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A.; Grill-Spector, Kalanit; Rossion, Bruno

    2016-01-01

    Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. SIGNIFICANCE STATEMENT Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. PMID:27511014

  1. Feature Extraction of Electronic Nose Signals Using QPSO-Based Multiple KFDA Signal Processing

    PubMed Central

    Wen, Tailai; Huang, Daoyu; Lu, Kun; Deng, Changjian; Zeng, Tanyue; Yu, Song; He, Zhiyi

    2018-01-01

    The aim of this research was to enhance the classification accuracy of an electronic nose (E-nose) in different detection applications. During the learning process of the E-nose to predict the types of different odors, the prediction accuracy was not satisfactory because the raw features extracted from the sensors’ responses were used as the input of a classifier without any feature extraction processing. Therefore, in order to obtain more useful information and improve the E-nose’s classification accuracy, a Weighted Kernels Fisher Discriminant Analysis (WKFDA) combined with Quantum-behaved Particle Swarm Optimization (QPSO), i.e., QWKFDA, is presented in this paper to reprocess the original feature matrix. In addition, we have compared the proposed method with several existing ones, including Principal Component Analysis (PCA), Locality Preserving Projections (LPP), Fisher Discriminant Analysis (FDA) and Kernels Fisher Discriminant Analysis (KFDA). Experimental results show that QWKFDA is an effective feature extraction method for the E-nose in predicting the types of wound infection and inflammable gases, achieving much higher classification accuracy than the comparison methods. PMID:29382146

  2. Feature Extraction of Electronic Nose Signals Using QPSO-Based Multiple KFDA Signal Processing.

    PubMed

    Wen, Tailai; Yan, Jia; Huang, Daoyu; Lu, Kun; Deng, Changjian; Zeng, Tanyue; Yu, Song; He, Zhiyi

    2018-01-29

    The aim of this research was to enhance the classification accuracy of an electronic nose (E-nose) in different detection applications. During the learning process of the E-nose to predict the types of different odors, the prediction accuracy was not satisfactory because the raw features extracted from the sensors' responses were used as the input of a classifier without any feature extraction processing. Therefore, in order to obtain more useful information and improve the E-nose's classification accuracy, a Weighted Kernels Fisher Discriminant Analysis (WKFDA) combined with Quantum-behaved Particle Swarm Optimization (QPSO), i.e., QWKFDA, is presented in this paper to reprocess the original feature matrix. In addition, we have compared the proposed method with several existing ones, including Principal Component Analysis (PCA), Locality Preserving Projections (LPP), Fisher Discriminant Analysis (FDA) and Kernels Fisher Discriminant Analysis (KFDA). Experimental results show that QWKFDA is an effective feature extraction method for the E-nose in predicting the types of wound infection and inflammable gases, achieving much higher classification accuracy than the comparison methods.

  3. Distance error correction for time-of-flight cameras

    NASA Astrophysics Data System (ADS)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited by properties of the scene and by systematic errors. These errors can accumulate to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows the acquisition of a large number of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
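
    The per-pixel correction idea can be sketched with a scikit-learn random forest regressor trained to predict the distance error from pixel-wise features; the feature names and the synthetic error model below are placeholders, not the paper's tailored feature vector.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Placeholder training data: [measured distance, amplitude, local intensity gradient].
features = rng.uniform([0.5, 0.0, 0.0], [5.0, 1.0, 1.0], size=(50000, 3))
true_error = 0.03 * features[:, 0] - 0.05 * features[:, 1] + 0.01 * rng.standard_normal(50000)

forest = RandomForestRegressor(n_estimators=50, max_depth=12, n_jobs=-1, random_state=0)
forest.fit(features, true_error)

def corrected_distance(measured, amplitude, gradient):
    """Subtract the predicted systematic error from the raw ToF measurement."""
    return measured - forest.predict([[measured, amplitude, gradient]])[0]

print(corrected_distance(2.0, 0.4, 0.2))
```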

  4. Decoding visual object categories from temporal correlations of ECoG signals.

    PubMed

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

    How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While the decoding accuracy using power or phase alone was significantly better than chance, correlations alone or combined with power outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
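
    The correlation-feature construction can be sketched as follows: per trial, the electrode-by-electrode correlation matrix of the ECoG time series is computed and its upper triangle is used as the input vector of a linear classifier. The data below are synthetic, so the cross-validated accuracy sits at chance; with real category-specific signals it would exceed it.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_samples = 120, 16, 300

def correlation_features(trial):
    """Vectorize the upper triangle of the inter-electrode correlation matrix."""
    C = np.corrcoef(trial)                       # (n_electrodes, n_electrodes)
    iu = np.triu_indices_from(C, k=1)
    return C[iu]

trials = rng.standard_normal((n_trials, n_electrodes, n_samples))
labels = rng.integers(0, 2, n_trials)            # two visual object categories
X = np.array([correlation_features(tr) for tr in trials])

print(cross_val_score(LinearSVC(max_iter=5000), X, labels, cv=5).mean())
```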

  5. Fuzzy logic modeling of the resistivity parameter and topography features for aquifer assessment in hydrogeological investigation of a crystalline basement complex

    NASA Astrophysics Data System (ADS)

    Adabanija, M. A.; Omidiora, E. O.; Olayinka, A. I.

    2008-05-01

    A linguistic fuzzy logic system (LFLS)-based expert system model has been developed for the assessment of aquifers for the location of productive water boreholes in a crystalline basement complex. The model design employs a multiple input/single output (MISO) approach with geoelectrical parameters and topographic features as input variables and a control crisp value as the output. Application of the method to data acquired in Khondalitic terrain, a basement complex in Vizianagaram District, south India, shows that potential groundwater resource zones with control output values in the range 0.3295-0.3484 have a yield greater than 6,000 liters per hour (LPH), while the range 0.3174-0.3226 gives a yield less than 4,000 LPH. Validation of the control crisp value using data acquired from the Oban Massif, a basement complex in southeastern Nigeria, indicates a yield less than 3,000 LPH for control output values in the range 0.2938-0.3065. This validation corroborates the ability of the control output values to predict yield, thereby confirming the applicability of the linguistic fuzzy logic system for siting productive water boreholes in a basement complex.

  6. Unsupervised Metric Fusion Over Multiview Data by Graph Random Walk-Based Cross-View Diffusion.

    PubMed

    Wang, Yang; Zhang, Wenjie; Wu, Lin; Lin, Xuemin; Zhao, Xiang

    2017-01-01

    Learning an ideal metric is crucial to many tasks in computer vision. Diverse feature representations can address this problem from different aspects: visual data objects described by multiple features can be decomposed into multiple views, which often provide complementary information. In this paper, we propose a cross-view fusion algorithm that leads to a similarity metric for multiview data by systematically fusing multiple similarity measures. Unlike existing paradigms, we focus on learning a distance measure by exploiting a graph structure of data samples, where an input similarity matrix can be improved through the propagation of a graph random walk. In particular, we construct multiple graphs, with each one corresponding to an individual view, and a cross-view fusion approach based on graph random walk is presented to derive an optimal distance measure by fusing multiple metrics. Our method is scalable to a large amount of data by enforcing sparsity through an anchor graph representation. To adaptively control the effects of different views, we dynamically learn view-specific coefficients, which are leveraged in the graph random walk to balance the multiple views. However, such a strategy may lead to an over-smoothed similarity metric in which affinities between dissimilar samples are enlarged by excessive cross-view fusion. Thus, we develop a heuristic approach to controlling the number of iterations in the fusion process in order to avoid over-smoothing. Extensive experiments conducted on real-world data sets validate the effectiveness and efficiency of our approach.

  7. Prediction of Cognitive States During Flight Simulation Using Multimodal Psychophysiological Sensing

    NASA Technical Reports Server (NTRS)

    Harrivel, Angela R.; Stephens, Chad L.; Milletich, Robert J.; Heinich, Christina M.; Last, Mary Carolyn; Napoli, Nicholas J.; Abraham, Nijo A.; Prinzel, Lawrence J.; Motter, Mark A.; Pope, Alan T.

    2017-01-01

    The Commercial Aviation Safety Team found the majority of recent international commercial aviation accidents attributable to loss of control inflight involved flight crew loss of airplane state awareness (ASA), and distraction was involved in all of them. Research on attention-related human performance limiting states (AHPLS) such as channelized attention, diverted attention, startle/surprise, and confirmation bias, has been recommended in a Safety Enhancement (SE) entitled "Training for Attention Management." To accomplish the detection of such cognitive and psychophysiological states, a broad suite of sensors was implemented to simultaneously measure their physiological markers during a high fidelity flight simulation human subject study. Twenty-four pilot participants were asked to wear the sensors while they performed benchmark tasks and motion-based flight scenarios designed to induce AHPLS. Pattern classification was employed to predict the occurrence of AHPLS during flight simulation also designed to induce those states. Classifier training data were collected during performance of the benchmark tasks. Multimodal classification was performed, using pre-processed electroencephalography, galvanic skin response, electrocardiogram, and respiration signals as input features. A combination of one, some or all modalities were used. Extreme gradient boosting, random forest and two support vector machine classifiers were implemented. The best accuracy for each modality-classifier combination is reported. Results using a select set of features and using the full set of available features are presented. Further, results are presented for training one classifier with the combined features and for training multiple classifiers with features from each modality separately. Using the select set of features and combined training, multistate prediction accuracy averaged 0.64 +/- 0.14 across thirteen participants and was significantly higher than that for the separate training case. These results support the goal of demonstrating simultaneous real-time classification of multiple states using multiple sensing modalities in high fidelity flight simulators. This detection is intended to support and inform training methods under development to mitigate the loss of ASA and thus reduce accidents and incidents.

  8. Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, David O.

    2007-01-01

    A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transients, and deterministic waveforms.

  9. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
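
    For reference, the multiple coherence between a set of inputs x and one output y can be estimated directly from cross-spectral densities using the textbook formula gamma^2(f) = Sxy(f)^H Sxx(f)^{-1} Sxy(f) / Syy(f). The sketch below takes that route rather than the paper's Cholesky/SVD formulation, with synthetic two-input data.

```python
import numpy as np
from scipy.signal import csd

fs, n = 1000.0, 20000
rng = np.random.default_rng(0)
x = rng.standard_normal((2, n))                               # two input records
y = 0.8 * x[0] + 0.5 * x[1] + 0.3 * rng.standard_normal(n)    # output plus noise

nperseg = 1024
f, Syy = csd(y, y, fs=fs, nperseg=nperseg)
Sxx = np.array([[csd(x[i], x[j], fs=fs, nperseg=nperseg)[1] for j in range(2)] for i in range(2)])
Sxy = np.array([csd(x[i], y, fs=fs, nperseg=nperseg)[1] for i in range(2)])

gamma2 = np.empty(len(f))
for k in range(len(f)):
    Sxx_k = Sxx[:, :, k]                                      # 2x2 input CSD matrix at frequency k
    Sxy_k = Sxy[:, k]                                          # input-output cross spectra
    gamma2[k] = np.real(Sxy_k.conj() @ np.linalg.solve(Sxx_k, Sxy_k) / Syy[k])

print(gamma2[1:10].round(2))    # near one where the inputs explain most of the output power
```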

  10. A Deep Ensemble Learning Method for Monaural Speech Separation.

    PubMed

    Zhang, Xiao-Lei; Wang, DeLiang

    2016-03-01

    Monaural speech separation is a fundamental problem in robust speech processing. Recently, deep neural network (DNN)-based speech separation methods, which predict either clean speech or an ideal time-frequency mask, have demonstrated remarkable performance improvement. However, a single DNN with a given window length does not leverage contextual information sufficiently, and the differences between the two optimization objectives are not well understood. In this paper, we propose a deep ensemble method, named multicontext networks, to address monaural speech separation. The first multicontext network averages the outputs of multiple DNNs whose inputs employ different window lengths. The second multicontext network is a stack of multiple DNNs. Each DNN in a module of the stack takes the concatenation of original acoustic features and expansion of the soft output of the lower module as its input, and predicts the ratio mask of the target speaker; the DNNs in the same module employ different contexts. We have conducted extensive experiments with three speech corpora. The results demonstrate the effectiveness of the proposed method. We have also compared the two optimization objectives systematically and found that predicting the ideal time-frequency mask is more efficient in utilizing clean training speech, while predicting clean speech is less sensitive to SNR variations.

  11. Multi-output differential technologies

    NASA Astrophysics Data System (ADS)

    Bidare, Srinivas R.

    1997-01-01

    A differential is a very old and proven mechanical device that allows a single input to be split into two outputs having equal torque irrespective of the output speeds. A standard differential is capable of providing only two outputs from a single input. A recently patented multi-output differential technology known as `Plural-Output Differential' allows a single input to be split into many outputs. This new technology is the outcome of a systematic study of complex gear trains (Bidare 1992). The unique feature of a differential (equal torque at different speeds) can be applied to simplify the construction and operation of many complex mechanical devices that require equal torques or forces at multiple outputs. It is now possible to design a mechanical hand with three or more fingers with equal torque. Since these fingers are powered via a differential, they are `mechanically intelligent'. A prototype device is operational and has been used to demonstrate the utility and flexibility of the design. In this paper we review two devices that utilize the new technology, resulting in increased performance and robustness with reduced complexity and cost.

  12. Development of Web tools to predict axillary lymph node metastasis and pathological response to neoadjuvant chemotherapy in breast cancer patients.

    PubMed

    Sugimoto, Masahiro; Takada, Masahiro; Toi, Masakazu

    2014-12-09

    Nomograms are a standard computational tool to predict the likelihood of an outcome using multiple available patient features. We have developed a more powerful data mining methodology to predict axillary lymph node (AxLN) metastasis and response to neoadjuvant chemotherapy (NAC) in primary breast cancer patients, and we have developed websites for using these tools. The tools calculate the probability of AxLN metastasis (AxLN model) and of pathological complete response to NAC (NAC model). As the calculation algorithm, we employed a decision tree-based prediction model known as the alternative decision tree (ADTree), an analogous development of if-then type decision trees. An ensemble technique was used to combine multiple ADTree predictions, resulting in higher generalization ability and robustness against missing values. The AxLN model was developed with training datasets (n=148) and test datasets (n=143), and validated using an independent cohort (n=174), yielding an area under the receiver operating characteristic curve (AUC) of 0.768. The NAC model was developed and validated with n=150 and n=173 datasets from a randomized controlled trial, yielding an AUC of 0.787. The AxLN and NAC models require users to input up to 17 and 16 variables, respectively. These include pathological features, such as human epidermal growth factor receptor 2 (HER2) status, and imaging findings. Each input variable has an "unknown" option to facilitate prediction for cases with missing values. The websites developed facilitate the use of these tools and serve as a database for accumulating new datasets.
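
    A rough sketch of the prediction-tool idea is given below, with scikit-learn's histogram gradient boosting (which accepts NaN inputs natively in recent versions) standing in for the ADTree ensemble and its handling of "unknown" values; the features, outcome and sample size are synthetic.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 17))                         # up to 17 clinical variables
y = (X[:, 0] - 0.7 * X[:, 3] + rng.standard_normal(600) > 0).astype(int)
X[rng.random(X.shape) < 0.15] = np.nan                     # "unknown" entries stay as NaN

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=1)
model = HistGradientBoostingClassifier(max_iter=200).fit(X_tr, y_tr)   # NaN-aware splits
print(round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```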

  13. A trainable decisions-in decision-out (DEI-DEO) fusion system

    NASA Astrophysics Data System (ADS)

    Dasarathy, Belur V.

    1998-03-01

    Most of the decision fusion systems proposed hitherto in the literature for multiple data source (sensor) environments operate on the basis of pre-defined fusion logic, be they crisp (deterministic), probabilistic, or fuzzy in nature, with no specific learning phase. The fusion systems that are trainable, i.e., ones that have a learning phase, mostly operate in the features-in-decision-out mode, which essentially reduces the fusion process functionally to a pattern classification task in the joint feature space. In this study, a trainable decisions-in-decision-out fusion system is described which estimates a fuzzy membership distribution spread across the different decision choices based on the performance of the different decision processors (sensors) corresponding to each training sample (object) which is associated with a specific ground truth (true decision). Based on a multi-decision space histogram analysis of the performance of the different processors over the entire training data set, a look-up table associating each cell of the histogram with a specific true decision is generated which forms the basis for the operational phase. In the operational phase, for each set of decision inputs, a pointer to the look-up table learnt previously is generated from which a fused decision is derived. This methodology, although primarily designed for fusing crisp decisions from the multiple decision sources, can be adapted for fusion of fuzzy decisions as well if such are the inputs from these sources. Examples, which illustrate the benefits and limitations of the crisp and fuzzy versions of the trainable fusion systems, are also included.

  14. In vitro design of a novel lytic bacteriophage cocktail with therapeutic potential against organisms causing diabetic foot infections.

    PubMed

    Mendes, João J; Leandro, Clara; Mottola, Carla; Barbosa, Raquel; Silva, Filipa A; Oliveira, Manuela; Vilela, Cristina L; Melo-Cristino, José; Górski, Andrzej; Pimentel, Madalena; São-José, Carlos; Cavaco-Silva, Patrícia; Garcia, Miguel

    2014-08-01

    In patients with diabetes mellitus, foot infections pose a significant risk. These are complex infections commonly caused by Staphylococcus aureus, Pseudomonas aeruginosa and Acinetobacter baumannii, all of which are potentially susceptible to bacteriophages. Here, we characterized five bacteriophages that we had determined previously to have antimicrobial and wound-healing potential in chronic S. aureus, P. aeruginosa and A. baumannii infections. Morphological and genetic features indicated that the bacteriophages were lytic members of the family Myoviridae or Podoviridae and did not harbour any known bacterial virulence genes. Combinations of the bacteriophages had broad host ranges for the different target bacterial species. The activity of the bacteriophages against planktonic cells revealed effective, early killing at 4 h, followed by bacterial regrowth to pre-treatment levels by 24 h. Using metabolic activity as a measure of cell viability within established biofilms, we found significant cell impairment following bacteriophage exposure. Repeated treatment every 4 h caused a further decrease in cell activity. The greatest effects on both planktonic and biofilm cells occurred at a bacteriophage : bacterium input multiplicity of 10. These studies on both planktonic cells and established biofilms allowed us to better evaluate the effects of a high input multiplicity and a multiple-dose treatment protocol, and the findings support further clinical development of bacteriophage therapy. © 2014 The Authors.

  15. A nonlinear control method based on ANFIS and multiple models for a class of SISO nonlinear systems and its application.

    PubMed

    Zhang, Yajun; Chai, Tianyou; Wang, Hong

    2011-11-01

    This paper presents a novel nonlinear control strategy for a class of uncertain single-input and single-output discrete-time nonlinear systems with unstable zero-dynamics. The proposed method combines adaptive-network-based fuzzy inference system (ANFIS) with multiple models, where a linear robust controller, an ANFIS-based nonlinear controller and a switching mechanism are integrated using multiple models technique. It has been shown that the linear controller can ensure the boundedness of the input and output signals and the nonlinear controller can improve the dynamic performance of the closed loop system. Moreover, it has also been shown that the use of the switching mechanism can simultaneously guarantee the closed loop stability and improve its performance. As a result, the controller has the following three outstanding features compared with existing control strategies. First, this method relaxes the assumption of commonly-used uniform boundedness on the unmodeled dynamics and thus enhances its applicability. Second, since ANFIS is used to estimate and compensate the effect caused by the unmodeled dynamics, the convergence rate of neural network learning has been increased. Third, a "one-to-one mapping" technique is adapted to guarantee the universal approximation property of ANFIS. The proposed controller is applied to a numerical example and a pulverizing process of an alumina sintering system, respectively, where its effectiveness has been justified.

  16. Probabilistic switching circuits in DNA

    PubMed Central

    Wilhelm, Daniel; Bruck, Jehoshua

    2018-01-01

    A natural feature of molecular systems is their inherent stochastic behavior. A fundamental challenge related to the programming of molecular information processing systems is to develop a circuit architecture that controls the stochastic states of individual molecular events. Here we present a systematic implementation of probabilistic switching circuits, using DNA strand displacement reactions. Exploiting the intrinsic stochasticity of molecular interactions, we developed a simple, unbiased DNA switch: An input signal strand binds to the switch and releases an output signal strand with probability one-half. Using this unbiased switch as a molecular building block, we designed DNA circuits that convert an input signal to an output signal with any desired probability. Further, this probability can be switched between 2n different values by simply varying the presence or absence of n distinct DNA molecules. We demonstrated several DNA circuits that have multiple layers and feedback, including a circuit that converts an input strand to an output strand with eight different probabilities, controlled by the combination of three DNA molecules. These circuits combine the advantages of digital and analog computation: They allow a small number of distinct input molecules to control a diverse signal range of output molecules, while keeping the inputs robust to noise and the outputs at precise values. Moreover, arbitrarily complex circuit behaviors can be implemented with just a single type of molecular building block. PMID:29339484
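
    The probability-programming idea can be imitated abstractly in a few lines: an unbiased switch yields a fair bit, and which of the n control species are present selects one of 2^n output probabilities. The Monte Carlo sketch below is purely illustrative and does not model the strand-displacement chemistry.

```python
import random

def unbiased_switch():
    """One molecular switch: routes the signal to output A or B with probability one-half."""
    return random.random() < 0.5

def circuit_output(controls, trials=100000):
    """Estimate the probability of releasing the final output strand.

    Each control species, when present, contributes one binary digit, so the
    target probability is the binary fraction encoded by `controls` over 2^n.
    """
    n = len(controls)
    threshold = sum(c << k for k, c in enumerate(controls))
    hits = 0
    for _ in range(trials):
        bits = sum(unbiased_switch() << k for k in range(n))   # n fair bits from n switches
        hits += bits < threshold
    return hits / trials

print(circuit_output([1, 0, 1]))   # controls encode 5/8 = 0.625
```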

  17. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include Pearson and Spearman correlation, sample and rank regression, analysis of variance, the Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to the sensitivity of important inputs compared with the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
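
    For orientation, a first-order variance-based ("Sobol") index can be estimated with a simple pick-and-freeze scheme, as sketched below for a toy model with an interaction term; the model, input distributions and sample size are illustrative, not the SHEDS testbed.

```python
import numpy as np

def toy_model(x):
    """Toy nonlinear model with an interaction term (x0*x2)."""
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + x[:, 0] * x[:, 2]

def first_order_indices(model, n_inputs, n_samples=200000, seed=0):
    """Saltelli-style pick-and-freeze estimate of first-order Sobol indices."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # vary only input i relative to sample A
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S

print(first_order_indices(toy_model, 3).round(2))   # main-effect contributions of x0, x1, x2
```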

  18. Optimal Implementations for Reliable Circadian Clocks

    NASA Astrophysics Data System (ADS)

    Hasegawa, Yoshihiko; Arita, Masanori

    2014-09-01

    Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.
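
    A minimal phase-model sketch of the ideas above (the response curve, dead-zone placement, and parameter values are arbitrary choices, not the optimal implementation derived in the article) shows a non-24-hour oscillator entraining to a 24-hour light-dark cycle through a phase-response curve that contains a dead zone.

        import numpy as np

        def prc(phi):
            # Illustrative phase-response curve: delays early in subjective
            # night, advances late in the night, and a dead zone (no response)
            # during subjective day.
            if 0.25 <= (phi % 1.0) < 0.55:
                return 0.0
            return -0.05 * np.sin(2 * np.pi * phi)

        def simulate(days=30, period_h=24.5, dt_h=0.1):
            # Free-running period differs from 24 h; light acts through the PRC.
            phi, t = 0.0, 0.0
            while t < days * 24:
                light = 1.0 if (t % 24) < 12 else 0.0   # 12 h light : 12 h dark
                phi += dt_h / period_h + prc(phi) * light * dt_h
                t += dt_h
            return phi % 1.0

        print(f"phase at the end of the simulation: {simulate():.3f}")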

  19. Human-computer interaction for alert warning and attention allocation systems of the multimodal watchstation

    NASA Astrophysics Data System (ADS)

    Obermayer, Richard W.; Nugent, William A.

    2000-11-01

    The SPAWAR Systems Center San Diego is currently developing an advanced Multi-Modal Watchstation (MMWS); design concepts and software from this effort are intended for transition to future United States Navy surface combatants. The MMWS features multiple flat panel displays and several modes of user interaction, including voice input and output, natural language recognition, 3D audio, stylus and gestural inputs. In 1999, an extensive literature review was conducted on basic and applied research concerned with alerting and warning systems. After summarizing that literature, a human computer interaction (HCI) designer's guide was prepared to support the design of an attention allocation subsystem (AAS) for the MMWS. The resultant HCI guidelines are being applied in the design of a fully interactive AAS prototype. An overview of key findings from the literature review, a proposed design methodology with illustrative examples, and an assessment of progress made in implementing the HCI designers guide are presented.

  20. Spotsizer: High-throughput quantitative analysis of microbial growth.

    PubMed

    Bischof, Leanne; Převorovský, Martin; Rallis, Charalampos; Jeffares, Daniel C; Arzhaeva, Yulia; Bähler, Jürg

    2016-10-01

    Microbial colony growth can serve as a useful readout in assays for studying complex genetic interactions or the effects of chemical compounds. Although computational tools for acquiring quantitative measurements of microbial colonies have been developed, their utility can be compromised by inflexible input image requirements, non-trivial installation procedures, or complicated operation. Here, we present the Spotsizer software tool for automated colony size measurements in images of robotically arrayed microbial colonies. Spotsizer features a convenient graphical user interface (GUI), has both single-image and batch-processing capabilities, and works with multiple input image formats and different colony grid types. We demonstrate how Spotsizer can be used for high-throughput quantitative analysis of fission yeast growth. The user-friendly Spotsizer tool provides rapid, accurate, and robust quantitative analyses of microbial growth in a high-throughput format. Spotsizer is freely available at https://data.csiro.au/dap/landingpage?pid=csiro:15330 under a proprietary CSIRO license.

  1. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323

  2. Input Decimated Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore input decimation (ID), a method which selects feature subsets for their ability to discriminate among the classes and uses them to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Probenl/UCI repositories, and two synthetic data sets. The results indicate that input decimated ensembles (IDEs) outperform ensembles whose base classifiers use all the input features; randomly selected subsets of features; and features created using principal components analysis, on a wide range of domains.
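
    A rough sketch of the idea (using scikit-learn and a stock digits dataset as stand-ins; the subset size, base learner, and combination rule are arbitrary choices, not the configuration evaluated in the article): each base classifier is trained on the features that best discriminate one class from the rest, and the decorrelated members are combined by averaging their class probabilities.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.feature_selection import f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = load_digits(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # One base classifier per class, each trained on the features that best
        # separate that class from the rest (one-vs-rest F-scores).
        n_keep = 20
        members = []
        for c in np.unique(y):
            scores, _ = f_classif(X_tr, (y_tr == c).astype(int))
            feats = np.argsort(np.nan_to_num(scores))[-n_keep:]
            clf = LogisticRegression(max_iter=2000).fit(X_tr[:, feats], y_tr)
            members.append((feats, clf))

        # Combine by averaging class-probability outputs across members.
        proba = np.mean([clf.predict_proba(X_te[:, feats]) for feats, clf in members], axis=0)
        print("input-decimated ensemble accuracy:", np.mean(proba.argmax(axis=1) == y_te))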

  3. Self-organizing feature maps for dynamic control of radio resources in CDMA microcellular networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1998-03-01

    The application of artificial neural networks to the channel assignment problem for code-division multiple access (CDMA) cellular networks has previously been investigated. CDMA takes advantage of voice activity and spatial isolation because its capacity is only interference limited, unlike time-division multiple access (TDMA) and frequency-division multiple access (FDMA) where capacities are bandwidth-limited. Any reduction in interference in CDMA translates linearly into increased capacity. To satisfy the high demands for new services and improved connectivity for mobile communications, microcellular and picocellular systems are being introduced. For these systems, there is a need to develop robust and efficient management procedures for the allocation of power and spectrum to maximize radio capacity. Topology-conserving mappings play an important role in the biological processing of sensory inputs. The same principles underlying Kohonen's self-organizing feature maps (SOFMs) are applied to the adaptive control of radio resources to minimize interference and, hence, maximize capacity in direct-sequence (DS) CDMA networks. The approach based on SOFMs is applied to some published examples of both theoretical and empirical models of DS/CDMA microcellular networks in metropolitan areas. The results of the approach for these examples are informally compared to the performance of algorithms, based on Hopfield-Tank neural networks and on genetic algorithms, for the channel assignment problem.
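
    The core SOFM update referred to above can be written in a few lines of NumPy (a generic Kohonen map on synthetic two-dimensional inputs; the grid size, decay schedules, and the meaning of the inputs are placeholders, not the radio-resource formulation used in the paper):

        import numpy as np

        rng = np.random.default_rng(1)

        # A 10x10 grid of units learns a topology-preserving map of 2-D inputs.
        grid = np.stack(np.meshgrid(np.arange(10), np.arange(10)), axis=-1).reshape(-1, 2)
        weights = rng.uniform(0, 1, size=(100, 2))
        data = rng.uniform(0, 1, size=(5000, 2))

        for t, x in enumerate(data):
            lr = 0.5 * np.exp(-t / 2000)                        # decaying learning rate
            sigma = 3.0 * np.exp(-t / 2000)                     # shrinking neighbourhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
            dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)       # grid distance to the BMU
            h = np.exp(-dist2 / (2 * sigma ** 2))               # neighbourhood function
            weights += lr * h[:, None] * (x - weights)          # pull neighbours toward x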

  4. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation

    NASA Astrophysics Data System (ADS)

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-04-01

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator having multiple-shape memory effect, and is able to perform complex motion by two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability only could be realized with existing actuator technologies by using multiple actuators or another robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator having multiple degrees-of-freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would be otherwise difficult to obtain with a single actuator. To the best of the authors’ knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability.

  5. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation

    PubMed Central

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-01-01

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator having multiple-shape memory effect, and is able to perform complex motion by two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability only could be realized with existing actuator technologies by using multiple actuators or another robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator having multiple degrees-of-freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would be otherwise difficult to obtain with a single actuator. To the best of the authors’ knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability. PMID:27080134

  6. A multiple-shape memory polymer-metal composite actuator capable of programmable control, creating complex 3D motion of bending, twisting, and oscillation.

    PubMed

    Shen, Qi; Trabia, Sarah; Stalbaum, Tyler; Palmre, Viljar; Kim, Kwang; Oh, Il-Kwon

    2016-04-15

    Development of biomimetic actuators has been an essential motivation in the study of smart materials. However, few materials are capable of controlling complex twisting and bending deformations simultaneously or separately using a dynamic control system. Here, we report an ionic polymer-metal composite actuator having multiple-shape memory effect, and is able to perform complex motion by two external inputs, electrical and thermal. Prior to the development of this type of actuator, this capability only could be realized with existing actuator technologies by using multiple actuators or another robotic system. This paper introduces a soft multiple-shape-memory polymer-metal composite (MSMPMC) actuator having multiple degrees-of-freedom that demonstrates high maneuverability when controlled by two external inputs, electrical and thermal. These multiple inputs allow for complex motions that are routine in nature, but that would be otherwise difficult to obtain with a single actuator. To the best of the authors' knowledge, this MSMPMC actuator is the first solitary actuator capable of multiple-input control and the resulting deformability and maneuverability.

  7. Microwave Frequency Multiplier

    NASA Astrophysics Data System (ADS)

    Velazco, J. E.

    2017-02-01

    High-power microwave radiation is used in the Deep Space Network (DSN) and Goldstone Solar System Radar (GSSR) for uplink communications with spacecraft and for monitoring asteroids and space debris, respectively. Intense X-band (7.1 to 8.6 GHz) microwave signals are produced for these applications via klystron and traveling-wave microwave vacuum tubes. In order to achieve higher data rate communications with spacecraft, the DSN is planning to gradually furnish several of its deep space stations with uplink systems that employ Ka-band (34-GHz) radiation. Also, the next generation of planetary radar, such as Ka-Band Objects Observation and Monitoring (KaBOOM), is considering frequencies in the Ka-band range (34 to 36 GHz) in order to achieve higher target resolution. Current commercial Ka-band sources are limited to power levels that range from hundreds of watts up to a kilowatt and, at the high-power end, tend to suffer from poor reliability. In either case, there is a clear need for stable Ka-band sources that can produce kilowatts of power with high reliability. In this article, we present a new concept for high-power, high-frequency generation (including Ka-band) that we refer to as the microwave frequency multiplier (MFM). The MFM is a two-cavity vacuum tube concept where low-frequency (2 to 8 GHz) power is fed into the input cavity to modulate and accelerate an electron beam. In the second cavity, the modulated electron beam excites and amplifies high-power microwaves at a frequency that is an integer multiple of the input cavity's frequency. Frequency multiplication factors in the 4 to 10 range are being considered for the current application, although higher multiplication factors are feasible. This novel beam-wave interaction allows the MFM to produce high-power, high-frequency radiation with high efficiency. A key feature of the MFM is that it uses significantly larger cavities than its klystron counterparts, thus greatly reducing power density and arcing concerns. We present a theoretical analysis for the beam-wave interactions in the MFM's input and output cavities. We show the conditions required for successful frequency multiplication inside the output cavity. Computer simulations using the plasma physics code MAGIC show that 100 kW of Ka-band (32-GHz) output power can be produced using an 80-kW X-band (8-GHz) signal at the MFM's input. The associated MFM efficiency - from beam power to Ka-band power - is 83 percent. Thus, the overall klystron-MFM efficiency is 42 percent - assuming that a klystron with an efficiency of 50 percent delivers the input signal.
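
    The quoted efficiencies compose multiplicatively, which is where the overall figure comes from (simply restating the numbers given above):

        \eta_{overall} = \eta_{klystron} \times \eta_{MFM} = 0.50 \times 0.83 \approx 0.42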

  8. The extraction of simple relationships in growth factor-specific multiple-input and multiple-output systems in cell-fate decisions by backward elimination PLS regression.

    PubMed

    Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya

    2013-01-01

    Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
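
    A compact sketch of the backward-elimination idea on top of PLS regression (synthetic data and scikit-learn's PLSRegression stand in for the measured signaling inputs and IEG/phenotype outputs; the component count and stopping size are arbitrary):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Stand-ins: 20 candidate input variables, 8 output variables.
        X = rng.normal(size=(120, 20))
        Y = X[:, :4] @ rng.normal(size=(4, 8)) + 0.1 * rng.normal(size=(120, 8))

        def cv_r2(cols):
            # Cross-validated multi-output R^2 of a small PLS model.
            pls = PLSRegression(n_components=3)
            return cross_val_score(pls, X[:, cols], Y, cv=5, scoring="r2").mean()

        # Backward elimination: repeatedly drop the input whose removal hurts
        # cross-validated prediction of the outputs the least.
        kept = list(range(X.shape[1]))
        while len(kept) > 5:
            scores = [cv_r2([k for k in kept if k != j]) for j in kept]
            kept.remove(kept[int(np.argmax(scores))])
        print("retained input variables:", kept)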

  9. Surprise! Infants consider possible bases of generalization for a single input example.

    PubMed

    Gerken, LouAnn; Dawson, Colin; Chatila, Razanne; Tenenbaum, Josh

    2015-01-01

    Infants have been shown to generalize from a small number of input examples. However, existing studies allow two possible means of generalization. One is via a process of noting similarities shared by several examples. Alternatively, generalization may reflect an implicit desire to explain the input. The latter view suggests that generalization might occur when even a single input example is surprising, given the learner's current model of the domain. To test the possibility that infants are able to generalize based on a single example, we familiarized 9-month-olds with a single three-syllable input example that contained either one surprising feature (syllable repetition, Experiment 1) or two features (repetition and a rare syllable, Experiment 2). In both experiments, infants generalized only to new strings that maintained all of the surprising features from familiarization. This research suggests that surprise can promote very rapid generalization. © 2014 John Wiley & Sons Ltd.

  10. Systems and methods for predicting materials properties

    DOEpatents

    Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano

    2007-11-06

    Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.

  11. Land-use planning for nearshore ecosystem services—the Puget Sound Ecosystem Portfolio Model

    USGS Publications Warehouse

    Byrd, Kristin

    2011-01-01

    The 2,500 miles of shoreline and nearshore areas of Puget Sound, Washington, provide multiple benefits to people—"ecosystem services"—including important fishing, shellfishing, and recreation industries. To help resource managers plan for expected growth in coming decades, the U.S. Geological Survey Western Geographic Science Center has developed the Puget Sound Ecosystem Portfolio Model (PSEPM). Scenarios of urban growth and shoreline modifications serve as model inputs to develop alternative futures of important nearshore features such as water quality and beach habitats. Model results will support regional long-term planning decisions for the Puget Sound region.

  12. Timber sale planning and analysis system: A user's guide to the TSPAS sale program. Forest Service general technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuster, E.G.; Jones, J.G.; Meacham, M.L.

    1995-08-01

    Presents a guide to operation and interpretation of TSPAS Sale Program (TSPAS SP), a menu-driven computer program that is one of two programs in the Timber Sale Planning and Analysis System. TSPAS SP is intended to help field teams design and evaluate timber sale alternatives. TSPAS SP evaluates current and long-term timber implications along with associated nontimber outputs. Features include multiple entries and products, real value change, and graphical input. Guide includes user instructions, a glossary, a listing of data needs, and an explanation of error messages.

  13. Innovations: clinical computing: an audio computer-assisted self-interviewing system for research and screening in public mental health settings.

    PubMed

    Bertollo, David N; Alexander, Mary Jane; Shinn, Marybeth; Aybar, Jalila B

    2007-06-01

    This column describes the nonproprietary software Talker, used to adapt screening instruments to audio computer-assisted self-interviewing (ACASI) systems for low-literacy populations and other populations. Talker supports ease of programming, multiple languages, on-site scoring, and the ability to update a central research database. Key features include highly readable text display, audio presentation of questions and audio prompting of answers, and optional touch screen input. The scripting language for adapting instruments is briefly described as well as two studies in which respondents provided positive feedback on its use.

  14. MultiDK: A Multiple Descriptor Multiple Kernel Approach for Molecular Discovery and Its Application to Organic Flow Battery Electrolytes.

    PubMed

    Kim, Sungjin; Jinich, Adrián; Aspuru-Guzik, Alán

    2017-04-24

    We propose a multiple descriptor multiple kernel (MultiDK) method for efficient molecular discovery using machine learning. We show that the MultiDK method improves both the speed and accuracy of molecular property prediction. We apply the method to the discovery of electrolyte molecules for aqueous redox flow batteries. Using multiple-type (as opposed to single-type) descriptors, we obtain more relevant features for machine learning. Following the principle of "wisdom of the crowds", the combination of multiple-type descriptors significantly boosts prediction performance. Moreover, by employing multiple kernels (more than one kernel function for a set of the input descriptors), MultiDK exploits nonlinear relations between molecular structure and properties better than a linear regression approach. The multiple kernels consist of a Tanimoto similarity kernel and a linear kernel for a set of binary descriptors and a set of nonbinary descriptors, respectively. Using MultiDK, we achieve an average performance of r^2 = 0.92 with a test set of molecules for solubility prediction. We also extend MultiDK to predict pH-dependent solubility and apply it to a set of quinone molecules with different ionizable functional groups to assess their performance as flow battery electrolytes.
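
    The combined-kernel idea can be sketched as follows (random fingerprints and descriptors replace real molecular data, kernel ridge regression replaces the regression used in the paper, and the equal kernel weighting is an assumption):

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        def tanimoto_kernel(A, B):
            # Tanimoto similarity between rows of two binary fingerprint matrices.
            inter = A @ B.T
            norm = A.sum(axis=1)[:, None] + B.sum(axis=1)[None, :] - inter
            return inter / np.maximum(norm, 1)

        rng = np.random.default_rng(0)
        fp = rng.integers(0, 2, size=(200, 128))    # binary descriptors (fingerprints)
        desc = rng.normal(size=(200, 10))           # non-binary descriptors
        y = fp[:, :3].sum(axis=1) + desc[:, 0] + 0.1 * rng.normal(size=200)

        # MultiDK-style combined kernel: Tanimoto on the binary descriptors plus
        # a linear kernel on the remaining descriptors.
        K = tanimoto_kernel(fp, fp) + desc @ desc.T
        model = KernelRidge(alpha=1.0, kernel="precomputed").fit(K, y)
        print("training R^2:", model.score(K, y))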

  15. The emergence of polychronization and feature binding in a spiking neural network model of the primate ventral visual system.

    PubMed

    Eguchi, Akihiro; Isbister, James B; Ahmad, Nasir; Stringer, Simon

    2018-07-01

    We present a hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle." (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. Constraint programming based biomarker optimization.

    PubMed

    Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng

    2015-01-01

    Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data is proven to be one of the successful solutions. Most of the existing feature selection algorithms do not allow interactive inputs from users in the optimizing process of feature selection. This study addresses this question by fixing a few user-input features in the final selected feature subset and formulating these user-input features as constraints for a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to, or much better than, the existing feature selection algorithms, even with the constraints from both literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
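
    The constraint idea, fixing user-chosen features and optimizing the rest, can be imitated with a simple greedy search (a stand-in for illustration, not the constrained-programming formulation of fsCoP; the dataset and the fixed indices are arbitrary):

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = load_breast_cancer(return_X_y=True)

        def cv_acc(features):
            clf = LogisticRegression(max_iter=5000)
            return cross_val_score(clf, X[:, features], y, cv=5).mean()

        # User-supplied constraint: these features must appear in the final
        # subset (indices are arbitrary here; in fsCoP they would come from the
        # user or the literature).
        fixed = [0, 7]

        # Greedy forward selection on top of the fixed features.
        selected = list(fixed)
        while len(selected) < 6:
            candidates = [j for j in range(X.shape[1]) if j not in selected]
            gains = [cv_acc(selected + [j]) for j in candidates]
            selected.append(candidates[int(np.argmax(gains))])
        print("selected subset (fixed + learned):", selected,
              "CV accuracy:", round(cv_acc(selected), 3))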

  17. Identification of input variables for feature based artificial neural networks-saccade detection in EOG recordings.

    PubMed

    Tigges, P; Kathmann, N; Engel, R R

    1997-07-01

    Though artificial neural networks (ANN) are excellent tools for pattern recognition problems when signal to noise ratio is low, the identification of decision relevant features for ANN input data is still a crucial issue. The experience of the ANN designer and the existing knowledge and understanding of the problem seem to be the only links for a specific construction. In the present study a backpropagation ANN based on modified raw data inputs showed encouraging results. Investigating the specific influences of prototypical input patterns on a specially designed ANN led to a new sparse and efficient input data presentation. This data coding obtained by a semiautomatic procedure combining existing expert knowledge and the internal representation structures of the raw data based ANN yielded a list of feature vectors, each representing the relevant information for saccade identification. The feature based ANN produced a reduction of the error rate of nearly 40% compared with the raw data ANN. An overall correct classification of 92% of so far unknown data was realized. The proposed method of extracting internal ANN knowledge for the production of a better input data representation is not restricted to EOG recordings, and could be used in various fields of signal analysis.

  18. Multitask saliency detection model for synthetic aperture radar (SAR) image and its application in SAR and optical image fusion

    NASA Astrophysics Data System (ADS)

    Liu, Chunhui; Zhang, Duona; Zhao, Xintao

    2018-03-01

    Saliency detection in synthetic aperture radar (SAR) images is a difficult problem. This paper proposed a multitask saliency detection (MSD) model for the saliency detection task of SAR images. We extract four features of the SAR image, which include the intensity, orientation, uniqueness, and global contrast, as the input of the MSD model. The saliency map is generated by the multitask sparsity pursuit, which integrates the multiple features collaboratively. Detection of different scale features is also taken into consideration. Subjective and objective evaluation of the MSD model verifies its effectiveness. Based on the saliency maps obtained by the MSD model, we apply the saliency map of the SAR image to the SAR and color optical image fusion. The experimental results of real data show that the saliency map obtained by the MSD model helps to improve the fusion effect, and the salient areas in the SAR image can be highlighted in the fusion results.

  19. Prevalence and multiplicity of cutaneous beta papilloma viruses in plucked hairs depend on cellular DNA input.

    PubMed

    Weissenborn, S J; Neale, R; de Koning, M N C; Waterboer, T; Abeni, D; Bouwes Bavinck, J N; Wieland, U; Pfister, H J

    2009-11-01

    In view of the low loads of beta human papillomaviruses in skin samples, amounts of cellular DNA used in qualitative PCR may become limiting for virus detection and introduce variations in prevalence and multiplicity. This issue was explored within the context of a multicentre study and increasing prevalence and multiplicity was found with increasing input amounts of cellular DNA extracted from hair bulbs. To improve the quality and comparability between different epidemiologic studies ideally equal amounts of cellular DNA should be employed. When cellular DNA input varies this should be clearly taken into account in assessing viral prevalence and multiplicity.

  20. Analysis and Simulation of Disadvantaged Receivers for Multiple-Input Multiple-Output Communications Systems

    DTIC Science & Technology

    2010-09-01

    Analysis and Simulation of Disadvantaged Receivers for Multiple-Input Multiple-Output Communications Systems, by Tracy A. Martin, Master's Thesis, September 2010. ... Channel State Information at the Transmitter (CSIT). A disadvantaged receiver is subsequently introduced to the system lacking the optimization enjoyed

  1. Automated Glacier Surface Velocity using Multi-Image/Multi-Chip (MIMC) Feature Tracking

    NASA Astrophysics Data System (ADS)

    Ahn, Y.; Howat, I. M.

    2009-12-01

    Remote sensing from space has enabled effective monitoring of remote and inhospitable polar regions. Glacier velocity, and its variation in time, is one of the most important parameters needed to understand glacier dynamics, glacier mass balance and contribution to sea level rise. Regular measurements of ice velocity are possible from large and accessible satellite data set archives, such as ASTER and LANDSAT-7. Among satellite imagery, optical imagery (i.e. passive, visible to near-infrared band sensors) provides abundant data with optimal spatial resolution and repeat interval for tracking glacier motion at high temporal resolution. Due to the massive amounts of data, computation of ice velocity from feature tracking requires (1) a user-friendly interface, (2) minimal local/user parameter inputs, and (3) results that need minimal editing. We focus on robust feature tracking applicable to all currently available optical satellite imagery, such as ASTER, SPOT, and LANDSAT. We introduce the MIMC (multiple images/multiple chip sizes) matching approach that does not involve any user-defined local/empirical parameters except approximate average glacier speed. We also introduce a method for extracting velocity from LANDSAT-7 SLC-off data, which has 22 percent of scene data missing in slanted strips due to failure of the scan line corrector. We apply our approach to major outlet glaciers in west/east Greenland and assess our MIMC feature tracking technique by comparison with conventional correlation matching and other methods (e.g. InSAR).

  2. An Extended Normalization Model of Attention Accounts for Feature-Based Attentional Enhancement of Both Response and Coherence Gain

    PubMed Central

    Krishna, B. Suresh; Treue, Stefan

    2016-01-01

    Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively. PMID:27977679

  3. Contribution of intrinsic properties and synaptic inputs to motoneuron discharge patterns: a simulation study

    PubMed Central

    ElBasiouny, Sherif M.; Rymer, W. Zev; Heckman, C. J.

    2012-01-01

    Motoneuron discharge patterns reflect the interaction of synaptic inputs with intrinsic conductances. Recent work has focused on the contribution of conductances mediating persistent inward currents (PICs), which amplify and prolong the effects of synaptic inputs on motoneuron discharge. Certain features of human motor unit discharge are thought to reflect a relatively stereotyped activation of PICs by excitatory synaptic inputs; these features include rate saturation and de-recruitment at a lower level of net excitation than that required for recruitment. However, PIC activation is also influenced by the pattern and spatial distribution of inhibitory inputs that are activated concurrently with excitatory inputs. To estimate the potential contributions of PIC activation and synaptic input patterns to motor unit discharge patterns, we examined the responses of a set of cable motoneuron models to different patterns of excitatory and inhibitory inputs. The models were first tuned to approximate the current- and voltage-clamp responses of low- and medium-threshold spinal motoneurons studied in decerebrate cats and then driven with different patterns of excitatory and inhibitory inputs. The responses of the models to excitatory inputs reproduced a number of features of human motor unit discharge. However, the pattern of rate modulation was strongly influenced by the temporal and spatial pattern of concurrent inhibitory inputs. Thus, even though PIC activation is likely to exert a strong influence on firing rate modulation, PIC activation in combination with different patterns of excitatory and inhibitory synaptic inputs can produce a wide variety of motor unit discharge patterns. PMID:22031773

  4. Computational phenotype discovery using unsupervised feature learning over noisy, sparse, and irregular clinical data.

    PubMed

    Lasko, Thomas A; Denny, Joshua C; Levy, Mia A

    2013-01-01

    Inferring precise phenotypic patterns from population-scale clinical data is a core computational task in the development of precision, personalized medicine. The traditional approach uses supervised learning, in which an expert designates which patterns to look for (by specifying the learning task and the class labels), and where to look for them (by specifying the input variables). While appropriate for individual tasks, this approach scales poorly and misses the patterns that we don't think to look for. Unsupervised feature learning overcomes these limitations by identifying patterns (or features) that collectively form a compact and expressive representation of the source data, with no need for expert input or labeled examples. Its rising popularity is driven by new deep learning methods, which have produced high-profile successes on difficult standardized problems of object recognition in images. Here we introduce its use for phenotype discovery in clinical data. This use is challenging because the largest source of clinical data - Electronic Medical Records - typically contains noisy, sparse, and irregularly timed observations, rendering them poor substrates for deep learning methods. Our approach couples dirty clinical data to deep learning architecture via longitudinal probability densities inferred using Gaussian process regression. From episodic, longitudinal sequences of serum uric acid measurements in 4368 individuals we produced continuous phenotypic features that suggest multiple population subtypes, and that accurately distinguished (0.97 AUC) the uric-acid signatures of gout vs. acute leukemia despite not being optimized for the task. The unsupervised features were as accurate as gold-standard features engineered by an expert with complete knowledge of the domain, the classification task, and the class labels. Our findings demonstrate the potential for achieving computational phenotype discovery at population scale. We expect such data-driven phenotypes to expose unknown disease variants and subtypes and to provide rich targets for genetic association studies.
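
    A small sketch of the coupling step described above (synthetic lab values, scikit-learn's Gaussian process regressor, and arbitrary kernel settings; the downstream unsupervised feature learner is only indicated in a comment):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)

        # One patient's irregularly timed lab values (day, value) -- synthetic here.
        t_obs = np.sort(rng.uniform(0, 365, size=12))[:, None]
        y_obs = 5 + 0.5 * np.sin(t_obs[:, 0] / 60) + 0.2 * rng.normal(size=12)

        # Gaussian process regression turns the sparse, irregular series into a
        # continuous longitudinal estimate on a regular grid.
        gp = GaussianProcessRegressor(kernel=RBF(30.0) + WhiteKernel(0.05),
                                      normalize_y=True).fit(t_obs, y_obs)
        t_grid = np.linspace(0, 365, 128)[:, None]
        mean, std = gp.predict(t_grid, return_std=True)
        # (mean, std) on the grid would then be fed to an autoencoder or other
        # unsupervised feature learner, per the approach described above.
        print(mean.shape, std.shape)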

  5. Computational Phenotype Discovery Using Unsupervised Feature Learning over Noisy, Sparse, and Irregular Clinical Data

    PubMed Central

    Lasko, Thomas A.; Denny, Joshua C.; Levy, Mia A.

    2013-01-01

    Inferring precise phenotypic patterns from population-scale clinical data is a core computational task in the development of precision, personalized medicine. The traditional approach uses supervised learning, in which an expert designates which patterns to look for (by specifying the learning task and the class labels), and where to look for them (by specifying the input variables). While appropriate for individual tasks, this approach scales poorly and misses the patterns that we don’t think to look for. Unsupervised feature learning overcomes these limitations by identifying patterns (or features) that collectively form a compact and expressive representation of the source data, with no need for expert input or labeled examples. Its rising popularity is driven by new deep learning methods, which have produced high-profile successes on difficult standardized problems of object recognition in images. Here we introduce its use for phenotype discovery in clinical data. This use is challenging because the largest source of clinical data – Electronic Medical Records – typically contains noisy, sparse, and irregularly timed observations, rendering them poor substrates for deep learning methods. Our approach couples dirty clinical data to deep learning architecture via longitudinal probability densities inferred using Gaussian process regression. From episodic, longitudinal sequences of serum uric acid measurements in 4368 individuals we produced continuous phenotypic features that suggest multiple population subtypes, and that accurately distinguished (0.97 AUC) the uric-acid signatures of gout vs. acute leukemia despite not being optimized for the task. The unsupervised features were as accurate as gold-standard features engineered by an expert with complete knowledge of the domain, the classification task, and the class labels. Our findings demonstrate the potential for achieving computational phenotype discovery at population scale. We expect such data-driven phenotypes to expose unknown disease variants and subtypes and to provide rich targets for genetic association studies. PMID:23826094

  6. The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex.

    PubMed

    Weiner, Kevin S; Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A; Grill-Spector, Kalanit; Rossion, Bruno

    2016-08-10

    Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. Copyright © 2016 the authors 0270-6474/16/368426-16$15.00/0.

  7. High- and low-risk profiles for the development of multiple sclerosis within 10 years after optic neuritis: experience of the optic neuritis treatment trial.

    PubMed

    Beck, Roy W; Trobe, Jonathan D; Moke, Pamela S; Gal, Robin L; Xing, Dongyuan; Bhatti, M Tariq; Brodsky, Michael C; Buckley, Edward G; Chrousos, Georgia A; Corbett, James; Eggenberger, Eric; Goodwin, James A; Katz, Barrett; Kaufman, David I; Keltner, John L; Kupersmith, Mark J; Miller, Neil R; Nazarian, Sarkis; Orengo-Nania, Silvia; Savino, Peter J; Shults, William T; Smith, Craig H; Wall, Michael

    2003-07-01

    To identify factors associated with a high and low risk of developing multiple sclerosis after an initial episode of optic neuritis. Three hundred eighty-eight patients who experienced acute optic neuritis between July 1, 1988, and June 30, 1991, were followed up prospectively for the development of multiple sclerosis. Consenting patients were reassessed after 10 to 13 years. The 10-year risk of multiple sclerosis was 38% (95% confidence interval, 33%-43%). Patients (160) who had 1 or more typical lesions on the baseline magnetic resonance imaging (MRI) scan of the brain had a 56% risk; those with no lesions (191) had a 22% risk (P<.001, log rank test). Among the patients who had no lesions on MRI, male gender and optic disc swelling were associated with a lower risk of multiple sclerosis, as was the presence of the following atypical features for optic neuritis: no light perception vision; absence of pain; and ophthalmoscopic findings of severe optic disc edema, peripapillary hemorrhages, or retinal exudates. The 10-year risk of multiple sclerosis following an initial episode of acute optic neuritis is significantly higher if there is a single brain MRI lesion; higher numbers of lesions do not appreciably increase that risk. However, even when brain lesions are seen on MRI, more than 40% of the patients will not develop clinical multiple sclerosis after 10 years. In the absence of MRI lesions, certain demographic and clinical features seem to predict a very low likelihood of developing multiple sclerosis. This natural history information is a critical input for estimating a patient's 10-year multiple sclerosis risk and for weighing the benefit of initiating prophylactic treatment at the time of optic neuritis or other initial demyelinating events in the central nervous system.

  8. Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation.

    PubMed

    Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S

    2017-10-01

    The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
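
    The first ensemble scheme, one base classifier per feature group combined by majority voting, can be sketched as follows (a stock dataset, contiguous feature blocks, and k-nearest-neighbour base learners stand in for the dermoscopic features and the optimum-path forest classifier used in the article):

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Stand-ins for shape / colour / texture groups: three blocks of features.
        groups = [slice(0, 10), slice(10, 20), slice(20, 30)]

        # One base classifier per feature group.
        votes = []
        for g in groups:
            clf = KNeighborsClassifier().fit(X_tr[:, g], y_tr)
            votes.append(clf.predict(X_te[:, g]))

        # Majority voting across the group-specific classifiers (binary labels).
        majority = (np.vstack(votes).mean(axis=0) > 0.5).astype(int)
        print("ensemble accuracy:", np.mean(majority == y_te))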

  9. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data and only output a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two Auto Calibration prototypes and are currently designing a more feature rich tool. Our prototypes have focused on running the calibration in a distributed computing cross platform environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.

  10. Rational construction of gel-based supramolecular logic gates by using a functional gelator with multiple-stimuli responsive properties.

    PubMed

    Fan, Kaiqi; Yang, Jun; Wang, Xiaobo; Song, Jian

    2014-11-07

    A gelator containing a sorbitol moiety and a naphthalene-based salicylideneaniline group exhibits macroscopic gel-sol behavior in response to four complementary input stimuli: temperature, UV light, OH(-), and Cu(2+). On the basis of its multiple-stimuli responsive properties, we constructed a rational gel-based supramolecular logic gate that performed OR and INH types of reversible stimulus responsive gel-sol transition in the presence of various combinations of the four stimuli when the gel state was defined as an output. Moreover, a combination two-output logic gate was obtained, owing to the existence of the naked eye as an additional output. Hence, gelator 1 could construct not only a basic logic gate, but also a two-input-two-output logic gate because of its response to multiple chemical stimuli and multiple output signals, in which one input could erase the effect of another input.

  11. MUFOLD-SS: New deep inception-inside-inception networks for protein secondary structure prediction.

    PubMed

    Fang, Chao; Shang, Yi; Xu, Dong

    2018-05-01

    Protein secondary structure prediction can provide important information for protein 3D structure prediction and protein functions. Deep learning offers a new opportunity to significantly improve prediction accuracy. In this article, a new deep neural network architecture, named the Deep inception-inside-inception (Deep3I) network, is proposed for protein secondary structure prediction and implemented as a software tool MUFOLD-SS. The input to MUFOLD-SS is a carefully designed feature matrix corresponding to the primary amino acid sequence of a protein, which consists of a rich set of information derived from individual amino acid, as well as the context of the protein sequence. Specifically, the feature matrix is a composition of physio-chemical properties of amino acids, PSI-BLAST profile, and HHBlits profile. MUFOLD-SS is composed of a sequence of nested inception modules and maps the input matrix to either eight states or three states of secondary structures. The architecture of MUFOLD-SS enables effective processing of local and global interactions between amino acids in making accurate prediction. In extensive experiments on multiple datasets, MUFOLD-SS outperformed the best existing methods and other deep neural networks significantly. MUFold-SS can be downloaded from http://dslsrv8.cs.missouri.edu/~cf797/MUFoldSS/download.html. © 2018 Wiley Periodicals, Inc.
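
    A rough PyTorch sketch of a nested ("inception-inside-inception") 1-D block (channel counts, branch kernel sizes, and the window length are guesses for illustration and do not reproduce the actual MUFOLD-SS architecture):

        import torch
        import torch.nn as nn

        class MiniInception1d(nn.Module):
            # Parallel 1-D convolutions over the residue axis, concatenated.
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.branches = nn.ModuleList([
                    nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)])

            def forward(self, x):
                return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

        class InceptionInsideInception1d(nn.Module):
            # Nested inception block in the spirit of the Deep3I design.
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.inner1 = MiniInception1d(in_ch, out_ch)
                self.inner2 = MiniInception1d(3 * out_ch, out_ch)

            def forward(self, x):
                return self.inner2(self.inner1(x))

        # Feature matrix: batch of protein windows; 57 channels per residue is a
        # guess at the size of physio-chemical + PSI-BLAST + HHblits features.
        x = torch.randn(2, 57, 700)                   # (batch, channels, length)
        block = InceptionInsideInception1d(57, 16)
        head = nn.Conv1d(3 * 16, 8, kernel_size=1)    # per-residue 8-state output
        print(head(block(x)).shape)                   # -> torch.Size([2, 8, 700])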

  12. Deep PDF parsing to extract features for detecting embedded malware.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munson, Miles Arthur; Cross, Jesse S.

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout the rest of this report. The features are extracted using an instrumented PDF viewer, and are the inputs to a prediction model that scores the likelihood of a PDF file containing malware. The prediction model is constructed from a sample of labeled data by a machine learning algorithm (specifically, decision tree ensemble learning). Preliminary experiments show that the model is able to detect half of the PDF malware in the corpus with zero false alarms. We conclude the report with suggestions for extending this work to detect a greater variety of PDF malware.

  13. Feature space trajectory for distorted-object classification and pose estimation in synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Shenoy, Rajesh

    1997-10-01

    Classification and pose estimation of distorted input objects are considered. The feature space trajectory representation of distorted views of an object is used with a new eigenfeature space. For a distorted input object, the closest trajectory denotes the class of the input and the closest line segment on it denotes its pose. If an input point is too far from a trajectory, it is rejected as clutter. New methods are presented for selecting Fukunaga-Koontz discriminant vectors and the number of dominant eigenvectors per class, and for determining training and test set compatibility.
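
    A small numpy sketch of the feature-space-trajectory idea: each class is a polyline through feature space (one vertex per training pose), a query is assigned to the class whose trajectory is closest, the pose is read off the nearest segment, and a distance threshold gives the clutter rejection mentioned above. The vectors and the threshold are synthetic.

```python
# Nearest-trajectory classification with pose interpolation and clutter rejection.
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to segment ab, plus the interpolation parameter t."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab)), t

def classify(p, trajectories, reject_dist=2.0):
    best = (None, None, np.inf)                     # (class, pose estimate, distance)
    for label, verts in trajectories.items():
        for i in range(len(verts) - 1):
            d, t = point_to_segment(p, verts[i], verts[i + 1])
            if d < best[2]:
                best = (label, i + t, d)            # fractional pose index
    return best if best[2] <= reject_dist else ("clutter", None, best[2])

trajectories = {
    "tank":  np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.2]]),
    "truck": np.array([[0.0, 3.0], [1.0, 3.5], [2.0, 3.1]]),
}
print(classify(np.array([1.4, 0.4]), trajectories))   # close to the "tank" trajectory
print(classify(np.array([5.0, -4.0]), trajectories))  # far from both -> clutter
```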

  14. Robotics control using isolated word recognition of voice input

    NASA Technical Reports Server (NTRS)

    Weiner, J. M.

    1977-01-01

    A speech input/output system is presented that can be used to communicate with a task-oriented system. Human speech commands and synthesized voice output extend conventional information exchange capabilities between man and machine by utilizing audio input and output channels. The speech input facility is comprised of a hardware feature extractor and a microprocessor-implemented isolated word or phrase recognition system. The recognizer offers a medium-sized (100 commands), syntactically constrained vocabulary, and exhibits close to real-time performance. The major portion of the recognition processing required is accomplished through software, minimizing the complexity of the hardware feature extractor.

  15. T-ray relevant frequencies for osteosarcoma classification

    NASA Astrophysics Data System (ADS)

    Withayachumnankul, W.; Ferguson, B.; Rainsford, T.; Findlay, D.; Mickan, S. P.; Abbott, D.

    2006-01-01

    We investigate the classification of the T-ray response of normal human bone cells and human osteosarcoma cells, grown in culture. Given the magnitude and phase responses within a reliable spectral range as features for input vectors, a trained support vector machine can correctly classify the two cell types to some extent. Performance of the support vector machine is degraded by the curse of dimensionality, resulting from the comparatively large number of features in the input vectors. Feature subset selection methods are used to select only an optimal number of relevant features for inputs. As a result, an improvement in generalization performance is attainable, and the selected frequencies can be used to further describe the different mechanisms by which the cells respond to T-rays. We demonstrate a consistent classification accuracy of 89.6% while only one fifth of the original features are retained in the data set.
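
    A scikit-learn sketch of the feature-subset-selection step described above: forward selection of a small number of features for an SVM, which mitigates the dimensionality problem. The synthetic data stand in for the T-ray magnitude and phase responses; the selector and parameters are illustrative choices, not the authors' exact method.

```python
# Forward feature selection for an SVM on synthetic high-dimensional spectra.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, d, informative = 60, 100, 5
X = rng.normal(size=(n, d))
y = (X[:, :informative].sum(axis=1) > 0).astype(int)   # only 5 "frequencies" matter

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
selector = SequentialFeatureSelector(svm, n_features_to_select=informative,
                                      direction="forward", cv=3).fit(X, y)
X_sel = selector.transform(X)
print("selected feature indices:", np.flatnonzero(selector.get_support()))
print("CV accuracy, all features:", cross_val_score(svm, X, y, cv=3).mean())
print("CV accuracy, selected:   ", cross_val_score(svm, X_sel, y, cv=3).mean())
```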

  16. MURI: Impact of Oceanographic Variability on Acoustic Communications

    DTIC Science & Technology

    2011-09-01

    multiplexing (OFDM), multiple-input/multiple-output (MIMO) transmissions, and multi-user single-input/multiple-output (SIMO) communications. Lastly... "MIMO-OFDM communications: Receiver design for Doppler-distorted underwater acoustic channels," Proc. Asilomar Conf. on Signals, Systems, and... MIMO) will be of particular interest. Validating experimental data will be obtained during the ONR acoustic communications experiment in summer 2008

  17. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    PubMed

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers on standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate for the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
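
    A numpy sketch of the Generalized Hebbian Algorithm (Sanger's rule), the kind of on-chip learning rule named above for adaptive feature extraction; adaptation of this sort is what lets such circuits absorb part of the device mismatch. Dimensions, learning rate, and data are illustrative.

```python
# Sanger's rule (GHA) learning the leading principal directions of correlated data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8)) @ rng.normal(size=(8, 8))   # correlated 8-D inputs
X -= X.mean(axis=0)

n_out, lr = 3, 1e-3
W = rng.normal(scale=0.1, size=(n_out, X.shape[1]))        # rows converge to principal axes

for _ in range(5):                                          # a few passes over the data
    for x in X:
        y = W @ x
        # Sanger's rule: dW = lr * (y x^T - lower_triangular(y y^T) W)
        W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Compare the learned directions with the leading eigenvectors of the covariance.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
pcs = eigvecs[:, ::-1][:, :n_out].T
cosines = np.abs(np.sum(W / np.linalg.norm(W, axis=1, keepdims=True) * pcs, axis=1))
print("|cosine| between learned rows and true principal components:", cosines)
```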

  18. Predicting the vibroacoustic response of satellite equipment panels.

    PubMed

    Conlon, S C; Hambric, S A

    2003-03-01

    Modern satellites are constructed of large, lightweight equipment panels that are strongly excited by acoustic pressures during launch. During design, performing vibroacoustic analyses to evaluate and ensure the integrity of the complex electronics mounted on the panels is critical. In this study the attached equipment is explicitly addressed and how its properties affect the panel responses is characterized. Finite element analysis (FEA) and boundary element analysis (BEA) methods are used to derive realistic parameters as input to a statistical energy analysis (SEA) hybrid model of a panel with multiple attachments. Specifically, conductance/modal density and radiation efficiency for nonhomogeneous panel structures with and without mass loading are computed. The validity of using the spatially averaged conductance of panels with irregular features for deriving the structure modal density is demonstrated. Maidanik's proposed method of modifying the traditional SEA input power is implemented, illustrating the importance of accounting for system internal couplings when calculating the external input power. The predictions using the SEA hybrid model agree with the measured data trends, and are found to be most sensitive to the assumed dynamic mass ratio (attachments/structure) and the attachment internal loss factor. Additional experimental and analytical investigations are recommended to better characterize dynamic masses, modal densities, and loss factors.

  19. Incorporation of Plasticity and Damage Into an Orthotropic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blackenhorn, Gunther

    2015-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased usage in the aerospace and automotive industries. While there are several composite material models currently available within commercial transient dynamic finite element codes, several features have been identified as being lacking in the currently available material models that could substantially enhance the predictive capability of the impact simulations. A specific desired feature pertains to the incorporation of both plasticity and damage within the material model. Another desired feature relates to using experimentally based tabulated stress-strain input to define the evolution of plasticity and damage as opposed to specifying discrete input properties (such as modulus and strength) and employing analytical functions to track the response of the material. To begin to address these needs, a combined plasticity and damage model suitable for use with both solid and shell elements is being developed for implementation within the commercial code LS-DYNA. The plasticity model is based on extending the Tsai-Wu composite failure model into a strain-hardening based orthotropic plasticity model with a non-associative flow rule. The evolution of the yield surface is determined based on tabulated stress-strain curves in the various normal and shear directions and is tracked using the effective plastic strain. The effective plastic strain is computed by using the non-associative flow rule in combination with appropriate numerical methods. To compute the evolution of damage, a strain equivalent semi-coupled formulation is used, in which a load in one direction results in a stiffness reduction in multiple coordinate directions. A specific laminated composite is examined to demonstrate the process of characterizing and analyzing the response of a composite using the developed model.

  20. Road and Roadside Feature Extraction Using Imagery and LIDAR Data for Transportation Operation

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.; Romero, M. A.; Tarko, A.

    2015-03-01

    Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and features within proximity to the roads as input for evaluating and prioritizing new or improvement road projects. The information needed for a robust evaluation of road projects includes the road centerline, width, and extent together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and needs multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and existing spatially non-accurate ancillary road networks. We were able to extract 90.25% of a total of 23.6 miles of road networks together with the estimated road width, average grade along the road, and cross sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  1. Single neuron computation: from dynamical system to feature detector.

    PubMed

    Hong, Sungho; Agüera y Arcas, Blaise; Fairhall, Adrienne L

    2007-12-01

    White noise methods are a powerful tool for characterizing the computation performed by neural systems. These methods allow one to identify the feature or features that a neural system extracts from a complex input and to determine how these features are combined to drive the system's spiking response. These methods have also been applied to characterize the input-output relations of single neurons driven by synaptic inputs, simulated by direct current injection. To interpret the results of white noise analysis of single neurons, we would like to understand how the obtained feature space of a single neuron maps onto the biophysical properties of the membrane, in particular, the dynamics of ion channels. Here, through analysis of a simple dynamical model neuron, we draw explicit connections between the output of a white noise analysis and the underlying dynamical system. We find that under certain assumptions, the form of the relevant features is well defined by the parameters of the dynamical system. Further, we show that under some conditions, the feature space is spanned by the spike-triggered average and its successive order time derivatives.
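
    A toy numpy sketch of the white-noise approach discussed in this record: estimate the spike-triggered average (STA) of a simulated threshold neuron driven by Gaussian noise current. The leaky-integrator "neuron", its threshold, and all constants are made up for illustration and are not a biophysical model.

```python
# Spike-triggered average of a toy leaky integrate-and-reset unit under white noise.
import numpy as np

rng = np.random.default_rng(0)
dt, T, tau, thresh = 1e-3, 200.0, 20e-3, 0.3
n = int(T / dt)
stim = rng.normal(size=n)

v, spikes = 0.0, []
for i in range(n):
    v += dt / tau * (-v + stim[i])      # leaky integration of the noise current
    if v > thresh:
        spikes.append(i)
        v = 0.0                         # reset after each "spike"

window = int(0.1 / dt)                  # look 100 ms back from each spike
sta = np.mean([stim[i - window:i] for i in spikes if i >= window], axis=0)
print(len(spikes), "spikes; STA peak", window - np.argmax(sta), "samples before the spike")
```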

  2. Inverse Proportional Relationship Between Switching-Time Length and Fractal-Like Structure for Continuous Tracking Movement

    NASA Astrophysics Data System (ADS)

    Hirakawa, Takehito; Suzuki, Hiroo; Gohara, Kazutoshi; Yamamoto, Yuji

    We investigate the relationship between the switching-time length T and the fractal-like feature that characterizes the behavior of dissipative dynamical systems excited by external temporal inputs for tracking movement. Seven healthy right-handed male participants were asked to continuously track light-emitting diodes that were located on the right and left sides in front of them. These movements were performed under two conditions: when the same input pattern was repeated (the periodic-input condition) and when two different input patterns were switched stochastically (the switching-input condition). The repeated time lengths of input patterns during these conditions were 2.00, 1.00, 0.75, 0.50, 0.35, and 0.25s. The movements of a lever held between a participant’s thumb and index finger were measured by a motion-capture system and were analyzed with respect to position and velocity. The condition in which the same input was repeated revealed that two different stable trajectories existed in a cylindrical state space, while the condition in which the inputs were switched induced transitions between these two trajectories. These two different trajectories were considered as excited attractors. The transitions between the two excited attractors produced eight trajectories; they were then characterized by a fractal-like feature as a third-order sequence effect. Moreover, correlation dimensions, which are typically used to evaluate fractal-like features, calculated from the set on the Poincaré section increased as the switching-time length T decreased. These results suggest that an inverse proportional relationship exists between the switching-time length T and the fractal-like feature of human movement.

  3. LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone.

    PubMed

    Nguyen, Phong Ha; Arsalan, Muhammad; Koo, Ja Hyung; Naqvi, Rizwan Ali; Truong, Noi Quang; Park, Kang Ryoung

    2018-05-24

    Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO that extracts trained features from an input image to predict a marker's location from the drone's visible-light camera. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.

  4. A multiple distributed representation method based on neural network for biomedical event extraction.

    PubMed

    Wang, Anran; Wang, Jian; Lin, Hongfei; Zhang, Jianhai; Yang, Zhihao; Xu, Kan

    2017-12-20

    Biomedical event extraction is one of the frontier domains of biomedical research. The two main subtasks of biomedical event extraction are trigger identification and argument detection, both of which can be considered classification problems. However, traditional state-of-the-art methods are based on support vector machines (SVMs) with massive manually designed one-hot features, which require enormous work but lack semantic relations among words. In this paper, we propose a multiple distributed representation method for biomedical event extraction. The method combines context, in the form of dependency-based word embeddings, with task-based features represented in a distributed way, and uses them as the input to train deep learning models. Finally, we use a softmax classifier to label the example candidates. The experimental results on the Multi-Level Event Extraction (MLEE) corpus show higher F-scores of 77.97% in trigger identification and 58.31% overall compared to the state-of-the-art SVM method. Our distributed representation method for biomedical event extraction avoids the problems of the semantic gap and dimension disaster associated with traditional one-hot representation methods. The promising results demonstrate that our proposed method is effective for biomedical event extraction.

  5. Neural-adaptive control of single-master-multiple-slaves teleoperation for coordinated multiple mobile manipulators with time-varying communication delays and input uncertainties.

    PubMed

    Li, Zhijun; Su, Chun-Yi

    2013-09-01

    In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.

  6. Reconstruction of nonlinear wave propagation

    DOEpatents

    Fleischer, Jason W; Barsi, Christopher; Wan, Wenjie

    2013-04-23

    Disclosed are systems and methods for characterizing a nonlinear propagation environment by numerically propagating a measured output waveform resulting from a known input waveform. The numerical propagation reconstructs the input waveform, and in the process, the nonlinear environment is characterized. In certain embodiments, knowledge of the characterized nonlinear environment facilitates determination of an unknown input based on a measured output. Similarly, knowledge of the characterized nonlinear environment also facilitates formation of a desired output based on a configurable input. In both situations, the input thus characterized and the output thus obtained include features that would normally be lost in linear propagation. Such features can include evanescent waves and peripheral waves, such that an image thus obtained is an inherently wide-angle, far-field form of microscopy.

  7. Pilot study of a novel tool for input-free automated identification of transition zone prostate tumors using T2- and diffusion-weighted signal and textural features.

    PubMed

    Stember, Joseph N; Deng, Fang-Ming; Taneja, Samir S; Rosenkrantz, Andrew B

    2014-08-01

    To present results of a pilot study to develop software that identifies regions suspicious for prostate transition zone (TZ) tumor, free of user input. Eight patients with TZ tumors were used to develop the model by training a Naïve Bayes classifier to detect tumors based on selection of the most accurate predictors among various signal and textural features on T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. Features tested as inputs were: average signal, signal standard deviation, energy, contrast, correlation, homogeneity, and entropy (all defined on T2WI); and average ADC. In training cases, the software tiled the TZ with 4 × 4-voxel "supervoxels," 80% of which were used to train the classifier. A forward selection scheme was then used on the remaining 20% of training-set supervoxels to identify important inputs. Each of 100 iterations selected T2WI energy and average ADC, which were therefore deemed the optimal model inputs. The trained two-feature model was applied blindly to a separate set of ten test patients, half with TZ tumors, again without operator input of suspicious foci. The software correctly predicted the presence or absence of TZ tumor in all test patients. Furthermore, locations of predicted tumors corresponded spatially with locations of biopsies that had confirmed their presence. Preliminary findings suggest that this tool has potential to accurately predict TZ tumor presence and location, without operator input. © 2013 Wiley Periodicals, Inc.
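
    A scikit-learn sketch of the two-feature classification step described above: a Gaussian Naive Bayes model over (T2-weighted energy, mean ADC) per supervoxel. The numbers are synthetic and only illustrate the classifier, not the image processing or feature selection.

```python
# Naive Bayes over two assumed supervoxel features: T2WI energy and mean ADC.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Assumed trend: tumor supervoxels show lower ADC and lower T2 energy.
normal = np.column_stack([rng.normal(1.0, 0.2, 300),       # T2WI energy
                          rng.normal(1.4e-3, 2e-4, 300)])  # ADC (mm^2/s)
tumor  = np.column_stack([rng.normal(0.6, 0.2, 60),
                          rng.normal(0.9e-3, 2e-4, 60)])
X = np.vstack([normal, tumor])
y = np.array([0] * 300 + [1] * 60)

clf = GaussianNB().fit(X, y)
suspicious = clf.predict_proba([[0.55, 0.95e-3]])[0, 1]
print(f"P(tumor) for a low-energy, low-ADC supervoxel: {suspicious:.2f}")
```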

  8. Adaptive reconfigurable V-BLAST type equalizer for cognitive MIMO-OFDM radios

    NASA Astrophysics Data System (ADS)

    Ozden, Mehmet Tahir

    2015-12-01

    An adaptive channel-shortening equalizer design for multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) radio receivers is considered in this work. The proposed receiver has desirable features for cognitive and software-defined radio implementations. It consists of two sections: a MIMO decision feedback equalizer (MIMO-DFE) and adaptive multiple Viterbi detection. In the MIMO-DFE section, a complete modified Gram-Schmidt orthogonalization of the multichannel input data is accomplished using sequential-processing multichannel Givens lattice stages, so that a Vertical Bell Laboratories Layered Space Time (V-BLAST) type MIMO-DFE is realized at the front-end section of the channel-shortening equalizer. Matrix operations, a major bottleneck for receiver operations, are accordingly avoided, and only scalar operations are used. A highly modular and regular radio receiver architecture is achieved, with a structure suitable for digital signal processing (DSP) chip and field-programmable gate array (FPGA) implementations, which are important for software-defined radio realizations. The MIMO-DFE section of the proposed receiver can also be reconfigured for spectrum sensing and positioning functions, which are important tasks for cognitive radio applications. In the adaptive multiple Viterbi detection section, a systolic array implementation is performed for each channel so that a receiver architecture with high computational concurrency is attained. The total computational complexity is given in terms of the equalizer and desired-response filter lengths, alphabet size, and number of antennas. The performance of the proposed receiver is presented for the two-channel case by means of mean squared error (MSE) and probability of error evaluations, conducted for time-invariant and time-variant channel conditions, orthogonal and nonorthogonal transmissions, and two different modulation schemes.

  9. Conditional High-Order Boltzmann Machines for Supervised Relation Learning.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu

    2017-09-01

    Relation learning is a fundamental problem in many vision tasks. Recently, high-order Boltzmann machine and its variants have shown their great potentials in learning various types of data relation in a range of tasks. But most of these models are learned in an unsupervised way, i.e., without using relation class labels, which are not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal to perform supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann Machine (CHBM), which can learn to classify the data relation in a binary classification way. To be able to deal with more complex data relation, we develop two improved variants of CHBM: 1) latent CHBM, which jointly performs relation feature learning and classification, by using a set of latent variables to block the pathway from pairwise input samples to output relation labels and 2) gated CHBM, which untangles factors of variation in data relation, by exploiting a set of latent variables to multiplicatively gate the classification of CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize high-order parameter tensors into multiple matrices. Then, we develop efficient supervised learning algorithms, by first pretraining the models using joint likelihood to provide good parameter initialization, and then finetuning them using conditional likelihood to enhance the discriminant ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that by exploiting supervised relation labels, our models can greatly improve the performance.

  10. Sinda/Fluint Stratfied Tank Modeling

    NASA Technical Reports Server (NTRS)

    Sakowski, Barbara A.

    2014-01-01

    A general purpose SINDA/FLUINT (S/F) stratified tank model was created and used to simulate the Ksite1 LH2 liquid self-pressurization tests as well as axial jet mixing within the liquid region of the tank. The S/F model employed stratified layers, i.e., S/F lumps, in the vapor ullage as well as in the liquid region. The model was constructed to analyze a general purpose stratified tank incorporating the following features: multiple or singular lumps in the liquid and vapor regions of the tank; real gases (including mixtures) and compressible liquids; venting, pressurizing, and draining; condensation and evaporation/boiling; wall heat transfer; and elliptical, cylindrical, and spherical tank geometries. Extensive user logic was used to allow tailoring of the above features to specific cases. Most of the code input for a specific case could be done through the Registers Data Block.

  11. Fusion of Heterogeneous Intrusion Detection Systems for Network Attack Detection

    PubMed Central

    Kaliappan, Jayakumar; Thiagarajan, Revathi; Sundararajan, Karpagam

    2015-01-01

    An intrusion detection system (IDS) helps to identify different types of attacks in general, and the detection rate will be higher for some specific category of attacks. This paper is designed on the idea that each IDS is efficient in detecting a specific type of attack. In the proposed Multiple IDS Unit (MIU), there are five IDS units, and each IDS follows a unique algorithm to detect attacks. Feature selection is done with the help of a genetic algorithm. The selected features of the input traffic are passed on to the MIU for processing. The decision from each IDS is termed a local decision. The fusion unit inside the MIU processes all the local decisions with the help of a majority voting rule and makes the final decision. The proposed system shows a very good improvement in detection rate and reduces the false alarm rate. PMID:26295058
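
    A minimal sketch of the fusion step: each detector returns a local 0/1 decision and the fusion unit takes a majority vote. The five "IDS units" are stubbed out here; in the paper each runs a different detection algorithm on GA-selected features.

```python
# Majority-vote fusion of local detector decisions.
import numpy as np

def fuse(local_decisions):
    """Majority vote over an array of shape (n_detectors, n_samples)."""
    votes = np.asarray(local_decisions)
    return (votes.sum(axis=0) > votes.shape[0] / 2).astype(int)

# Stub decisions from five detectors on four traffic flows.
local = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 0],
         [1, 1, 0, 0],
         [0, 1, 1, 1]]
print(fuse(local))   # -> [0 1 1 0]
```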

  13. A modal parameter extraction procedure applicable to linear time-invariant dynamic systems

    NASA Technical Reports Server (NTRS)

    Kurdila, A. J.; Craig, R. R., Jr.

    1985-01-01

    Modal analysis has emerged as a valuable tool in many phases of the engineering design process. Complex vibration and acoustic problems in new designs can often be remedied through use of the method. Moreover, the technique has been used to enhance the conceptual understanding of structures by serving to verify analytical models. A new modal parameter estimation procedure is presented. The technique is applicable to linear, time-invariant systems and accommodates multiple input excitations. In order to provide a background for the derivation of the method, some modal parameter extraction procedures currently in use are described. Key features implemented in the new technique are elaborated upon.

  14. Research on oral test modeling based on multi-feature fusion

    NASA Astrophysics Data System (ADS)

    Shi, Yuliang; Tao, Yiyue; Lei, Jun

    2018-04-01

    In this paper, the spectrogram of the speech signal is taken as the input for feature extraction. The strengths of the PCNN in image segmentation and related processing are used to process the speech spectrogram and extract features, exploring a new method that combines speech signal processing and image processing. In addition to the spectrogram features, MFCC-based spectral features are established and fused with the spectrogram features to further improve the accuracy of spoken-language assessment. Considering that the input features are relatively complex and discriminative, we use a Support Vector Machine (SVM) to construct the classifier, and then compare the extracted test voice features with the standard voice features to detect how standard the spoken language is. Experiments show that the method of extracting features from spectrograms using the PCNN is feasible, and that the fusion of image features and spectral features can improve the detection accuracy.

  15. Automatic detection and recognition of multiple macular lesions in retinal optical coherence tomography images with multi-instance multilabel learning

    NASA Astrophysics Data System (ADS)

    Fang, Leyuan; Yang, Liumao; Li, Shutao; Rabbani, Hossein; Liu, Zhimin; Peng, Qinghua; Chen, Xiangdong

    2017-06-01

    Detection and recognition of macular lesions in optical coherence tomography (OCT) are very important for retinal diseases diagnosis and treatment. As one kind of retinal disease (e.g., diabetic retinopathy) may contain multiple lesions (e.g., edema, exudates, and microaneurysms) and eye patients may suffer from multiple retinal diseases, multiple lesions often coexist within one retinal image. Therefore, one single-lesion-based detector may not support the diagnosis of clinical eye diseases. To address this issue, we propose a multi-instance multilabel-based lesions recognition (MIML-LR) method for the simultaneous detection and recognition of multiple lesions. The proposed MIML-LR method consists of the following steps: (1) segment the regions of interest (ROIs) for different lesions, (2) compute descriptive instances (features) for each lesion region, (3) construct multilabel detectors, and (4) recognize each ROI with the detectors. The proposed MIML-LR method was tested on 823 clinically labeled OCT images with normal macular and macular with three common lesions: epiretinal membrane, edema, and drusen. For each input OCT image, our MIML-LR method can automatically identify the number of lesions and assign the class labels, achieving the average accuracy of 88.72% for the cases with multiple lesions, which better assists macular disease diagnosis and treatment.

  16. Using multiple-accumulator CMACs to improve efficiency of the X part of an input-buffered FX correlator

    NASA Astrophysics Data System (ADS)

    Lapshev, Stepan; Hasan, S. M. Rezaul

    2017-04-01

    This paper presents the approach of using complex multiplier-accumulators (CMACs) with multiple accumulators to reduce the total number of memory operations in an input-buffered architecture for the X part of an FX correlator. A processing unit of this architecture uses an array of CMACs that are reused for different groups of baselines. The disadvantage of processing correlations in this way is that each input data sample has to be read multiple times from the memory because each input signal is used in many of these baseline groups. While a one-accumulator CMAC cannot switch to a different baseline until it is finished integrating the current one, a multiple-accumulator CMAC can. Thus, the array of multiple-accumulator CMACs can switch between processing different baselines that share some input signals at any moment to reuse the current data in the processing buffers. In this way significant reductions in the number of memory read operations are achieved with only a few accumulators per CMAC. For example, for a large number of input signals three-accumulator CMACs reduce the total number of memory operations by more than a third. Simulated energy measurements of four VLSI designs in a high-performance 28 nm CMOS technology are presented in this paper to demonstrate that using multiple accumulators can also lead to reduced power dissipation of the processing array. Using three accumulators as opposed to one has been found to reduce the overall energy of 8-bit CMACs by 1.4% through the reduction of the switching activity within their circuits, which is in addition to a more than 30% reduction in the memory.

  17. Comparison of Vehicle Choice Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephens, Thomas S.; Levinson, Rebecca S.; Brooker, Aaron

    Five consumer vehicle choice models that give projections of future sales shares of light-duty vehicles were compared by running each model using the same inputs, where possible, for two scenarios. The five models compared (LVCFlex, MA3T, LAVE-Trans, ParaChoice, and ADOPT) have been used in support of the Energy Efficiency and Renewable Energy (EERE) Vehicle Technologies Office in analyses of future light-duty vehicle markets under different assumptions about future vehicle technologies and market conditions. The models give projections of sales shares by powertrain technology. Projections made using common, but not identical, inputs showed qualitative agreement, with the exception of ADOPT. ADOPT estimated somewhat lower advanced vehicle shares, mostly composed of hybrid electric vehicles. Other models projected large shares of multiple advanced vehicle powertrains. Projections of the models differed in significant ways, including how different technologies penetrated cars and light trucks. Since the models are constructed differently and take different inputs, not all inputs were identical, but they were the same or very similar where possible. Projections by all models were in close agreement only in the first few years. Although the projections from LVCFlex, MA3T, LAVE-Trans, and ParaChoice were in qualitative agreement, there were significant differences in the sales shares given by the different models for individual powertrain types, particularly in later years (2030 and later). For example, projected sales shares of conventional spark-ignition vehicles in 2030 for a given scenario ranged from 35% to 74%. Reasons for such differences are discussed, recognizing that these models were not developed to give quantitatively accurate predictions of future sales shares, but to represent vehicle markets realistically and capture the connections between sales and important influences. Model features were also compared at a high level, and suggestions for further comparison of models are given to enable better understanding of how different features and algorithms used in these models may give different projections.

  18. Operator control systems and methods for swing-free gantry-style cranes

    DOEpatents

    Feddema, J.T.; Petterson, B.J.; Robinett, R.D. III

    1998-07-28

    A system and method are disclosed for eliminating swing motions in gantry-style cranes while subject to operator control. The present invention comprises an infinite impulse response (IIR) filter and a proportional-integral (PI) feedback controller. The IIR filter receives input signals (commanded velocity or acceleration) from an operator input device and transforms them into output signals in such a fashion that the resulting motion is swing free (i.e., end-point swinging prevented). The parameters of the IIR filter are updated in real time using measurements from a hoist cable length encoder. The PI feedback controller compensates for modeling errors and external disturbances, such as wind or perturbations caused by collision with objects. The PI feedback controller operates on cable swing angle measurements provided by a cable angle sensor. The present invention adjusts acceleration and deceleration to eliminate oscillations. An especially important feature of the present invention is that it compensates for variable-length cable motions from multiple cables attached to a suspended payload. 10 figs.
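
    A rough Python sketch of the two ingredients named in this record: an IIR filter that reshapes the operator's velocity command, and a PI term driven by the measured cable swing angle. The first-order smoother, its coefficient, and the gains are placeholders; the patented design additionally updates the filter in real time from the measured cable length.

```python
# Command shaping (first-order IIR) plus a PI swing-angle correction, with toy values.
import numpy as np
from scipy.signal import lfilter

dt = 0.01
t = np.arange(0, 10, dt)
commanded = np.where((t > 1) & (t < 4), 1.0, 0.0)        # operator's step velocity command

# First-order IIR smoother: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]
alpha = 0.02
shaped = lfilter([alpha], [1, -(1 - alpha)], commanded)

def pi_correction(theta, integ, kp=2.0, ki=0.5):
    """PI term driven by the measured cable swing angle theta (rad)."""
    integ += theta * dt
    return kp * theta + ki * integ, integ

theta, integ = 0.05, 0.0                                  # example residual swing
corr, integ = pi_correction(theta, integ)
setpoint = shaped[-1] - corr
print(f"shaped command {shaped[-1]:.3f}, PI correction {corr:.3f}, setpoint {setpoint:.3f}")
```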

  19. Operator control systems and methods for swing-free gantry-style cranes

    DOEpatents

    Feddema, John T.; Petterson, Ben J.; Robinett, III, Rush D.

    1998-01-01

    A system and method for eliminating swing motions in gantry-style cranes while subject to operator control is presented. The present invention comprises an infinite impulse response ("IIR") filter and a proportional-integral ("PI") feedback controller (50). The IIR filter receives input signals (46) (commanded velocity or acceleration) from an operator input device (45) and transforms them into output signals (47) in such a fashion that the resulting motion is swing free (i.e., end-point swinging prevented). The parameters of the IIR filter are updated in real time using measurements from a hoist cable length encoder (25). The PI feedback controller compensates for modeling errors and external disturbances, such as wind or perturbations caused by collision with objects. The PI feedback controller operates on cable swing angle measurements provided by a cable angle sensor (27). The present invention adjusts acceleration and deceleration to eliminate oscillations. An especially important feature of the present invention is that it compensates for variable-length cable motions from multiple cables attached to a suspended payload.

  20. Transfer Learning for Improved Audio-Based Human Activity Recognition.

    PubMed

    Ntalampiras, Stavros; Potamitis, Ilyas

    2018-06-25

    Human activities are accompanied by characteristic sound events, the processing of which might provide valuable information for automated human activity recognition. This paper presents a novel approach addressing the case where one or more human activities are associated with limited audio data, resulting in a potentially highly imbalanced dataset. Data augmentation is based on transfer learning; more specifically, the proposed method: (a) identifies the classes which are statistically close to the ones associated with limited data; (b) learns a multiple input, multiple output transformation; and (c) transforms the data of the closest classes so that it can be used for modeling the ones associated with limited data. Furthermore, the proposed framework includes a feature set extracted out of signal representations of diverse domains, i.e., temporal, spectral, and wavelet. Extensive experiments demonstrate the relevance of the proposed data augmentation approach under a variety of generative recognition schemes.
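
    A numpy sketch of steps (b) and (c) described above: learn a multiple-input, multiple-output map from feature vectors of a data-rich "close" class to the few available vectors of the limited class, then use it to transform more close-class data as augmentation. A plain least-squares linear map stands in for whatever transformation the authors actually learn.

```python
# Least-squares MIMO transformation used as a data-augmentation stand-in.
import numpy as np

rng = np.random.default_rng(0)
d = 12                                          # feature dimension
A_true = np.eye(d) + 0.1 * rng.normal(size=(d, d))

close_feats = rng.normal(size=(200, d))         # plenty of data for the statistically close class
limited_feats = close_feats[:15] @ A_true.T     # only 15 examples of the data-poor class

# Fit W minimizing ||close[:15] @ W - limited||^2, then transform the rest as augmentation.
W, *_ = np.linalg.lstsq(close_feats[:15], limited_feats, rcond=None)
augmented = close_feats[15:] @ W                # synthetic examples for the data-poor class
print("augmented set shape:", augmented.shape)
```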

  1. Region-Based Prediction for Image Compression in the Cloud.

    PubMed

    Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine

    2018-04-01

    Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to efficiently compress images by exploiting inter-images correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with the current image inter-coding solutions such as high efficiency video coding.

  2. A theory of cerebellar cortex and adaptive motor control based on two types of universal function approximation capability.

    PubMed

    Fujita, Masahiko

    2016-03-01

    Lesions of the cerebellum result in large errors in movements. The cerebellum adaptively controls the strength and timing of motor command signals depending on the internal and external environments of movements. The present theory describes how the cerebellar cortex can control signals for accurate and timed movements. A model network of the cerebellar Golgi and granule cells is shown to be equivalent to a multiple-input (from mossy fibers) hierarchical neural network with a single hidden layer of threshold units (granule cells) that receive a common recurrent inhibition (from a Golgi cell). The weighted sum of the hidden unit signals (Purkinje cell output) is theoretically analyzed regarding the capability of the network to perform two types of universal function approximation. The hidden units begin firing as the excitatory inputs exceed the recurrent inhibition. This simple threshold feature leads to the first approximation theory, and the network final output can be any continuous function of the multiple inputs. When the input is constant, this output becomes stationary. However, when the recurrent unit activity is triggered to decrease or the recurrent inhibition is triggered to increase through a certain mechanism (metabotropic modulation or extrasynaptic spillover), the network can generate any continuous signals for a prolonged period of change in the activity of recurrent signals, as the second approximation theory shows. By incorporating the cerebellar capability of two such types of approximations to a motor system, in which learning proceeds through repeated movement trials with accompanying corrections, accurate and timed responses for reaching the target can be adaptively acquired. Simple models of motor control can solve the motor error vs. sensory error problem, as well as the structural aspects of credit (or error) assignment problem. Two physiological experiments are proposed for examining the delay and trace conditioning of eyelid responses, as well as saccade adaptation, to investigate this novel idea of cerebellar processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
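
    A toy numpy illustration of the first approximation property described above: a single layer of threshold units sharing a common inhibition level, with only the output weights trained, can fit a continuous function of the input. Least-squares fitting of the output weights stands in for whatever learning rule the theory assumes, and all sizes and the target function are arbitrary.

```python
# Random threshold "granule cells" with common inhibition; fit "Purkinje" output weights.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)[:, None]            # 1-D mossy-fiber input for illustration
target = np.sin(2 * np.pi * x[:, 0])           # continuous function to approximate

n_units = 100
w_in = rng.normal(size=(1, n_units))           # random input weights per hidden unit
bias = rng.uniform(-1, 1, size=n_units)
inhibition = 0.2                               # common recurrent inhibition level

H = (x @ w_in + bias - inhibition > 0).astype(float)   # threshold-unit activations
w_out, *_ = np.linalg.lstsq(H, target, rcond=None)     # output (Purkinje-like) weights
print("max |error| of the fit:", np.max(np.abs(H @ w_out - target)))
```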

  3. Inferring Nonlinear Neuronal Computation Based on Physiologically Plausible Inputs

    PubMed Central

    McFarland, James M.; Cui, Yuwei; Butts, Daniel A.

    2013-01-01

    The computation represented by a sensory neuron's response to stimuli is constructed from an array of physiological processes both belonging to that neuron and inherited from its inputs. Although many of these physiological processes are known to be nonlinear, linear approximations are commonly used to describe the stimulus selectivity of sensory neurons (i.e., linear receptive fields). Here we present an approach for modeling sensory processing, termed the Nonlinear Input Model (NIM), which is based on the hypothesis that the dominant nonlinearities imposed by physiological mechanisms arise from rectification of a neuron's inputs. Incorporating such ‘upstream nonlinearities’ within the standard linear-nonlinear (LN) cascade modeling structure implicitly allows for the identification of multiple stimulus features driving a neuron's response, which become directly interpretable as either excitatory or inhibitory. Because its form is analogous to an integrate-and-fire neuron receiving excitatory and inhibitory inputs, model fitting can be guided by prior knowledge about the inputs to a given neuron, and elements of the resulting model can often result in specific physiological predictions. Furthermore, by providing an explicit probabilistic model with a relatively simple nonlinear structure, its parameters can be efficiently optimized and appropriately regularized. Parameter estimation is robust and efficient even with large numbers of model components and in the context of high-dimensional stimuli with complex statistical structure (e.g. natural stimuli). We describe detailed methods for estimating the model parameters, and illustrate the advantages of the NIM using a range of example sensory neurons in the visual and auditory systems. We thus present a modeling framework that can capture a broad range of nonlinear response functions while providing physiologically interpretable descriptions of neural computation. PMID:23874185
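
    A toy numpy sketch of the model structure described above: the response is a spiking nonlinearity applied to a weighted sum of rectified filter outputs, so each input term is either excitatory (positive weight) or suppressive (negative weight). The filters, weights, and nonlinearity here are arbitrary stand-ins; fitting them from data is what the NIM framework itself addresses.

```python
# NIM-style generative structure: rectified subunits -> weighted sum -> spiking nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
T, L = 5000, 20                                          # time bins, filter length
stim = rng.normal(size=T)
S = np.array([stim[t - L:t] for t in range(L, T)])       # row t = last L stimulus samples

k_exc = np.exp(-np.arange(L) / 5.0)[::-1]                # excitatory temporal filter
k_inh = np.concatenate([np.zeros(3), k_exc[:-3]])        # delayed suppressive filter
relu = lambda u: np.maximum(u, 0.0)

g = 1.0 * relu(S @ k_exc) - 0.8 * relu(S @ k_inh)        # weighted, rectified subunit outputs
rate = np.log1p(np.exp(g - 1.0))                         # soft-rectifying spiking nonlinearity
spikes = rng.poisson(0.1 * rate)
print("mean spike count per bin:", spikes.mean())
```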

  4. ESIM: Edge Similarity for Screen Content Image Quality Assessment.

    PubMed

    Ni, Zhangkai; Ma, Lin; Zeng, Huanqiang; Chen, Jing; Cai, Canhui; Ma, Kai-Kuang

    2017-10-01

    In this paper, an accurate full-reference image quality assessment (IQA) model developed for assessing screen content images (SCIs), called the edge similarity (ESIM) model, is proposed. It is inspired by the fact that the human visual system (HVS) is highly sensitive to edges, which are often encountered in SCIs; therefore, essential edge features are extracted and exploited for conducting IQA of the SCIs. The key novelty of the proposed ESIM lies in the extraction and use of three salient edge features: edge contrast, edge width, and edge direction. The first two attributes are simultaneously generated from the input SCI based on a parametric edge model, while the last one is derived directly from the input SCI. The extraction of these three features is performed for the reference SCI and the distorted SCI individually. The degree of similarity measured for each above-mentioned edge attribute is then computed independently, followed by combining them together using our proposed edge-width pooling strategy to generate the final ESIM score. To conduct the performance evaluation of our proposed ESIM model, a new and the largest SCI database (denoted SCID) was established in our work and made publicly available for download. Our database contains 1800 distorted SCIs that are generated from 40 reference SCIs. For each SCI, nine distortion types are investigated, and five degradation levels are produced for each distortion type. Extensive simulation results have clearly shown that the proposed ESIM model is more consistent with the perception of the HVS in the evaluation of distorted SCIs than multiple state-of-the-art IQA methods.

  5. Using Differential Evolution to Optimize Learning from Signals and Enhance Network Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmer, Paul K; Temple, Michael A; Buckner, Mark A

    2011-01-01

    Computer and communication network attacks are commonly orchestrated through Wireless Access Points (WAPs). This paper summarizes proof-of-concept research activity aimed at developing a physical layer Radio Frequency (RF) air monitoring capability to limit unauthorized WAP access and improve network security. This is done using Differential Evolution (DE) to optimize the performance of a Learning from Signals (LFS) classifier implemented with RF Distinct Native Attribute (RF-DNA) fingerprints. Performance of the resultant DE-optimized LFS classifier is demonstrated using 802.11a WiFi devices under the most challenging conditions of intra-manufacturer classification, i.e., using emissions of like-model devices that differ only in serial number. Using identical classifier input features, performance of the DE-optimized LFS classifier is assessed relative to a Multiple Discriminant Analysis / Maximum Likelihood (MDA/ML) classifier that has been used for previous demonstrations. The comparative assessment is made using both Time Domain (TD) and Spectral Domain (SD) fingerprint features. For all combinations of classifier type, feature type, and signal-to-noise ratio considered, results show that the DE-optimized LFS classifier with TD features is superior and provides up to 20% improvement in classification accuracy with proper selection of DE parameters.
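
    A sketch of the optimization loop: scipy's differential evolution searches classifier hyperparameters to maximize cross-validated accuracy on fingerprint-style features. The classifier here is a stand-in (an RBF SVM), not the paper's Learning-from-Signals classifier, and the data are synthetic.

```python
# Differential evolution tuning a stand-in classifier on synthetic fingerprint features.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

def objective(params):
    log_C, log_gamma = params
    clf = SVC(C=10 ** log_C, gamma=10 ** log_gamma)
    return -cross_val_score(clf, X, y, cv=3).mean()   # DE minimizes, so negate accuracy

result = differential_evolution(objective, bounds=[(-2, 3), (-4, 1)],
                                maxiter=10, popsize=8, seed=0, tol=1e-3)
print("best log10(C), log10(gamma):", result.x, "accuracy:", -result.fun)
```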

  6. IMAGE 100: The interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Schaller, E. S.; Towles, R. W.

    1975-01-01

    The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.

  7. Word-level recognition of multifont Arabic text using a feature vector matching approach

    NASA Astrophysics Data System (ADS)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
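
    A small numpy sketch of the word-level matching described above: each lexicon word is stored as several feature vectors (e.g. one per font), a query word image's vector is matched against all of them, and the best-scoring words are returned as hypotheses. The vectors here are random stand-ins for the image-morphological features, and the example words are arbitrary.

```python
# Nearest-vector word hypotheses from a multi-variant lexicon database.
import numpy as np

rng = np.random.default_rng(0)
lexicon = {w: [rng.normal(size=16) for _ in range(3)]     # 3 font variants per word
           for w in ["kitab", "qalam", "madrasa"]}

def hypotheses(query_vec, lexicon, top_k=2):
    scores = []
    for word, variants in lexicon.items():
        best = min(np.linalg.norm(query_vec - v) for v in variants)  # best variant per word
        scores.append((best, word))
    return [w for _, w in sorted(scores)[:top_k]]

query = lexicon["qalam"][1] + 0.1 * rng.normal(size=16)   # noisy view of "qalam"
print(hypotheses(query, lexicon))                          # "qalam" should rank first
```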

  8. Multi-valued logic gates based on ballistic transport in quantum point contacts.

    PubMed

    Seo, M; Hong, C; Lee, S-Y; Choi, H K; Kim, N; Chung, Y; Umansky, V; Mahalu, D

    2014-01-22

    Multi-valued logic gates, which can handle quaternary numbers as inputs, are developed by exploiting the ballistic transport properties of quantum point contacts in series. The principle of a logic gate that finds the minimum of two quaternary number inputs is demonstrated. The device is scalable to allow multiple inputs, which makes it possible to find the minimum of multiple inputs in a single gate operation. Also, the principle of a half-adder for quaternary number inputs is demonstrated. First, an adder that adds up two quaternary numbers and outputs the sum of inputs is demonstrated. Second, a device to express the sum of the adder into two quaternary digits [Carry (first digit) and Sum (second digit)] is demonstrated. All the logic gates presented in this paper can in principle be extended to allow decimal number inputs with high quality QPCs.

  9. Understanding Evolutionary Potential in Virtual CPU Instruction Set Architectures

    PubMed Central

    Bryson, David M.; Ofria, Charles

    2013-01-01

    We investigate fundamental decisions in the design of instruction set architectures for linear genetic programs that are used as both model systems in evolutionary biology and underlying solution representations in evolutionary computation. We subjected digital organisms with each tested architecture to seven different computational environments designed to present a range of evolutionary challenges. Our goal was to engineer a general purpose architecture that would be effective under a broad range of evolutionary conditions. We evaluated six different types of architectural features for the virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more precisely modify the function of genetic instructions, (2) memory: we provided an increased number of registers in the virtual CPUs, (3) decoupled sensors and actuators: we separated input and output operations to enable greater control over data flow. We also tested a variety of methods to regulate expression: (4) explicit labels that allow programs to dynamically refer to specific genome positions, (5) position-relative search instructions, and (6) multiple new flow control instructions, including conditionals and jumps. Each of these features also adds complication to the instruction set and risks slowing evolution due to epistatic interactions. Two features (multiple argument specification and separated I/O) demonstrated substantial improvements in the majority of test environments, along with versions of each of the remaining architecture modifications that show significant improvements in multiple environments. However, some tested modifications were detrimental, though most exhibit no systematic effects on evolutionary potential, highlighting the robustness of digital evolution. Combined, these observations enhance our understanding of how instruction architecture impacts evolutionary potential, enabling the creation of architectures that support more rapid evolution of complex solutions to a broad range of challenges. PMID:24376669

  10. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-06-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, inputs of rotation and translation are intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  11. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-01-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, inputs of rotation and translation are intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  12. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    NASA Astrophysics Data System (ADS)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings heavily depend on manual feature extraction and feature selection. To address this, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, the DRNN is constructed by stacking recurrent hidden layers to automatically extract features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that the proposed method is more effective than traditional intelligent fault diagnosis methods.
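    A hedged sketch of the spectrum-sequence pipeline, assuming PyTorch; the segment length, layer sizes, and the use of Adam as a stand-in for the paper's adaptive learning-rate scheme are illustrative assumptions.

      import numpy as np
      import torch
      import torch.nn as nn

      def spectrum_sequence(vibration, seg_len=1024, n_segs=8):
          """Split a 1-D vibration signal into segments and take magnitude spectra."""
          segs = np.asarray(vibration)[:seg_len * n_segs].reshape(n_segs, seg_len)
          return np.abs(np.fft.rfft(segs, axis=1))            # (n_segs, seg_len // 2 + 1)

      class SpectrumDRNN(nn.Module):
          def __init__(self, spec_dim, hidden=128, layers=2, n_classes=4):
              super().__init__()
              self.rnn = nn.LSTM(spec_dim, hidden, num_layers=layers, batch_first=True)
              self.out = nn.Linear(hidden, n_classes)

          def forward(self, x):                               # x: (batch, n_segs, spec_dim)
              h, _ = self.rnn(x)
              return self.out(h[:, -1])                       # classify from the last step

      model = SpectrumDRNN(spec_dim=513)
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # adaptive step sizes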

  13. Deep convolutional neural network based antenna selection in multiple-input multiple-output system

    NASA Astrophysics Data System (ADS)

    Cai, Jiaxin; Li, Yan; Hu, Ying

    2018-03-01

    Antenna selection in wireless communication systems has attracted increasing attention due to the challenge of keeping a balance between communication performance and computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, the label of each attenuation-coefficient channel matrix is generated by minimizing the key performance indicator of the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues of the attenuation coefficients is learned on the training antenna systems. Finally, we use the trained deep convolutional neural network to classify the channel matrix labels of test antennas and select the optimal antenna subset. Simulation results demonstrate that our method achieves better performance than state-of-the-art baselines for data-driven wireless antenna selection.
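    The labelling step lends itself to a short sketch: assuming the common capacity criterion (the abstract only says "key performance indicator"), each channel matrix is tagged with the transmit-antenna subset that maximizes capacity, and those labels would then supervise the network.

      import numpy as np
      from itertools import combinations

      def best_subset_label(H, k=2, snr=10.0):
          """H: (n_rx, n_tx) complex channel matrix; return index of the best k-antenna subset."""
          n_rx, n_tx = H.shape
          subsets = list(combinations(range(n_tx), k))
          caps = []
          for cols in subsets:
              Hs = H[:, cols]
              m = np.eye(n_rx) + (snr / k) * Hs @ Hs.conj().T
              caps.append(np.log2(np.real(np.linalg.det(m))))   # MIMO capacity of this subset
          return int(np.argmax(caps))                           # class label for the CNN

      H = (np.random.randn(4, 8) + 1j * np.random.randn(4, 8)) / np.sqrt(2)
      label = best_subset_label(H)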

  14. Domain-General Brain Regions Do Not Track Linguistic Input as Closely as Language-Selective Regions.

    PubMed

    Blank, Idan A; Fedorenko, Evelina

    2017-10-11

    Language comprehension engages a cortical network of left frontal and temporal regions. Activity in this network is language-selective, showing virtually no modulation by nonlinguistic tasks. In addition, language comprehension engages a second network consisting of bilateral frontal, parietal, cingulate, and insular regions. Activity in this "multiple demand" (MD) network scales with comprehension difficulty, but also with cognitive effort across a wide range of nonlinguistic tasks in a domain-general fashion. Given the functional dissociation between the language and MD networks, their respective contributions to comprehension are likely distinct, yet such differences remain elusive. Prior neuroimaging studies have suggested that activity in each network covaries with some linguistic features that, behaviorally, influence on-line processing and comprehension. This sensitivity of the language and MD networks to local input characteristics has often been interpreted, implicitly or explicitly, as evidence that both networks track linguistic input closely, and in a manner consistent across individuals. Here, we used fMRI to directly test this assumption by comparing the BOLD signal time courses in each network across different people ( n = 45, men and women) listening to the same story. Language network activity showed fewer individual differences, indicative of closer input tracking, whereas MD network activity was more idiosyncratic and, moreover, showed lower reliability within an individual across repetitions of a story. These findings constrain cognitive models of language comprehension by suggesting a novel distinction between the processes implemented in the language and MD networks. SIGNIFICANCE STATEMENT Language comprehension recruits both language-specific mechanisms and domain-general mechanisms that are engaged in many cognitive processes. In the human cortex, language-selective mechanisms are implemented in the left-lateralized "core language network", whereas domain-general mechanisms are implemented in the bilateral "multiple demand" (MD) network. Here, we report the first direct comparison of the respective contributions of these networks to naturalistic story comprehension. Using a novel combination of neuroimaging approaches we find that MD regions track stories less closely than language regions. This finding constrains the possible contributions of the MD network to comprehension, contrasts with accounts positing that this network has continuous access to linguistic input, and suggests a new typology of comprehension processes based on their extent of input tracking. Copyright © 2017 the authors 0270-6474/17/3710000-13$15.00/0.
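    One way to make "tracking the input closely" concrete is a leave-one-out inter-subject correlation of regional time courses, sketched below; this is a generic illustration, not necessarily the exact statistic used in the study.

      import numpy as np

      def leave_one_out_isc(timecourses):
          """timecourses: (n_subjects, n_timepoints) BOLD signal from one region, same story."""
          n = timecourses.shape[0]
          iscs = []
          for i in range(n):
              others = np.delete(timecourses, i, axis=0).mean(axis=0)
              iscs.append(np.corrcoef(timecourses[i], others)[0, 1])
          return float(np.mean(iscs))     # higher value = less idiosyncratic, closer tracking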

  15. Domain-General Brain Regions Do Not Track Linguistic Input as Closely as Language-Selective Regions

    PubMed Central

    Fedorenko, Evelina

    2017-01-01

    Language comprehension engages a cortical network of left frontal and temporal regions. Activity in this network is language-selective, showing virtually no modulation by nonlinguistic tasks. In addition, language comprehension engages a second network consisting of bilateral frontal, parietal, cingulate, and insular regions. Activity in this “multiple demand” (MD) network scales with comprehension difficulty, but also with cognitive effort across a wide range of nonlinguistic tasks in a domain-general fashion. Given the functional dissociation between the language and MD networks, their respective contributions to comprehension are likely distinct, yet such differences remain elusive. Prior neuroimaging studies have suggested that activity in each network covaries with some linguistic features that, behaviorally, influence on-line processing and comprehension. This sensitivity of the language and MD networks to local input characteristics has often been interpreted, implicitly or explicitly, as evidence that both networks track linguistic input closely, and in a manner consistent across individuals. Here, we used fMRI to directly test this assumption by comparing the BOLD signal time courses in each network across different people (n = 45, men and women) listening to the same story. Language network activity showed fewer individual differences, indicative of closer input tracking, whereas MD network activity was more idiosyncratic and, moreover, showed lower reliability within an individual across repetitions of a story. These findings constrain cognitive models of language comprehension by suggesting a novel distinction between the processes implemented in the language and MD networks. SIGNIFICANCE STATEMENT Language comprehension recruits both language-specific mechanisms and domain-general mechanisms that are engaged in many cognitive processes. In the human cortex, language-selective mechanisms are implemented in the left-lateralized “core language network”, whereas domain-general mechanisms are implemented in the bilateral “multiple demand” (MD) network. Here, we report the first direct comparison of the respective contributions of these networks to naturalistic story comprehension. Using a novel combination of neuroimaging approaches we find that MD regions track stories less closely than language regions. This finding constrains the possible contributions of the MD network to comprehension, contrasts with accounts positing that this network has continuous access to linguistic input, and suggests a new typology of comprehension processes based on their extent of input tracking. PMID:28871034

  16. Role of intraglomerular circuits in shaping temporally structured responses to naturalistic inhalation-driven sensory input to the olfactory bulb

    PubMed Central

    Carey, Ryan M.; Sherwood, William Erik; Shipley, Michael T.; Borisyuk, Alla

    2015-01-01

    Olfaction in mammals is a dynamic process driven by the inhalation of air through the nasal cavity. Inhalation determines the temporal structure of sensory neuron responses and shapes the neural dynamics underlying central olfactory processing. Inhalation-linked bursts of activity among olfactory bulb (OB) output neurons [mitral/tufted cells (MCs)] are temporally transformed relative to those of sensory neurons. We investigated how OB circuits shape inhalation-driven dynamics in MCs using a modeling approach that was highly constrained by experimental results. First, we constructed models of canonical OB circuits that included mono- and disynaptic feedforward excitation, recurrent inhibition and feedforward inhibition of the MC. We then used experimental data to drive inputs to the models and to tune parameters; inputs were derived from sensory neuron responses during natural odorant sampling (sniffing) in awake rats, and model output was compared with recordings of MC responses to odorants sampled with the same sniff waveforms. This approach allowed us to identify OB circuit features underlying the temporal transformation of sensory inputs into inhalation-linked patterns of MC spike output. We found that realistic input-output transformations can be achieved independently by multiple circuits, including feedforward inhibition with slow onset and decay kinetics and parallel feedforward MC excitation mediated by external tufted cells. We also found that recurrent and feedforward inhibition had differential impacts on MC firing rates and on inhalation-linked response dynamics. These results highlight the importance of investigating neural circuits in a naturalistic context and provide a framework for further explorations of signal processing by OB networks. PMID:25717156

  17. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification

    PubMed Central

    Yang, Xinyi

    2016-01-01

    In recent years, some deep learning methods have been developed and applied to image classification applications, such as convolutional neuron network (CNN) and deep belief network (DBN). However they are suffering from some problems like local minima, slow convergence rate, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and fast training of ELM. It uses multiple alternate convolution layers and pooling layers to effectively abstract high level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to reduce dimensionality of features greatly, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128
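    The ELM stage is the part that makes training fast: hidden weights are random and fixed, and the output weights come from a single pseudoinverse. A minimal sketch, omitting the convolutional/pooling front end and using illustrative sizes:

      import numpy as np

      def train_elm(X, T, n_hidden=500, seed=0):
          """X: (n_samples, n_features) pooled features; T: (n_samples, n_classes) one-hot targets."""
          rng = np.random.default_rng(seed)
          W = rng.standard_normal((X.shape[1], n_hidden))   # random, fixed hidden weights
          b = rng.standard_normal(n_hidden)
          H = np.tanh(X @ W + b)                            # hidden-layer activations
          beta = np.linalg.pinv(H) @ T                      # closed-form output weights
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)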

  18. Multiple-Primitives Hierarchical Classification of Airborne Laser Scanning Data in Urban Areas

    NASA Astrophysics Data System (ADS)

    Ni, H.; Lin, X. G.; Zhang, J. X.

    2017-09-01

    A hierarchical classification method for Airborne Laser Scanning (ALS) data of urban areas is proposed in this paper. This method is composed of three stages in which three types of primitives are utilized, i.e., smooth surface, rough surface, and individual point. In the first stage, the input ALS data is divided into smooth surfaces and rough surfaces by employing a step-wise point cloud segmentation method. In the second stage, classification based on smooth surfaces and rough surfaces is performed. Points in the smooth surfaces are first classified into ground and buildings based on semantic rules. Next, features of rough surfaces are extracted. Then, points in rough surfaces are classified into vegetation and vehicles based on the derived features and Random Forests (RF). In the third stage, point-based features are extracted for the ground points, and then, an individual point classification procedure is performed to classify the ground points into bare land, artificial ground and greenbelt. Moreover, the shortcomings of existing studies are analyzed, and experiments show that the proposed method overcomes these shortcomings and handles more types of objects.

  19. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification.

    PubMed

    Pang, Shan; Yang, Xinyi

    2016-01-01

    In recent years, some deep learning methods have been developed and applied to image classification applications, such as convolutional neuron network (CNN) and deep belief network (DBN). However they are suffering from some problems like local minima, slow convergence rate, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and fast training of ELM. It uses multiple alternate convolution layers and pooling layers to effectively abstract high level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to reduce dimensionality of features greatly, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods.

  20. WORM - WINDOWED OBSERVATION OF RELATIVE MOTION

    NASA Technical Reports Server (NTRS)

    Bauer, F.

    1994-01-01

    The Windowed Observation of Relative Motion, WORM, program is primarily intended for the generation of simple X-Y plots from data created by other programs. It allows the user to label, zoom, and change the scale of various plots. Three-dimensional contour and line plots are provided, although with more limited capabilities. The input data can be in binary or ASCII format, although all data must be in the same format. A great deal of control over the details of the plot is provided, such as gridding, size of tick marks, colors, log/semilog capability, time tagging, and multiple and phase plane plots. Many color and monochrome graphics terminals and hard copy printer/plotters are supported. The WORM executive commands, menu selections and macro files can be used to develop plots and tabular data, query the WORM Help library, retrieve data from input files, and invoke VAX DCL commands. WORM-generated plots are displayed on local graphics terminals and can be copied using standard hard copy capabilities. Some of the graphics features of WORM include: zooming and dezooming various portions of the plot; plot documentation including curve labeling and function listing; multiple curves on the same plot; windowing of multiple plots and insets of the same plot; displaying a specific point on a curve; and spinning the curve left, right, up, and down. WORM is written in PASCAL for interactive execution and has been implemented on a DEC VAX computer operating under VMS 4.7 with a virtual memory requirement of approximately 392K of 8-bit bytes. It uses the QPLOT device-independent graphics library included with WORM. It was developed in 1988.

  1. Unattended Multiplicity Shift Register

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newell, Matt; Jones, David C.

    2017-01-16

    The Unattended Multiplicity Shift Register (UMSR) is a specialized pulse counter used primarily to count neutron events originating in neutron detection instruments. While the counter can be used to count any TTL input pulses, its unique ability to record time-correlated events and the multiplicity distributions of these events makes it an ideal instrument for counting neutron events in the nuclear fields of material safeguards, waste assay and process monitoring and control. The UMSR combines the Los Alamos National Laboratory (LANL) simple and robust shift register design with a Commercial-Off-The-Shelf (COTS) processor and Ethernet communications. The UMSR is fully compatible with existing International Atomic Energy Agency (IAEA) neutron data acquisition instruments such as the Advanced Multiplicity Shift Register (AMSR) and JSR-15. The UMSR has three input channels: a multiplicity shift register input and two auxiliary inputs. The UMSR provides 0 V to 2 kV of programmable High Voltage (HV) bias and both a 12 V and a 5 V detector power supply output. A serial over USB communication line to the UMSR allows the use of existing versions of INCC or MIC software while the Ethernet port is compatible with the new IAEA RAINSTORM communication protocol.

  2. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
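    One widely used natural-scene-statistics computation of the kind alluded to above is the mean-subtracted, contrast-normalized (MSCN) transform, whose coefficient distribution is highly regular for undistorted images; the sketch below is a generic illustration, not the paper's exact feature set.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def mscn(image, sigma=7/6, c=1e-3):
          """Locally mean-subtracted, contrast-normalized coefficients of a grayscale image."""
          image = np.asarray(image, float)
          mu = gaussian_filter(image, sigma)
          var = np.maximum(gaussian_filter(image**2, sigma) - mu**2, 0.0)
          return (image - mu) / (np.sqrt(var) + c)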

  3. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
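    The final interpolation step can be sketched directly: the locally optimal (output, input) pairs produced by the population are turned into a continuous inverse function with a spline, so the operator only adjusts a desired output. The sample points below are toy values, not results from the paper.

      import numpy as np
      from scipy.interpolate import CubicSpline

      # each row of optimal_inputs is the locally optimal input vector found for the
      # corresponding desired output value (toy numbers)
      optimal_outputs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
      optimal_inputs = np.array([[0.1, 0.9], [0.3, 0.8], [0.6, 0.6], [0.8, 0.4], [1.0, 0.2]])

      inverse_fn = CubicSpline(optimal_outputs, optimal_inputs)   # desired output -> optimal input
      u_star = inverse_fn(1.2)            # optimal 2-D input for a set point of 1.2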

  4. Continuous and simultaneous estimation of finger kinematics using inputs from an EMG-to-muscle activation model.

    PubMed

    Ngeo, Jimson G; Tamei, Tomoya; Shibata, Tomohiro

    2014-08-14

    Surface electromyography (EMG) signals are often used in many robot and rehabilitation applications because they reflect users' motor intentions very well. However, very few studies have focused on the accurate and proportional control of the human hand using EMG signals. Many have focused on discrete gesture classification and some have encountered inherent problems such as electro-mechanical delays (EMD). Here, we present a new method for estimating simultaneous and multiple finger kinematics from multi-channel surface EMG signals. In this study, surface EMG signals from the forearm and finger kinematic data were extracted from ten able-bodied subjects while they were tasked to do individual and simultaneous multiple finger flexion and extension movements in free space. Instead of using traditional time-domain features of EMG, an EMG-to-Muscle Activation model that parameterizes EMD was used and shown to give better estimation performance. A fast feedforward artificial neural network (ANN) and a nonparametric Gaussian Process (GP) regressor were both used and evaluated to estimate complex finger kinematics, with the latter rarely used in related literature. The estimation accuracies, in terms of mean correlation coefficient, were 0.85 ± 0.07, 0.78 ± 0.06 and 0.73 ± 0.04 for the metacarpophalangeal (MCP), proximal interphalangeal (PIP) and the distal interphalangeal (DIP) finger joint DOFs, respectively. The mean root-mean-square error in each individual DOF ranged from 5 to 15%. We show that estimation improved using the proposed muscle activation inputs compared to other features, and that using GP regression gave better estimation results when using fewer training samples. The proposed method provides a viable means of capturing the general trend of finger movements and shows a good way of estimating finger joint kinematics using a muscle activation model that parameterizes EMD. The results from this study demonstrate a potential control strategy based on EMG that can be applied for simultaneous and continuous control of multiple-DOF devices such as robotic hand/finger prostheses or exoskeletons.
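    A hedged sketch of the two-stage idea: EMG is first passed through an activation model with an explicit electromechanical delay, and the activations are then regressed onto joint angles with a Gaussian process. The recursive-filter form, its coefficients, and the delay below are placeholder assumptions, not the parameters identified in the study.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def emg_to_activation(e, delay=40, c1=-0.5, c2=-0.2):
          """Second-order recursive activation model with an electromechanical delay (in samples)."""
          e = np.asarray(e, float)
          a = np.zeros_like(e)
          alpha = 1.0 + c1 + c2                     # keeps unit steady-state gain
          for t in range(len(e)):
              drive = e[t - delay] if t >= delay else 0.0
              a1 = a[t - 1] if t >= 1 else 0.0
              a2 = a[t - 2] if t >= 2 else 0.0
              a[t] = alpha * drive - c1 * a1 - c2 * a2
          return a

      # activations: (n_samples, n_muscles); angles: (n_samples, n_joints)
      # gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(activations, angles)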

  5. The prediction of airborne and structure-borne noise potential for a tire

    NASA Astrophysics Data System (ADS)

    Sakamoto, Nicholas Y.

    Tire/pavement interaction noise is a major component of both exterior pass-by noise and vehicle interior noise. The current testing methods for ranking tires from loud to quiet require expensive equipment, multiple tires, and/or long experimental set-up and run times. If a laboratory based off-vehicle test could be used to identify the airborne and structure-borne potential of a tire from its dynamic characteristics, a relative ranking of a large group of tires could be performed at relatively modest expense. This would provide a smaller sample set of tires for follow-up testing and thus save expense for automobile OEMs. The focus of this research was identifying key noise features from a tire/pavement experiment. These results were compared against a stationary tire test in which the natural response of the tire to a forced input was measured. Since speed was identified as having some effect on the noise, an input function was also developed to allow the tires to be ranked at an appropriate speed. A relative noise model was used on a second sample set of tires to verify if the ranking could be used against interior vehicle measurements. While overall level analysis of the specified spectrum had mixed success, important noise generating features were identified, and the methods used could be improved to develop a standard off-vehicle test to predict a tire's noise potential.

  6. Modeling Agricultural Watersheds with the Soil and Water Assessment Tool (SWAT): Calibration and Validation with a Novel Procedure for Spatially Explicit HRUs.

    PubMed

    Teshager, Awoke Dagnew; Gassman, Philip W; Secchi, Silvia; Schoof, Justin T; Misgna, Girmaye

    2016-04-01

    Applications of the Soil and Water Assessment Tool (SWAT) model typically involve delineation of a watershed into subwatersheds/subbasins that are then further subdivided into hydrologic response units (HRUs) which are homogeneous areas of aggregated soil, landuse, and slope and are the smallest modeling units used within the model. In a given standard SWAT application, multiple potential HRUs (farm fields) in a subbasin are usually aggregated into a single HRU feature. In other words, the standard version of the model combines multiple potential HRUs (farm fields) with the same landuse/landcover, soil, and slope, but located at different places of a subbasin (spatially non-unique), and considers them as one HRU. In this study, ArcGIS pre-processing procedures were developed to spatially define a one-to-one match between farm fields and HRUs (spatially unique HRUs) within a subbasin prior to SWAT simulations to facilitate input processing, input/output mapping, and further analysis at the individual farm field level. Model input data such as landuse/landcover (LULC), soil, crop rotation, and other management data were processed through these HRUs. The SWAT model was then calibrated/validated for Raccoon River watershed in Iowa for 2002-2010 and Big Creek River watershed in Illinois for 2000-2003. SWAT was able to replicate annual, monthly, and daily streamflow, as well as sediment, nitrate and mineral phosphorous within recommended accuracy in most cases. The one-to-one match between farm fields and HRUs created and used in this study is a first step in performing LULC change, climate change impact, and other analyses in a more spatially explicit manner.

  7. Modeling Agricultural Watersheds with the Soil and Water Assessment Tool (SWAT): Calibration and Validation with a Novel Procedure for Spatially Explicit HRUs

    NASA Astrophysics Data System (ADS)

    Teshager, Awoke Dagnew; Gassman, Philip W.; Secchi, Silvia; Schoof, Justin T.; Misgna, Girmaye

    2016-04-01

    Applications of the Soil and Water Assessment Tool (SWAT) model typically involve delineation of a watershed into subwatersheds/subbasins that are then further subdivided into hydrologic response units (HRUs) which are homogeneous areas of aggregated soil, landuse, and slope and are the smallest modeling units used within the model. In a given standard SWAT application, multiple potential HRUs (farm fields) in a subbasin are usually aggregated into a single HRU feature. In other words, the standard version of the model combines multiple potential HRUs (farm fields) with the same landuse/landcover, soil, and slope, but located at different places of a subbasin (spatially non-unique), and considers them as one HRU. In this study, ArcGIS pre-processing procedures were developed to spatially define a one-to-one match between farm fields and HRUs (spatially unique HRUs) within a subbasin prior to SWAT simulations to facilitate input processing, input/output mapping, and further analysis at the individual farm field level. Model input data such as landuse/landcover (LULC), soil, crop rotation, and other management data were processed through these HRUs. The SWAT model was then calibrated/validated for Raccoon River watershed in Iowa for 2002-2010 and Big Creek River watershed in Illinois for 2000-2003. SWAT was able to replicate annual, monthly, and daily streamflow, as well as sediment, nitrate and mineral phosphorous within recommended accuracy in most cases. The one-to-one match between farm fields and HRUs created and used in this study is a first step in performing LULC change, climate change impact, and other analyses in a more spatially explicit manner.

  8. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.
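    The per-level energy features and the entropy-guided choice of resolution depth can be sketched with a hand-rolled Haar DWT; the entropy threshold and maximum level count below are illustrative assumptions, not the algorithm's tuned values.

      import numpy as np

      def haar_dwt(x):
          """One Haar analysis step: returns (approximation, detail) coefficients."""
          x = x[:len(x) // 2 * 2]
          return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

      def wavelet_energy_features(signal, max_levels=5, entropy_floor=1.0):
          feats, approx = [], np.asarray(signal, float)
          for _ in range(max_levels):
              approx, detail = haar_dwt(approx)
              p = detail**2 / (np.sum(detail**2) + 1e-12)
              entropy = -np.sum(p * np.log2(p + 1e-12))     # significance of this level
              feats.append(np.sum(detail**2))               # energy feature for this level
              if entropy < entropy_floor:                   # stop adding weak resolution levels
                  break
          return np.array(feats)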

  9. Peak-Seeking Control Using Gradient and Hessian Estimates

    NASA Technical Reports Server (NTRS)

    Ryan, John J.; Speyer, Jason L.

    2010-01-01

    A peak-seeking control method is presented which utilizes a linear time-varying Kalman filter. Performance function coordinate and magnitude measurements are used by the Kalman filter to estimate the gradient and Hessian of the performance function. The gradient and Hessian are used to command the system toward a local extremum. The method is naturally applied to multiple-input multiple-output systems. Applications of this technique to a single-input single-output example and a two-input one-output example are presented.
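    A simplified sketch of the idea, with a local quadratic fit standing in for the Kalman-filter gradient/Hessian estimator: recent coordinate and magnitude measurements yield gradient and Hessian estimates, and the next command is a Newton step toward the extremum (shown for two inputs and one output).

      import numpy as np

      def gradient_hessian_estimates(X, y):
          """Fit a local quadratic to recent samples; X: (n, 2) inputs, y: (n,) measurements."""
          dx = X - X[-1]                                    # expand about the latest point
          A = np.column_stack([np.ones(len(X)), dx[:, 0], dx[:, 1],
                               0.5 * dx[:, 0]**2, dx[:, 0] * dx[:, 1], 0.5 * dx[:, 1]**2])
          c = np.linalg.lstsq(A, y, rcond=None)[0]
          grad = c[1:3]
          hess = np.array([[c[3], c[4]], [c[4], c[5]]])
          return grad, hess

      def next_command(X, y):
          grad, hess = gradient_hessian_estimates(X, y)
          return X[-1] - np.linalg.solve(hess, grad)        # Newton step toward the local extremum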

  10. Modal control of an oblique wing aircraft

    NASA Technical Reports Server (NTRS)

    Phillips, James D.

    1989-01-01

    A linear modal control algorithm is applied to the NASA Oblique Wing Research Aircraft (OWRA). The control law is evaluated using a detailed nonlinear flight simulation. It is shown that the modal control law attenuates the coupling and nonlinear aerodynamics of the oblique wing and remains stable during control saturation caused by large command inputs or large external disturbances. The technique controls each natural mode independently allowing single-input/single-output techniques to be applied to multiple-input/multiple-output systems.
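    A toy numeric sketch of modal decoupling (not the OWRA model): the state matrix is diagonalized, damping is added to each natural mode independently, and the per-mode gains are mapped back to physical inputs, so single-loop design carries over to the multi-input system.

      import numpy as np

      A = np.array([[0.0, 1.0],
                    [-2.0, -3.0]])           # two real natural modes at -1 and -2
      B = np.eye(2)

      lam, V = np.linalg.eig(A)
      order = np.argsort(lam)                # fix the mode ordering: [-2, -1]
      lam, V = lam[order], V[:, order]

      k_modal = np.diag([1.0, 0.5])          # extra damping chosen independently per mode
      K = np.linalg.inv(B) @ V @ k_modal @ np.linalg.inv(V)

      A_cl = A - B @ K                       # closed loop: each mode shifts to lam_i - k_i
      print(np.sort(np.linalg.eigvals(A_cl)))    # approximately [-3.0, -1.5]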

  11. Low-Rank Discriminant Embedding for Multiview Learning.

    PubMed

    Li, Jingjing; Wu, Yue; Zhao, Jidong; Lu, Ke

    2017-11-01

    This paper focuses on the specific problem of multiview learning where samples have the same feature set but different probability distributions, e.g., different viewpoints or different modalities. Since samples lying in different distributions cannot be compared directly, this paper aims to learn a latent subspace shared by multiple views assuming that the input views are generated from this latent subspace. Previous approaches usually learn the common subspace by either maximizing the empirical likelihood, or preserving the geometric structure. However, considering the complementarity between the two objectives, this paper proposes a novel approach, named low-rank discriminant embedding (LRDE), for multiview learning by taking full advantage of both sides. By further considering the duality between data points and features of multiview scene, i.e., data points can be grouped based on their distribution on features, while features can be grouped based on their distribution on the data points, LRDE not only deploys low-rank constraints on both sample level and feature level to dig out the shared factors across different views, but also preserves geometric information in both the ambient sample space and the embedding feature space by designing a novel graph structure under the framework of graph embedding. Finally, LRDE jointly optimizes low-rank representation and graph embedding in a unified framework. Comprehensive experiments in both multiview manner and pairwise manner demonstrate that LRDE performs much better than previous approaches proposed in recent literatures.

  12. Artificial Neural Network Based Fault Diagnostics of Rolling Element Bearings Using Time-Domain Features

    NASA Astrophysics Data System (ADS)

    Samanta, B.; Al-Balushi, K. R.

    2003-03-01

    A procedure is presented for fault diagnosis of rolling element bearings through artificial neural network (ANN). The characteristic features of time-domain vibration signals of the rotating machinery with normal and defective bearings have been used as inputs to the ANN consisting of input, hidden and output layers. The features are obtained from direct processing of the signal segments using very simple preprocessing. The input layer consists of five nodes, one each for root mean square, variance, skewness, kurtosis and normalised sixth central moment of the time-domain vibration signals. The inputs are normalised in the range of 0.0 and 1.0 except for the skewness which is normalised between -1.0 and 1.0. The output layer consists of two binary nodes indicating the status of the machine—normal or defective bearings. Two hidden layers with different number of neurons have been used. The ANN is trained using backpropagation algorithm with a subset of the experimental data for known machine conditions. The ANN is tested using the remaining set of data. The effects of some preprocessing techniques like high-pass, band-pass filtration, envelope detection (demodulation) and wavelet transform of the vibration signals, prior to feature extraction, are also studied. The results show the effectiveness of the ANN in diagnosis of the machine condition. The proposed procedure requires only a few features extracted from the measured vibration data either directly or with simple preprocessing. The reduced number of inputs leads to faster training requiring far less iterations making the procedure suitable for on-line condition monitoring and diagnostics of machines.
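    The five time-domain features and their normalisation are concrete enough to sketch; the network sizes in the comment are illustrative, not the layer counts used in the paper.

      import numpy as np
      from scipy.stats import skew, kurtosis
      from sklearn.neural_network import MLPClassifier

      def bearing_features(segment):
          x = np.asarray(segment, float)
          m, s = x.mean(), x.std()
          rms = np.sqrt(np.mean(x**2))
          sixth = np.mean((x - m)**6) / s**6                # normalised sixth central moment
          return np.array([rms, x.var(), skew(x), kurtosis(x, fisher=False), sixth])

      # After scaling features to [0, 1] (skewness to [-1, 1]), a small network is trained:
      # clf = MLPClassifier(hidden_layer_sizes=(12, 6)).fit(F_train, y_train)   # y: 0 normal, 1 defective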

  13. Design of Robust Controllers for a Multiple Input-Multiple Output Control System with Uncertain Parameters Application to the Lateral and Longitudinal Modes of the KC-135 Transport Aircraft

    DTIC Science & Technology

    1984-12-01

    The controller design bounds b_kj are obtained from the design specifications on each input/output relationship (10:681-684). The first digit of the subscript of b_kj refers to the output and the second digit to the input; thus, b_kj is a function of the response requirements on the output y_k due to the input r_j.

  14. Phonological Feature Re-Assembly and the Importance of Phonetic Cues

    ERIC Educational Resources Information Center

    Archibald, John

    2009-01-01

    It is argued that new phonological features can be acquired in second languages, but that both feature acquisition and feature re-assembly are affected by the robustness of phonetic cues in the input.

  15. Efficient RNA structure comparison algorithms.

    PubMed

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures. This problem has a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another one for automatically drawing the entire RNA structure from a given structure sequence.
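    The suffix-array search idea generalizes to any string encoding of structures; the sketch below uses a toy dot-bracket encoding and plain binary search over sorted suffixes, not the paper's [Formula: see text] format.

      from bisect import bisect_left, bisect_right

      def build_suffix_array(s):
          return sorted(range(len(s)), key=lambda i: s[i:])

      def find_substructure(s, sa, query):
          """Return the start positions of all occurrences of query in the structure string s."""
          suffixes = [s[i:] for i in sa]                    # suffixes in sorted order
          lo = bisect_left(suffixes, query)
          hi = bisect_right(suffixes, query + "\uffff")     # end of the block sharing this prefix
          return sorted(sa[lo:hi])

      structure = "((..))..((...))"                         # toy dot-bracket encoding
      sa = build_suffix_array(structure)
      print(find_substructure(structure, sa, "((."))        # -> [0, 8]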

  16. Evidence for Working Memory Storage Operations in Perceptual Cortex

    PubMed Central

    Sreenivasan, Kartik K.; Gratton, Caterina; Vytlacil, Jason; D’Esposito, Mark

    2014-01-01

    Isolating the short-term storage component of working memory (WM) from the myriad of associated executive processes has been an enduring challenge. Recent efforts have identified patterns of activity in visual regions that contain information about items being held in WM. However, it remains unclear (i) whether these representations withstand intervening sensory input and (ii) how communication between multimodal association cortex and unimodal perceptual regions supporting WM representations is involved in WM storage. We present evidence that the features of a face held in WM are stored within face processing regions, that these representations persist across subsequent sensory input, and that information about the match between sensory input and memory representation is relayed forward from perceptual to prefrontal regions. Participants were presented with a series of probe faces and indicated whether each probe matched a Target face held in WM. We parametrically varied the feature similarity between probe and Target faces. Activity within face processing regions scaled linearly with the degree of feature similarity between the probe face and the features of the Target face, suggesting that the features of the Target face were stored in these regions. Furthermore, directed connectivity measures revealed that the direction of information flow that was optimal for performance was from sensory regions that stored the features of the Target face to dorsal prefrontal regions, supporting the notion that sensory input is compared to representations stored within perceptual regions and relayed forward. Together, these findings indicate that WM storage operations are carried out within perceptual cortex. PMID:24436009

  17. Ergodic channel capacity of spatial correlated multiple-input multiple-output free space optical links using multipulse pulse-position modulation

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Cao, Minghua

    2017-02-01

    Spatial correlation is widespread in multiple-input multiple-output (MIMO) free-space optical (FSO) communication systems due to channel fading and antenna spacing limitations. Wilkinson's method was utilized to investigate the impact of spatial correlation on a MIMO FSO communication system employing multipulse pulse-position modulation. Simulation results show that the presence of spatial correlation reduces the ergodic channel capacity, and that reception diversity is more effective at resisting this kind of performance degradation.

  18. Overview of multi-input frequency domain modal testing methods with an emphasis on sine testing

    NASA Technical Reports Server (NTRS)

    Rost, Robert W.; Brown, David L.

    1988-01-01

    An overview of current state-of-the-art multiple-input, multiple-output modal testing technology is presented. A very brief review of current time-domain methods is given. A detailed review of frequency- and spatial-domain methods is presented, with an emphasis on sine testing.

  19. Economies of Scale and Scope in Australian Higher Education

    ERIC Educational Resources Information Center

    Worthington, A. C.; Higgs, H.

    2011-01-01

    This paper estimates economies of scale and scope for 36 Australian universities using a multiple-input, multiple-output cost function over the period 1998-2006. The three inputs included in the analysis are full-time equivalent academic and non-academic staff and physical capital. The five outputs are undergraduate, postgraduate and PhD…

  20. Logarithmic r-θ mapping for hybrid optical neural network filter for multiple objects recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.

    2009-04-01

    The window unit in the design of the complex logarithmic r-θ mapping for hybrid optical neural network filter can allow multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter on the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. Logarithmic r-θ mapping for hybrid optical neural network filter is shown to exhibit, with a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, and good clutter tolerance, by correctly recognizing the different objects within the cluttered scenes. Our results also record additional information extracted from the cluttered scenes about the objects' relative position, scale and in-plane rotation.
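    The r-θ mapping itself is easy to sketch: image coordinates are resampled onto a log-radius/angle grid, so in-plane rotation becomes a circular shift along the θ axis and scale a shift along the log-r axis, which is what gives the filter its invariances. The sampling resolution and interpolation order below are assumptions.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def log_r_theta_map(image, n_r=64, n_theta=128):
          h, w = image.shape
          cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
          log_r = np.linspace(0.0, np.log(np.hypot(cy, cx)), n_r)
          theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
          rr, tt = np.meshgrid(np.exp(log_r), theta, indexing="ij")
          rows = cy + rr * np.sin(tt)                       # sample positions in the input image
          cols = cx + rr * np.cos(tt)
          return map_coordinates(image, [rows, cols], order=1, mode="constant")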

  1. Multi-Target Regression via Robust Low-Rank Learning.

    PubMed

    Zhen, Xiantong; Yu, Mengyang; He, Xiaofei; Li, Shuo

    2018-02-01

    Multi-target regression has recently regained great popularity due to its capability of simultaneously learning multiple relevant regression tasks and its wide applications in data mining, computer vision and medical image analysis, while great challenges arise from jointly handling inter-target correlations and input-output relationships. In this paper, we propose Multi-layer Multi-target Regression (MMR) which enables simultaneously modeling intrinsic inter-target correlations and nonlinear input-output relationships in a general framework via robust low-rank learning. Specifically, the MMR can explicitly encode inter-target correlations in a structure matrix by matrix elastic nets (MEN); the MMR can work in conjunction with the kernel trick to effectively disentangle highly complex nonlinear input-output relationships; the MMR can be efficiently solved by a new alternating optimization algorithm with guaranteed convergence. The MMR leverages the strength of kernel methods for nonlinear feature learning and the structural advantage of multi-layer learning architectures for inter-target correlation modeling. More importantly, it offers a new multi-layer learning paradigm for multi-target regression which is endowed with high generality, flexibility and expressive ability. Extensive experimental evaluation on 18 diverse real-world datasets demonstrates that our MMR can achieve consistently high performance and outperforms representative state-of-the-art algorithms, which shows its great effectiveness and generality for multivariate prediction.

  2. Presentation planning using an integrated knowledge base

    NASA Technical Reports Server (NTRS)

    Arens, Yigal; Miller, Lawrence; Sondheimer, Norman

    1988-01-01

    A description is given of user interface research aimed at bringing together multiple input and output modes in a way that handles mixed mode input (commands, menus, forms, natural language), interacts with a diverse collection of underlying software utilities in a uniform way, and presents the results through a combination of output modes including natural language text, maps, charts and graphs. The system, Integrated Interfaces, derives much of its ability to interact uniformly with the user and the underlying services and to build its presentations, from the information present in a central knowledge base. This knowledge base integrates models of the application domain (Navy ships in the Pacific region, in the current demonstration version); the structure of visual displays and their graphical features; the underlying services (data bases and expert systems); and interface functions. The emphasis is on a presentation planner that uses the knowledge base to produce multi-modal output. There has been a flurry of recent work in user interface management systems. (Several recent examples are listed in the references). Existing work is characterized by an attempt to relieve the software designer of the burden of handcrafting an interface for each application. The work has generally focused on intelligently handling input. This paper deals with the other end of the pipeline - presentations.

  3. Multimodal switching of conformation and solubility in homocysteine derived polypeptides.

    PubMed

    Kramer, Jessica R; Deming, Timothy J

    2014-04-16

    We report the design and synthesis of poly(S-alkyl-L-homocysteine)s, which were found to be a new class of readily prepared, multiresponsive polymers that possess the unprecedented ability to respond in different ways to different stimuli, either through a change in chain conformation or in water solubility. The responsive properties of these materials are also effected under mild conditions and are completely reversible for all pathways. The key components of these polymers are the incorporation of water solubilizing alkyl functional groups that are integrated with precisely positioned, multiresponsive thioether linkages. This promising system allows multimodal switching of polypeptide properties to obtain desirable features, such as coupled responses to multiple external inputs.

  4. Security authentication with a three-dimensional optical phase code using random forest classifier: an overview

    NASA Astrophysics Data System (ADS)

    Markman, Adam; Carnicer, Artur; Javidi, Bahram

    2017-05-01

    We overview our recent work [1] on utilizing three-dimensional (3D) optical phase codes for object authentication using the random forest classifier. A simple 3D optical phase code (OPC) is generated by combining multiple diffusers and glass slides. This tag is then placed on a quick-response (QR) code, which is a barcode capable of storing information and can be scanned under non-uniform illumination conditions, rotation, and slight degradation. A coherent light source illuminates the OPC and the transmitted light is captured by a CCD to record the unique signature. Feature extraction on the signature is performed and inputted into a pre-trained random-forest classifier for authentication.
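    The classification stage can be sketched with scikit-learn; the statistical features of the recorded signature used below are illustrative placeholders, not the feature set of the cited work.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def signature_features(ccd_image):
          """Placeholder statistics of the recorded optical signature."""
          x = np.asarray(ccd_image, float).ravel()
          return np.array([x.mean(), x.std(), np.median(x),
                           np.percentile(x, 90), (x > x.mean()).mean()])

      # F_train: (n_samples, n_features); y_train: 1 = authentic phase tag, 0 = counterfeit
      # clf = RandomForestClassifier(n_estimators=200).fit(F_train, y_train)
      # decision = clf.predict(signature_features(new_capture).reshape(1, -1))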

  5. Gaze and Feet as Additional Input Modalities for Interacting with Geospatial Interfaces

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Hempel, J.; Brychtova, A.; Giannopoulos, I.; Stellmach, S.; Dachselt, R.

    2016-06-01

    Geographic Information Systems (GIS) are complex software environments and we often work with multiple tasks and multiple displays when we work with GIS. However, user input is still limited to mouse and keyboard in most workplace settings. In this project, we demonstrate how the use of gaze and feet as additional input modalities can overcome time-consuming and annoying mode switches between frequently performed tasks. In an iterative design process, we developed gaze- and foot-based methods for zooming and panning of map visualizations. We first collected appropriate gestures in a preliminary user study with a small group of experts, and designed two interaction concepts based on their input. After the implementation, we evaluated the two concepts comparatively in another user study to identify strengths and shortcomings in both. We found that continuous foot input combined with implicit gaze input is promising for supportive tasks.

  6. Nonlinear features for product inspection

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1999-03-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data.

  7. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    NASA Astrophysics Data System (ADS)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.

  8. Posterior Inferotemporal Cortex Cells Use Multiple Input Pathways for Shape Encoding.

    PubMed

    Ponce, Carlos R; Lomber, Stephen G; Livingstone, Margaret S

    2017-05-10

    In the macaque monkey brain, posterior inferior temporal (PIT) cortex cells contribute to visual object recognition. They receive concurrent inputs from visual areas V4, V3, and V2. We asked how these different anatomical pathways shape PIT response properties by deactivating them while monitoring PIT activity in two male macaques. We found that cooling of V4 or V2|3 did not lead to consistent changes in population excitatory drive; however, population pattern analyses showed that V4-based pathways were more important than V2|3-based pathways. We did not find any image features that predicted decoding accuracy differences between both interventions. Using the HMAX hierarchical model of visual recognition, we found that different groups of simulated "PIT" units with different input histories (lacking "V2|3" or "V4" input) allowed for comparable levels of object-decoding performance and that removing a large fraction of "PIT" activity resulted in similar drops in performance as in the cooling experiments. We conclude that distinct input pathways to PIT relay similar types of shape information, with V1-dependent V4 cells providing more quantitatively useful information for overall encoding than cells in V2 projecting directly to PIT. SIGNIFICANCE STATEMENT Convolutional neural networks are the best models of the visual system, but most emphasize input transformations across a serial hierarchy akin to the primary "ventral stream" (V1 → V2 → V4 → IT). However, the ventral stream also comprises parallel "bypass" pathways: V1 also connects to V4, and V2 to IT. To explore the advantages of mixing long and short pathways in the macaque brain, we used cortical cooling to silence inputs to posterior IT and compared the findings with an HMAX model with parallel pathways. Copyright © 2017 the authors 0270-6474/17/375019-16$15.00/0.

  9. Posterior Inferotemporal Cortex Cells Use Multiple Input Pathways for Shape Encoding

    PubMed Central

    2017-01-01

    In the macaque monkey brain, posterior inferior temporal (PIT) cortex cells contribute to visual object recognition. They receive concurrent inputs from visual areas V4, V3, and V2. We asked how these different anatomical pathways shape PIT response properties by deactivating them while monitoring PIT activity in two male macaques. We found that cooling of V4 or V2|3 did not lead to consistent changes in population excitatory drive; however, population pattern analyses showed that V4-based pathways were more important than V2|3-based pathways. We did not find any image features that predicted decoding accuracy differences between both interventions. Using the HMAX hierarchical model of visual recognition, we found that different groups of simulated “PIT” units with different input histories (lacking “V2|3” or “V4” input) allowed for comparable levels of object-decoding performance and that removing a large fraction of “PIT” activity resulted in similar drops in performance as in the cooling experiments. We conclude that distinct input pathways to PIT relay similar types of shape information, with V1-dependent V4 cells providing more quantitatively useful information for overall encoding than cells in V2 projecting directly to PIT. SIGNIFICANCE STATEMENT Convolutional neural networks are the best models of the visual system, but most emphasize input transformations across a serial hierarchy akin to the primary “ventral stream” (V1 → V2 → V4 → IT). However, the ventral stream also comprises parallel “bypass” pathways: V1 also connects to V4, and V2 to IT. To explore the advantages of mixing long and short pathways in the macaque brain, we used cortical cooling to silence inputs to posterior IT and compared the findings with an HMAX model with parallel pathways. PMID:28416597

  10. Ionospheric Tomography Using Faraday Rotation of Automatic Dependent Surveillance Broadcast UHF Signals

    NASA Astrophysics Data System (ADS)

    Cushley, A. C.

    2013-12-01

    The proposed launch of a satellite carrying the first space-borne ADS-B receiver by the Royal Military College of Canada (RMCC) will create a unique opportunity to study the modification of 1090 MHz radio waves following propagation through the ionosphere from the transmitting aircraft to the passive satellite receiver(s). Experimental work successfully demonstrated that ADS-B data can be used to reconstruct two-dimensional (2D) electron density maps of the ionosphere using computerized tomography (CT). The goal of this work is to evaluate the feasibility of CT reconstruction. The data are modelled using ray-tracing techniques, which allows us to determine the characteristics of individual waves, including the wave path and the state of polarization at the satellite receiver. The modelled Faraday rotation (FR) is determined and converted to total electron content (TEC) along the ray-paths. The resulting TEC is used as input for computerized ionospheric tomography (CIT) using the algebraic reconstruction technique (ART). This study concentrated on meso-scale structures 100-1000 km in horizontal extent. The primary scientific interest of this thesis was to show the feasibility of a new method to image the ionosphere and obtain a better understanding of magneto-ionic wave propagation. Figure caption: multiple-feature input electron density profile supplied to the ray-tracing program; top, reconstructed relative electron density map of the ray-trace input (Fig. 1) using TEC measurements and line-of-sight paths; bottom, reconstructed electron density map of the ray-trace input using a quiet-background a priori estimate.
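
    As a rough illustration of the algebraic reconstruction technique (ART) step mentioned above, the sketch below runs Kaczmarz-style sweeps on a toy geometry matrix; the ray-path matrix, relaxation factor and sweep count are illustrative assumptions, not values from the thesis.

    ```python
    # Minimal ART (Kaczmarz) sketch: A holds per-ray path lengths through pixels,
    # b holds the TEC-like line integrals; x is the reconstructed pixel vector.
    import numpy as np

    def art(A, b, n_sweeps=50, relax=0.2):
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] > 0:
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    # toy example: 3 rays crossing 4 pixels (underdetermined, as CIT usually is)
    A = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0]])
    x_true = np.array([1.0, 2.0, 3.0, 4.0])
    print(art(A, A @ x_true))   # a solution consistent with the simulated integrals
    ```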

  11. COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior

    NASA Technical Reports Server (NTRS)

    Smialek, James L.; Auping, Judith V.

    2002-01-01

    COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick, on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.

  12. Binary power multiplier for electromagnetic energy

    DOEpatents

    Farkas, Zoltan D.

    1988-01-01

    A technique for converting electromagnetic pulses to higher power amplitude and shorter duration, in binary multiples, splits an input pulse into two channels, and subjects the pulses in the two channels to a number of binary pulse compression operations. Each pulse compression operation entails combining the pulses in both input channels and selectively steering the combined power to one output channel during the leading half of the pulses and to the other output channel during the trailing half of the pulses, and then delaying the pulse in the first output channel by an amount equal to half the initial pulse duration. Apparatus for carrying out each binary multiplication operation preferably includes a four-port coupler (such as a 3 dB hybrid), which operates on power inputs at a pair of input ports by directing the combined power to either of a pair of output ports, depending on the relative phase of the inputs. Therefore, by appropriately phase coding the pulses prior to any of the pulse compression stages, the entire pulse compression (with associated binary power multiplication) can be carried out solely with passive elements.
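
    A quick back-of-envelope view of the binary multiplication described in the patent: each ideal compression stage doubles peak power and halves duration, conserving pulse energy (losses ignored). The input power and duration below are made-up numbers.

    ```python
    # Ideal binary pulse compression: power x2 and duration /2 per stage.
    p0, t0 = 50.0e6, 1.0e-6   # hypothetical input pulse: 50 MW for 1 microsecond
    for stage in range(1, 4):
        p, t = p0 * 2**stage, t0 / 2**stage
        print(f"after stage {stage}: {p/1e6:.0f} MW for {t*1e9:.0f} ns, energy {p*t:.1f} J")
    ```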

  13. Self-supervised ARTMAP.

    PubMed

    Amis, Gregory P; Carpenter, Gail A

    2010-03-01

    Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semi-supervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.bu.edu/SSART/. Copyright 2009 Elsevier Ltd. All rights reserved.

  14. Image feature detection and extraction techniques performance evaluation for development of panorama under different light conditions

    NASA Astrophysics Data System (ADS)

    Patil, Venkat P.; Gohatre, Umakant B.

    2018-04-01

    The technique of obtaining a wider field of view of an image to get a high-resolution integrated image is normally required for developing a panorama of photographic images or a scene from a sequence of multiple partial views. Various image stitching methods have been developed recently. Image stitching typically comprises five basic steps: feature detection and extraction, image registration, homography computation, image warping and blending. This paper reviews some of the existing image feature detection and extraction techniques and image stitching algorithms by categorizing them into several methods. For each category, the basic concepts are first described, and the modifications made to the fundamental concepts by different researchers are then elaborated. The paper also highlights some of the fundamental techniques for photographic image feature detection and extraction under various illumination conditions. Image stitching is applicable in various fields such as medical imaging, astrophotography and computer vision. To compare the performance of the feature detection techniques, three methods are considered, i.e. ORB, SURF and HESSIAN, and the time required for feature detection in the input images is measured. The results conclude that for daylight conditions the ORB algorithm performs better, since less time is required while more features are extracted, whereas for images under night-light conditions the SURF detector performs better than the ORB/HESSIAN detectors.
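
    A hedged sketch of the kind of timing comparison described above, using OpenCV's ORB detector on a synthetic grayscale image (a real test would load the daylight and night-light photographs); SURF is only present in opencv-contrib builds, so it is guarded.

    ```python
    import time
    import cv2
    import numpy as np

    img = (np.random.rand(480, 640) * 255).astype(np.uint8)   # placeholder for a scene image

    orb = cv2.ORB_create(nfeatures=2000)
    t0 = time.perf_counter()
    kp_orb, des_orb = orb.detectAndCompute(img, None)
    print(f"ORB: {len(kp_orb)} keypoints in {time.perf_counter() - t0:.3f} s")

    try:
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # needs opencv-contrib
        t0 = time.perf_counter()
        kp_surf, _ = surf.detectAndCompute(img, None)
        print(f"SURF: {len(kp_surf)} keypoints in {time.perf_counter() - t0:.3f} s")
    except AttributeError:
        print("SURF not available in this OpenCV build")
    ```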

  15. Active Vibration Control for Helicopter Interior Noise Reduction Using Power Minimization

    NASA Technical Reports Server (NTRS)

    Mendoza, J.; Chevva, K.; Sun, F.; Blanc, A.; Kim, S. B.

    2014-01-01

    This report describes work performed by United Technologies Research Center (UTRC) for NASA Langley Research Center (LaRC) under Contract NNL11AA06C. The objective of this program is to develop technology to reduce helicopter interior noise resulting from multiple gear meshing frequencies. A novel active vibration control approach called Minimum Actuation Power (MAP) is developed. MAP is an optimal control strategy that minimizes the total input power into a structure by monitoring and varying the input power of controlling sources. MAP control was implemented without explicit knowledge of the phasing and magnitude of the excitation sources by driving the real part of the input power from the controlling sources to zero. It is shown that this occurs when the total mechanical input power from the excitation and controlling sources is a minimum. MAP theory is developed for multiple excitation sources with arbitrary relative phasing for single or multiple discrete frequencies and controlled by a single or multiple controlling sources. Simulations and experimental results demonstrate the feasibility of MAP for structural vibration reduction of a realistic rotorcraft interior structure. MAP control resulted in significant average global vibration reduction of a single frequency and multiple frequency excitations with one controlling actuator. Simulations also demonstrate the potential effectiveness of the observed vibration reductions on interior radiated noise.

  16. [Severity classification of chronic obstructive pulmonary disease based on deep learning].

    PubMed

    Ying, Jun; Yang, Ceyuan; Li, Quanzheng; Xue, Wanguo; Li, Tanshi; Cao, Wenzhe

    2017-12-01

    In this paper, a deep learning method is proposed to build an automatic classification algorithm for the severity of chronic obstructive pulmonary disease. Large-sample clinical data were used as input features and analyzed for their weights in classification. Through feature selection, model training, parameter optimization and model testing, a classification prediction model based on a deep belief network was built to predict the severity classification criteria issued by the Global Initiative for Chronic Obstructive Lung Disease (GOLD). We obtained prediction accuracy of over 90% for the two standardized versions of the severity criteria issued in 2007 and 2011, respectively. Moreover, we also obtained the contribution ranking of the input features by analyzing the model coefficient matrix and confirmed a certain degree of agreement between the more contributive input features and clinical diagnostic knowledge. This result supports the validity of the deep belief network model. The study provides an effective solution for applying deep learning methods to automatic diagnostic decision making.

  17. Dynamic modal estimation using instrumental variables

    NASA Technical Reports Server (NTRS)

    Salzwedel, H.

    1980-01-01

    A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.

  18. Statistical linearization for multi-input/multi-output nonlinearities

    NASA Technical Reports Server (NTRS)

    Lin, Ching-An; Cheng, Victor H. L.

    1991-01-01

    Formulas are derived for the computation of the random input-describing functions for MIMO nonlinearities; these straightforward and rigorous derivations are based on the optimal mean square linear approximation. The computations involve evaluations of multiple integrals. It is shown that, for certain classes of nonlinearities, multiple-integral evaluations are obviated and the computations are significantly simplified.
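
    The abstract does not reproduce the formulas; as a hedged reminder of the standard result underlying statistical linearization, the mean-square-optimal linear gain for a zero-mean random input is shown below (the paper's own expressions may differ in detail).

    ```latex
    % Quasi-linear (random-input describing function) gain N minimizing E||f(x) - Nx||^2
    % for a memoryless MIMO nonlinearity f and zero-mean input x with covariance P:
    \[
      N \;=\; \mathbb{E}\!\left[ f(x)\, x^{\mathsf T} \right]
              \left( \mathbb{E}\!\left[ x\, x^{\mathsf T} \right] \right)^{-1}
      \;=\; \mathbb{E}\!\left[ f(x)\, x^{\mathsf T} \right] P^{-1}.
    \]
    % For Gaussian x, Stein's lemma gives N = E[ \partial f / \partial x ], replacing the
    % multiple integrals by an expectation of the Jacobian, one standard simplification
    % of the kind alluded to in the abstract.
    ```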

  19. A control system for a powered prosthesis using positional and myoelectric inputs from the shoulder complex.

    PubMed

    Losier, Y; Englehart, K; Hudgins, B

    2007-01-01

    The integration of multiple input sources within a control strategy for powered upper limb prostheses could provide smoother, more intuitive multi-joint reaching movements based on the user's intended motion. The work presented in this paper presents the results of using myoelectric signals (MES) of the shoulder area in combination with the position of the shoulder as input sources to multiple linear discriminant analysis classifiers. Such an approach may provide users with control signals capable of controlling three degrees of freedom (DOF). This work is another important step in the development of hybrid systems that will enable simultaneous control of multiple degrees of freedom used for reaching tasks in a prosthetic limb.

  20. New multirate sampled-data control law structure and synthesis algorithm

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.; Yang, Gen-Sheng

    1992-01-01

    A new multirate sampled-data control law structure is defined and a new parameter-optimization-based synthesis algorithm for that structure is introduced. The synthesis algorithm can be applied to multirate, multiple-input/multiple-output, sampled-data control laws having a prescribed dynamic order and structure, and a priori specified sampling/update rates for all sensors, processor states, and control inputs. The synthesis algorithm is applied to design two-input, two-output tip position controllers of various dynamic orders for a sixth-order, two-link robot arm model.

  1. Robust decentralized controller for minimizing coupling effect in single inductor multiple output DC-DC converter operating in continuous conduction mode.

    PubMed

    Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das

    2018-02-01

    This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, a pairing input-output analysis is performed to select the suitable input to control each output aiming to attenuate the loop coupling. Thus, the plant uncertainty limits are selected and expressed in interval form with parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain a desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
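
    The abstract mentions an input-output pairing analysis but does not name the tool; the relative gain array (RGA) is one common choice and is sketched below purely as an illustration, with a made-up 2x2 steady-state gain matrix standing in for the SIDO converter model.

    ```python
    # RGA = G .* (G^{-1})^T (elementwise); entries near 1 suggest good input-output pairs.
    import numpy as np

    G = np.array([[2.0, 0.5],
                  [0.4, 1.5]])        # hypothetical DC gain matrix of a 2x2 plant
    rga = G * np.linalg.inv(G).T
    print(rga)
    ```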

  2. An Energy Efficient Cooperative Hierarchical MIMO Clustering Scheme for Wireless Sensor Networks

    PubMed Central

    Nasim, Mehwish; Qaisar, Saad; Lee, Sungyoung

    2012-01-01

    In this work, we present an energy efficient hierarchical cooperative clustering scheme for wireless sensor networks. Communication cost is a crucial factor in depleting the energy of sensor nodes. In the proposed scheme, nodes cooperate to form clusters at each level of network hierarchy ensuring maximal coverage and minimal energy expenditure with relatively uniform distribution of load within the network. Performance is enhanced by cooperative multiple-input multiple-output (MIMO) communication ensuring energy efficiency for WSN deployments over large geographical areas. We test our scheme using TOSSIM and compare the proposed scheme with cooperative multiple-input multiple-output (CMIMO) clustering scheme and traditional multihop Single-Input-Single-Output (SISO) routing approach. Performance is evaluated on the basis of number of clusters, number of hops, energy consumption and network lifetime. Experimental results show significant energy conservation and increase in network lifetime as compared to existing schemes. PMID:22368459

  3. Utilizing Hierarchical Clustering to improve Efficiency of Self-Organizing Feature Map to Identify Hydrological Homogeneous Regions

    NASA Astrophysics Data System (ADS)

    Farsadnia, Farhad; Ghahreman, Bijan

    2016-04-01

    Identification of hydrologically homogeneous groups is both fundamental and applied research in hydrology. Clustering methods are among the conventional ways to delineate hydrologically homogeneous regions. Recently, the Self-Organizing feature Map (SOM) method has been applied in several studies; however, its main problem is the interpretation of the output map. Therefore, the SOM is used as input to other clustering algorithms. The aim of this study is to apply a two-level Self-Organizing feature Map and Ward hierarchical clustering method to determine hydrologically homogeneous regions in the North and Razavi Khorasan provinces. First, principal component analysis was used to reduce the dimension of the SOM input matrix; the SOM was then used to form a two-dimensional feature map. To determine homogeneous regions for flood frequency analysis, the SOM output nodes were used as input to the Ward method. Generally, the regions identified by clustering algorithms are not statistically homogeneous, so they have to be adjusted to improve their homogeneity. After adjusting the regions with L-moment tests, five hydrologically homogeneous regions were identified. Finally, the best regional distribution function and associated parameters for the adjusted regions were selected with the L-moment approach. The results showed that the combination of self-organizing maps and Ward hierarchical clustering with principal components as input is more effective than the hierarchical method with principal components or standardized inputs for delineating hydrologically homogeneous regions.
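
    A minimal sketch of the two-level idea described above (PCA, then SOM, then Ward clustering of the SOM nodes), assuming the third-party MiniSom package; catchment attributes, map size and cluster count are placeholders, and the L-moment homogeneity adjustment is not shown.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import AgglomerativeClustering
    from minisom import MiniSom   # assumed third-party SOM implementation

    X = np.random.rand(120, 10)                    # 120 catchments x 10 attributes (synthetic)
    X_pc = PCA(n_components=4).fit_transform(X)    # dimension reduction of the SOM input matrix

    som = MiniSom(8, 8, X_pc.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(X_pc, 5000)                   # first level: 2-D feature map

    nodes = som.get_weights().reshape(-1, X_pc.shape[1])
    regions = AgglomerativeClustering(n_clusters=5, linkage="ward").fit_predict(nodes)
    print(regions)                                 # second level: candidate homogeneous regions
    ```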

  4. Information Extraction for Clinical Data Mining: A Mammography Case Study

    PubMed Central

    Nassif, Houssam; Woods, Ryan; Burnside, Elizabeth; Ayvaci, Mehmet; Shavlik, Jude; Page, David

    2013-01-01

    Breast cancer is the leading cause of cancer mortality in women between the ages of 15 and 54. During mammography screening, radiologists use a strict lexicon (BI-RADS) to describe and report their findings. Mammography records are then stored in a well-defined database format (NMD). Lately, researchers have applied data mining and machine learning techniques to these databases. They successfully built breast cancer classifiers that can help in early detection of malignancy. However, the validity of these models depends on the quality of the underlying databases. Unfortunately, most databases suffer from inconsistencies, missing data, inter-observer variability and inappropriate term usage. In addition, many databases are not compliant with the NMD format and/or solely consist of text reports. BI-RADS feature extraction from free text and consistency checks between recorded predictive variables and text reports are crucial to addressing this problem. We describe a general scheme for concept information retrieval from free text given a lexicon, and present a BI-RADS features extraction algorithm for clinical data mining. It consists of a syntax analyzer, a concept finder and a negation detector. The syntax analyzer preprocesses the input into individual sentences. The concept finder uses a semantic grammar based on the BI-RADS lexicon and the experts’ input. It parses sentences detecting BI-RADS concepts. Once a concept is located, a lexical scanner checks for negation. Our method can handle multiple latent concepts within the text, filtering out ultrasound concepts. On our dataset, our algorithm achieves 97.7% precision, 95.5% recall and an F1-score of 0.97. It outperforms manual feature extraction at the 5% statistical significance level. PMID:23765123

  5. Information Extraction for Clinical Data Mining: A Mammography Case Study.

    PubMed

    Nassif, Houssam; Woods, Ryan; Burnside, Elizabeth; Ayvaci, Mehmet; Shavlik, Jude; Page, David

    2009-01-01

    Breast cancer is the leading cause of cancer mortality in women between the ages of 15 and 54. During mammography screening, radiologists use a strict lexicon (BI-RADS) to describe and report their findings. Mammography records are then stored in a well-defined database format (NMD). Lately, researchers have applied data mining and machine learning techniques to these databases. They successfully built breast cancer classifiers that can help in early detection of malignancy. However, the validity of these models depends on the quality of the underlying databases. Unfortunately, most databases suffer from inconsistencies, missing data, inter-observer variability and inappropriate term usage. In addition, many databases are not compliant with the NMD format and/or solely consist of text reports. BI-RADS feature extraction from free text and consistency checks between recorded predictive variables and text reports are crucial to addressing this problem. We describe a general scheme for concept information retrieval from free text given a lexicon, and present a BI-RADS features extraction algorithm for clinical data mining. It consists of a syntax analyzer, a concept finder and a negation detector. The syntax analyzer preprocesses the input into individual sentences. The concept finder uses a semantic grammar based on the BI-RADS lexicon and the experts' input. It parses sentences detecting BI-RADS concepts. Once a concept is located, a lexical scanner checks for negation. Our method can handle multiple latent concepts within the text, filtering out ultrasound concepts. On our dataset, our algorithm achieves 97.7% precision, 95.5% recall and an F1-score of 0.97. It outperforms manual feature extraction at the 5% statistical significance level.
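
    A deliberately simplified sketch of the concept-finder/negation-detector idea described above: scan each sentence for BI-RADS-style terms and check a small window of preceding words for negation cues. The term list, cue list and window size are illustrative, not the paper's semantic grammar.

    ```python
    import re

    TERMS = {"mass", "calcification", "asymmetry", "architectural distortion"}
    NEGATIONS = {"no", "without", "absent", "negative for"}

    def extract(report):
        findings = []
        for sentence in re.split(r"[.!?]\s*", report.lower()):
            for term in TERMS:
                if term in sentence:
                    window = " ".join(sentence.split(term)[0].split()[-4:])
                    negated = any(neg in window for neg in NEGATIONS)
                    findings.append((term, "absent" if negated else "present"))
        return findings

    print(extract("There is a spiculated mass in the left breast. No calcification seen."))
    ```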

  6. New nonlinear features for inspection, robotics, and face recognition

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit

    1999-10-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data. Other applications of these new feature spaces in robotics and face recognition are also noted.

  7. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.
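
    The MRDF transform itself is not specified in these abstracts, so the sketch below uses kernel PCA as a generic stand-in for nonlinear feature extraction, followed by a standard (not the authors' modified) k-nearest-neighbor classifier; the data are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))             # 12 standard features per item (synthetic)
    y = (X[:, 0] * X[:, 1] > 0).astype(int)    # synthetic "clean" vs "damaged" labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = make_pipeline(KernelPCA(n_components=4, kernel="rbf"),
                        KNeighborsClassifier(n_neighbors=5))
    clf.fit(X_tr, y_tr)
    print("hold-out accuracy:", clf.score(X_te, y_te))
    ```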

  8. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.

  9. ChIP-chip versus ChIP-seq: Lessons for experimental design and data analysis

    PubMed Central

    2011-01-01

    Background Chromatin immunoprecipitation (ChIP) followed by microarray hybridization (ChIP-chip) or high-throughput sequencing (ChIP-seq) allows genome-wide discovery of protein-DNA interactions such as transcription factor bindings and histone modifications. Previous reports only compared a small number of profiles, and little has been done to compare histone modification profiles generated by the two technologies or to assess the impact of input DNA libraries in ChIP-seq analysis. Here, we performed a systematic analysis of a modENCODE dataset consisting of 31 pairs of ChIP-chip/ChIP-seq profiles of the coactivator CBP, RNA polymerase II (RNA PolII), and six histone modifications across four developmental stages of Drosophila melanogaster. Results Both technologies produce highly reproducible profiles within each platform, ChIP-seq generally produces profiles with a better signal-to-noise ratio, and allows detection of more peaks and narrower peaks. The set of peaks identified by the two technologies can be significantly different, but the extent to which they differ varies depending on the factor and the analysis algorithm. Importantly, we found that there is a significant variation among multiple sequencing profiles of input DNA libraries and that this variation most likely arises from both differences in experimental condition and sequencing depth. We further show that using an inappropriate input DNA profile can impact the average signal profiles around genomic features and peak calling results, highlighting the importance of having high quality input DNA data for normalization in ChIP-seq analysis. Conclusions Our findings highlight the biases present in each of the platforms, show the variability that can arise from both technology and analysis methods, and emphasize the importance of obtaining high quality and deeply sequenced input DNA libraries for ChIP-seq analysis. PMID:21356108

  10. Deep Learning Based Binaural Speech Separation in Reverberant Environments.

    PubMed

    Zhang, Xueliang; Wang, DeLiang

    2017-05-01

    Speech signals are usually degraded by room reverberation and additive noise in real environments. This paper focuses on separating the target speech signal in reverberant conditions from binaural inputs. Binaural separation is formulated as a supervised learning problem, and we employ deep learning to map from both spatial and spectral features to a training target. With binaural inputs, we first apply a fixed beamformer and then extract several spectral features. A new spatial feature is proposed and extracted to complement the spectral features. The training target is the recently suggested ideal ratio mask. Systematic evaluations and comparisons show that the proposed system achieves very good separation performance and substantially outperforms related algorithms in challenging multi-source and reverberant environments.
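
    For readers unfamiliar with the training target, the ideal ratio mask can be computed from premixed speech and interference as sketched below; the signals and STFT parameters are placeholders, and the binaural feature extraction itself is not shown.

    ```python
    import numpy as np
    from scipy.signal import stft

    fs = 16000
    t = np.arange(fs) / fs
    speech = np.sin(2 * np.pi * 440 * t)          # stand-in for the target speech signal
    noise = 0.5 * np.random.randn(fs)             # stand-in for noise/reverberant interference

    _, _, S = stft(speech, fs=fs, nperseg=512)
    _, _, N = stft(noise, fs=fs, nperseg=512)

    # IRM in [0, 1] per time-frequency unit; used as the supervised training target
    irm = np.sqrt(np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-12))
    print(irm.shape, irm.min(), irm.max())
    ```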

  11. Object-based detection of vehicles using combined optical and elevation data

    NASA Astrophysics Data System (ADS)

    Schilling, Hendrik; Bulatov, Dimitri; Middelmann, Wolfgang

    2018-02-01

    The detection of vehicles is an important and challenging topic that is relevant for many applications. In this work, we present a workflow that utilizes optical and elevation data to detect vehicles in remotely sensed urban data. This workflow consists of three consecutive stages: candidate identification, classification, and single vehicle extraction. Unlike in most previous approaches, fusion of both data sources is strongly pursued at all stages. While the first stage utilizes the fact that most man-made objects are rectangular in shape, the second and third stages employ machine learning techniques combined with specific features. The stages are designed to handle multiple sensor input, which results in a significant improvement. A detailed evaluation shows the benefits of our workflow, which includes hand-tailored features; even in comparison with classification approaches based on Convolutional Neural Networks, which are state of the art in computer vision, we could obtain a comparable or superior performance (F1 score of 0.96-0.94).

  12. DYNA3D/ParaDyn Regression Test Suite Inventory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jerry I.

    2016-09-01

    The following table constitutes an initial assessment of feature coverage across the regression test suite used for DYNA3D and ParaDyn. It documents the regression test suite at the time of preliminary release 16.1 in September 2016. The columns of the table represent groupings of functionalities, e.g., material models. Each problem in the test suite is represented by a row in the table. All features exercised by the problem are denoted by a check mark (√) in the corresponding column. The definition of “feature” has not been subdivided to its smallest unit of user input, e.g., algorithmic parameters specific to a particular type of contact surface. This represents a judgment to provide code developers and users a reasonable impression of feature coverage without expanding the width of the table by several multiples. All regression testing is run in parallel, typically with eight processors, except problems involving features only available in serial mode. Many are strictly regression tests acting as a check that the codes continue to produce adequately repeatable results as development unfolds; compilers change and platforms are replaced. A subset of the tests represents true verification problems that have been checked against analytical or other benchmark solutions. Users are welcomed to submit documented problems for inclusion in the test suite, especially if they are heavily exercising, and dependent upon, features that are currently underrepresented.

  13. The multidimensional Self-Adaptive Grid code, SAGE, version 2

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1995-01-01

    This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.

  14. Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks

    NASA Astrophysics Data System (ADS)

    Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang

    2016-01-01

    The stabilisation problem for a cluster with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the features of this single-input multiple-output linear system. Because controller design under multiple constraints is difficult in the multirate switching model, the model is converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that the closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples show that, compared with solving for a switching controller, the multirate parallel distributed compensation can be obtained more easily. Furthermore, it has stronger robust stability than an arbitrary switching controller and single-rate parallel distributed compensation under the same conditions.

  15. Classification of arrhythmia using hybrid networks.

    PubMed

    Haseena, Hassan H; Joseph, Paul K; Mathew, Abraham T

    2011-12-01

    Reliable detection of arrhythmias based on digital processing of electrocardiogram (ECG) signals is vital for providing suitable and timely treatment to a cardiac patient. Because ECG signals are corrupted by multiple-frequency noise and a cardiac rhythm may contain multiple arrhythmic events, computerized interpretation of abnormal ECG rhythms is a challenging task. This paper focuses on a Fuzzy C-Means (FCM) clustered Probabilistic Neural Network (PNN) and a Multi-Layered Feed-Forward Network (MLFFN) for discriminating eight types of ECG beats. Parameters such as fourth-order Auto-Regressive (AR) coefficients along with Spectral Entropy (SE) are extracted from each ECG beat, and feature reduction is carried out using FCM clustering. The cluster centers form the input of the neural network classifiers. Extensive analysis of the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database shows that the FCM-clustered PNN is superior to the FCM-clustered MLFFN in cardiac arrhythmia classification, with overall accuracies of 99.05% and 97.14%, respectively.
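
    A rough sketch of the feature pipeline described above, with plain least-squares AR(4) fitting and an FFT-based spectral entropy; k-means is used here as a stand-in for fuzzy C-means, and no PNN/MLFFN classifier is shown. Beat data, cluster count and all parameters are synthetic placeholders.

    ```python
    import numpy as np
    from numpy.linalg import lstsq
    from sklearn.cluster import KMeans

    def ar4_coeffs(beat):
        # least-squares fit of beat[n] ~ sum_k a_k * beat[n - k], k = 1..4
        X = np.column_stack([beat[4 - k:-k] for k in range(1, 5)])
        a, *_ = lstsq(X, beat[4:], rcond=None)
        return a

    def spectral_entropy(beat):
        p = np.abs(np.fft.rfft(beat)) ** 2
        p = p / p.sum()
        return -(p * np.log(p + 1e-12)).sum()

    rng = np.random.default_rng(0)
    beats = rng.standard_normal((300, 200))                  # synthetic "ECG beats"
    feats = np.array([np.r_[ar4_coeffs(b), spectral_entropy(b)] for b in beats])

    centers = KMeans(n_clusters=8, n_init=10, random_state=0).fit(feats).cluster_centers_
    print(centers.shape)   # reduced features that would feed the neural network classifiers
    ```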

  16. Optimization of laser butt welding parameters with multiple performance characteristics

    NASA Astrophysics Data System (ADS)

    Sathiya, P.; Abdul Jaleel, M. Y.; Katherasan, D.; Shanmugarajan, B.

    2011-04-01

    This paper presents a study carried out on 3.5 kW cooled-slab laser welding of 904 L super austenitic stainless steel. The joints were butt welded with different shielding gases, namely argon, helium and nitrogen, at a constant flow rate. Super austenitic stainless steel (SASS) normally contains high amounts of Mo, Cr, Ni, N and Mn. The mechanical properties are controlled to obtain good welded joints. The quality of the joint is evaluated by studying features of the weld bead geometry, such as bead width (BW) and depth of penetration (DOP). In this paper, the tensile strength and bead profiles (BW and DOP) of laser-welded butt joints made of AISI 904 L SASS are investigated. The Taguchi approach is used as a statistical design-of-experiments (DOE) technique for optimizing the selected welding parameters. Grey relational analysis and the desirability approach are applied to optimize the input parameters by considering multiple output variables simultaneously. Confirmation experiments have also been conducted for both analyses to validate the optimized parameters.

  17. Portable multiplicity counter

    DOEpatents

    Newell, Matthew R [Los Alamos, NM; Jones, David Carl [Los Alamos, NM

    2009-09-01

    A portable multiplicity counter has signal input circuitry, processing circuitry and a user/computer interface disposed in a housing. The processing circuitry, which can comprise a microcontroller integrated circuit operably coupled to shift register circuitry implemented in a field programmable gate array, is configured to be operable via the user/computer interface to count input signal pulses receivable at said signal input circuitry and record time correlations thereof in a total counting mode, coincidence counting mode and/or a multiplicity counting mode. The user/computer interface can be for example an LCD display/keypad and/or a USB interface. The counter can include a battery pack for powering the counter and low/high voltage power supplies for biasing external detectors so that the counter can be configured as a hand-held device for counting neutron events.

  18. Graphene-assisted multiple-input high-base optical computing

    PubMed Central

    Hu, Xiao; Wang, Andong; Zeng, Mengqi; Long, Yun; Zhu, Long; Fu, Lei; Wang, Jian

    2016-01-01

    We propose graphene-assisted multiple-input high-base optical computing. We fabricate a nonlinear optical device based on a fiber pigtail cross-section coated with single-layer graphene grown by the chemical vapor deposition (CVD) method. An approach to implementing modulo 4 operations of three-input hybrid addition and subtraction of quaternary base numbers in the optical domain is presented, using multiple non-degenerate four-wave mixing (FWM) processes in the graphene-coated optical fiber device and (differential) quadrature phase-shift keying ((D)QPSK) signals. We demonstrate 10-Gbaud modulo 4 operations of three-input quaternary hybrid addition and subtraction (A + B − C, A + C − B, B + C − A) in the experiment. The measured optical signal-to-noise ratio (OSNR) penalties for these operations are less than 7 dB at a bit-error rate (BER) of 2 × 10⁻³. The BER performance as a function of the relative time offset between the three signals (signal offset) is also evaluated, showing favorable performance. PMID:27604866
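
    A worked arithmetic example of the three-input modulo 4 operations demonstrated optically above, on quaternary digits (0-3) such as those carried by (D)QPSK symbols; the digit values are arbitrary.

    ```python
    A, B, C = 3, 2, 1
    print((A + B - C) % 4)   # hybrid addition/subtraction A + B - C -> 0
    print((A + C - B) % 4)   # A + C - B -> 2
    print((B + C - A) % 4)   # B + C - A -> 0
    ```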

  19. An Efficient Method for Verifying Gyrokinetic Microstability Codes

    NASA Astrophysics Data System (ADS)

    Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.

    2009-11-01

    Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: at present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.

  20. Going beyond Input Quantity: "Wh"-Questions Matter for Toddlers' Language and Cognitive Development

    ERIC Educational Resources Information Center

    Rowe, Meredith L.; Leech, Kathryn A.; Cabrera, Natasha

    2017-01-01

    There are clear associations between the overall quantity of input children are exposed to and their vocabulary acquisition. However, by uncovering specific features of the input that matter, we can better understand the mechanisms involved in vocabulary learning. We examine whether exposure to "wh"-questions, a challenging quality of…

  1. Effects of Textual Enhancement and Input Enrichment on L2 Development

    ERIC Educational Resources Information Center

    Rassaei, Ehsan

    2015-01-01

    Research on second language (L2) acquisition has recently sought to include formal instruction into second and foreign language classrooms in a more unobtrusive and implicit manner. Textual enhancement and input enrichment are two techniques which are aimed at drawing learners' attention to specific linguistic features in input and at the same…

  2. Tracing the source of numerical climate model uncertainties in precipitation simulations using a feature-oriented statistical model

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Jones, A. D.; Rhoades, A.

    2017-12-01

    Precipitation is a key component of hydrologic cycles, and changing precipitation regimes contribute to more intense and frequent drought and flood events around the world. Numerical climate modeling is a powerful tool to study climatology and to predict future changes. Despite the continuous improvement in numerical models, long-term precipitation prediction remains a challenge, especially at regional scales. To improve numerical simulations of precipitation, it is important to find out where the uncertainty in precipitation simulations comes from. There are two types of uncertainty in numerical model predictions. One is related to uncertainty in the input data, such as the model's boundary and initial conditions. These uncertainties would propagate to the final model outcomes even if the numerical model exactly replicated the true world. But a numerical model cannot exactly replicate the true world. Therefore, the other type of model uncertainty is related to errors in the model physics, such as the parameterization of sub-grid scale processes, i.e., given precise input conditions, how much error is generated by the imprecise model. Here, we build two statistical models based on a neural network algorithm to predict long-term variation of precipitation over California: one uses "true world" information derived from observations, and the other uses "modeled world" information using model inputs and outputs from the North America Coordinated Regional Downscaling Project (NA CORDEX). We derive multiple climate feature metrics as the predictors for the statistical model to represent the impact of global climate on local hydrology, and include topography as a predictor to represent the local control. We first compare the predictors between the true world and the modeled world to determine the errors contained in the input data. By perturbing the predictors in the statistical model, we estimate how much uncertainty in the model's final outcomes is accounted for by each predictor. By comparing the statistical models derived from true-world and modeled-world information, we assess the errors lying in the physics of the numerical models. This work provides unique insight for assessing the performance of numerical climate models, and can be used to guide improvements in precipitation prediction.
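
    The predictor-perturbation idea described above can be illustrated with permutation importance on a small neural-network regressor; the features, data and model below are placeholders, not the study's climate-feature metrics or its actual statistical model.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))                              # e.g. circulation/moisture metrics + topography
    y = 2 * X[:, 0] - X[:, 2] + 0.1 * rng.normal(size=500)     # synthetic "precipitation" response

    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
    print(imp.importances_mean)   # drop in skill when each predictor is perturbed
    ```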

  3. Enhancing Access to Drought Information Using the CUAHSI Hydrologic Information System

    NASA Astrophysics Data System (ADS)

    Schreuders, K. A.; Tarboton, D. G.; Horsburgh, J. S.; Sen Gupta, A.; Reeder, S.

    2011-12-01

    The National Drought Information System (NIDIS) Upper Colorado River Basin pilot study is investigating and establishing capabilities for better dissemination of drought information for early warning and management. As part of this study we are using and extending functionality from the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) to provide better access to drought-related data in the Upper Colorado River Basin. The CUAHSI HIS is a federated system for sharing hydrologic data. It comprises multiple data servers, referred to as HydroServers, that publish data in a standard XML format called Water Markup Language (WaterML), using web services referred to as WaterOneFlow web services. HydroServers can also publish geospatial data using Open Geospatial Consortium (OGC) web map, feature and coverage services and are capable of hosting web and map applications that combine geospatial datasets with observational data served via web services. HIS also includes a centralized metadata catalog that indexes data from registered HydroServers and a data access client referred to as HydroDesktop. For NIDIS, we have established a HydroServer to publish drought index values as well as the input data used in drought index calculations. Primary input data required for drought index calculation include streamflow, precipitation, reservoir storages, snow water equivalent, and soil moisture. We have developed procedures to redistribute the input data to the time and space scales chosen for drought index calculation, namely half-monthly time intervals for HUC 10 subwatersheds. The spatial redistribution approaches used for each input parameter are dependent on the spatial linkages for that parameter, i.e., the redistribution procedure for streamflow is dependent on the upstream/downstream connectivity of the stream network, and the precipitation redistribution procedure is dependent on elevation to account for orographic effects. A set of drought indices is then calculated from the redistributed data. We have created automated data and metadata harvesters that periodically scan and harvest new data from each of the input databases, and calculate extensions to the resulting derived data sets, ensuring that the data available on the drought server are kept up to date. This paper will describe this system, showing how it facilitates the integration of data from multiple sources to inform the planning and management of water resources during drought. The system may be accessed at http://drought.usu.edu.

  4. On the use of feature selection to improve the detection of sea oil spills in SAR images

    NASA Astrophysics Data System (ADS)

    Mera, David; Bolon-Canedo, Veronica; Cotos, J. M.; Alonso-Betanzos, Amparo

    2017-03-01

    Fast and effective oil spill detection systems are crucial to ensure a proper response to environmental emergencies caused by hydrocarbon pollution on the ocean's surface. Typically, these systems uncover not only oil spills, but also a high number of look-alikes. Feature extraction is a critical and computationally intensive phase where each detected dark spot is independently examined. Traditionally, detection systems use an arbitrary set of features to discriminate between oil spills and look-alike phenomena. However, Feature Selection (FS) methods based on Machine Learning (ML) have proved to be very useful in real domains for enhancing the generalization capabilities of the classifiers, while discarding the existing irrelevant features. In this work, we present a generic and systematic approach, based on FS methods, for choosing a concise and relevant set of features to improve oil spill detection systems. We have compared five FS methods: Correlation-based feature selection (CFS), Consistency-based filter, Information Gain, ReliefF and Recursive Feature Elimination for Support Vector Machine (SVM-RFE). They were applied to a 141-feature input vector composed of features from a collection of outstanding studies. Selected features were validated via a Support Vector Machine (SVM) classifier and the results were compared with previous works. Test experiments revealed that the classifier trained with the 6-input feature vector proposed by SVM-RFE achieved the best accuracy and Cohen's kappa coefficient (87.1% and 74.06% respectively). This is a smaller feature combination with similar or even better classification accuracy than previous works. This finding makes it possible to speed up the feature extraction phase without reducing classifier accuracy. Experiments also confirmed the significance of the geometrical features, since 75.0% of the different features selected by the applied FS methods as well as 66.67% of the proposed 6-input feature vector belong to this category.
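
    A minimal sketch of SVM-RFE as used above to shrink a 141-feature dark-spot descriptor to 6 features, with synthetic data standing in for the real oil-spill/look-alike samples.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.feature_selection import RFE

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 141))                                 # 141 candidate features per dark spot
    y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 10] > 0).astype(int)        # oil spill vs look-alike (synthetic)

    selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=6, step=5)
    selector.fit(X, y)
    print(np.flatnonzero(selector.support_))                        # indices of the 6 retained features
    ```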

  5. Synaptic State Matching: A Dynamical Architecture for Predictive Internal Representation and Feature Detection

    PubMed Central

    Tavazoie, Saeed

    2013-01-01

    Here we explore the possibility that a core function of sensory cortex is the generation of an internal simulation of sensory environment in real-time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input, and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single unifying computational framework. PMID:23991161

  6. Low voltage to high voltage level shifter and related methods

    NASA Technical Reports Server (NTRS)

    Mentze, Erik J. (Inventor); Buck, Kevin M. (Inventor); Hess, Herbert L. (Inventor); Cox, David F. (Inventor)

    2006-01-01

    A shifter circuit comprises high and low voltage buffer stages and an output buffer stage. The high voltage buffer stage comprises multiple transistors arranged in a transistor stack having a plurality of intermediate nodes connecting individual transistors along the stack. The transistor stack is connected between a voltage level being shifted to and an input voltage. An inverter of this stage comprises multiple inputs and an output. Each inverter input is connected to a respective intermediate node of the transistor stack. The low voltage buffer stage has an input connected to the input voltage and an output, and is operably connected to the high voltage buffer stage. The low voltage buffer stage is connected between a voltage level being shifted away from and a lower voltage. The output buffer stage is driven by the outputs of the high voltage buffer stage inverter and the low voltage buffer stage.

  7. Method to assess the temporal persistence of potential biometric features: Application to oculomotor, gait, face and brain structure databases

    PubMed Central

    Nixon, Mark S.; Komogortsev, Oleg V.

    2017-01-01

    We introduce the intraclass correlation coefficient (ICC) to the biometric community as an index of the temporal persistence, or stability, of a single biometric feature. It requires, as input, a feature on an interval or ratio scale that is reasonably normally distributed, and it can only be calculated if each subject is tested on 2 or more occasions. For a biometric system, with multiple features available for selection, the ICC can be used to measure the relative stability of each feature. We show, for 14 distinct data sets (1 synthetic, 8 eye-movement-related, 2 gait-related, 2 face-recognition-related, and 1 brain-structure-related), that selecting the most stable features, based on the ICC, resulted in the best biometric performance generally. Analyses based on using only the most stable features produced superior Rank-1-Identification Rate (Rank-1-IR) performance in 12 of 14 databases (p = 0.0065, one-tailed), when compared to other sets of features, including the set of all features. For Equal Error Rate (EER), using a subset of only high-ICC features also produced superior performance in 12 of 14 databases (p = 0.0065, one-tailed). In general, then, for our databases, prescreening potential biometric features, and choosing only highly reliable features, yields better performance than choosing lower ICC features or than choosing all features combined. We also determined that, as the ICC of a group of features increases, the median of the genuine similarity score distribution increases and the spread of this distribution decreases. There were no statistically significant similar relationships for the impostor distributions. We believe that the ICC will find many uses in biometric research. In the case of eye-movement-driven biometrics, the use of reliable features, as measured by the ICC, allowed us to achieve authentication performance with EER = 2.01%, which was not possible before. PMID:28575030

  8. Method to assess the temporal persistence of potential biometric features: Application to oculomotor, gait, face and brain structure databases.

    PubMed

    Friedman, Lee; Nixon, Mark S; Komogortsev, Oleg V

    2017-01-01

    We introduce the intraclass correlation coefficient (ICC) to the biometric community as an index of the temporal persistence, or stability, of a single biometric feature. It requires, as input, a feature on an interval or ratio scale that is reasonably normally distributed, and it can only be calculated if each subject is tested on 2 or more occasions. For a biometric system with multiple features available for selection, the ICC can be used to measure the relative stability of each feature. We show, for 14 distinct data sets (1 synthetic, 8 eye-movement-related, 2 gait-related, 2 face-recognition-related, and 1 brain-structure-related), that selecting the most stable features, based on the ICC, generally resulted in the best biometric performance. Analyses based on using only the most stable features produced superior Rank-1-Identification Rate (Rank-1-IR) performance in 12 of 14 databases (p = 0.0065, one-tailed) when compared to other sets of features, including the set of all features. For Equal Error Rate (EER), using a subset of only high-ICC features also produced superior performance in 12 of 14 databases (p = 0.0065, one-tailed). In general, then, for our databases, prescreening potential biometric features and choosing only highly reliable features yields better performance than choosing lower-ICC features or than choosing all features combined. We also determined that, as the ICC of a group of features increases, the median of the genuine similarity score distribution increases and the spread of this distribution decreases. There were no statistically significant similar relationships for the impostor distributions. We believe that the ICC will find many uses in biometric research. In the case of eye-movement-driven biometrics, the use of reliable features, as measured by the ICC, allowed us to achieve authentication performance with EER = 2.01%, which was not possible before.

  9. Invariant-feature-based adaptive automatic target recognition in obscured 3D point clouds

    NASA Astrophysics Data System (ADS)

    Khuon, Timothy; Kershner, Charles; Mattei, Enrico; Alverio, Arnel; Rand, Robert

    2014-06-01

    Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, electronic system, A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition algorithm architecture described below is particularly devised for solving a single-sensor classification problem non-parametrically. A feature set is extracted from an input point cloud, normalized, and classified by a neural network classifier. For instance, automatic target recognition in an urban area would require different feature sets from one in a dense foliage area. The figure above (see manuscript) illustrates the architecture of the feature-based adaptive signature extraction of 3D point clouds, including LIDAR, RADAR, and electro-optical data. This network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised and adaptive classifier with two modes: the training mode and the performing mode. For the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network, as shown above, for the decision class output. The network consists of three sequential functional modules. The first module performs feature extraction, reducing the input cluster to a set of singular-value features, or a feature vector. The feature vector is then input into the feature normalization module to be normalized and balanced before being fed to the neural net classifier for classification. The neural net can be trained with actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. If new novel data are added after the neural net has been trained, training is resumed until the neural net has incrementally learned the new data. The associative memory capability of the neural net enables this incremental learning. The back-propagation algorithm or a support vector machine can be utilized for classification and recognition.
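
    The three-module pipeline described above (singular-value feature extraction, feature normalization, neural-net classification) can be sketched as follows in Python; the specific features, the scikit-learn stand-ins for the normalizer and classifier, and all parameter choices are assumptions made for illustration, not the record's implementation.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.neural_network import MLPClassifier

      def singular_value_features(cluster_xyz):
          """Module 1: reduce an (N x 3) point cluster to a small feature vector
          built from the singular values of the centered point matrix plus simple
          size descriptors."""
          pts = np.asarray(cluster_xyz, dtype=float)
          centered = pts - pts.mean(axis=0)
          s = np.linalg.svd(centered, compute_uv=False)        # 3 shape-describing singular values
          extent = pts.max(axis=0) - pts.min(axis=0)           # bounding-box extent
          return np.concatenate([s, extent, [len(pts)]])

      # Module 2 (normalization) and Module 3 (neural-net classifier)
      scaler = StandardScaler()
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)

      def train(clusters, labels):
          """clusters: list of (N_i x 3) arrays; labels: class id per cluster."""
          X = np.vstack([singular_value_features(c) for c in clusters])
          clf.fit(scaler.fit_transform(X), labels)

      def classify(cluster):
          x = singular_value_features(cluster).reshape(1, -1)
          return clf.predict(scaler.transform(x))[0]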

  10. Revised Multi-Node Well (MNW2) Package for MODFLOW Ground-Water Flow Model

    USGS Publications Warehouse

    Konikow, Leonard F.; Hornberger, George Z.; Halford, Keith J.; Hanson, Randall T.; Harbaugh, Arlen W.

    2009-01-01

    Wells that are open to multiple aquifers can provide preferential pathways for flow and solute transport that short-circuit normal fluid flowlines. Representing these features in a regional flow model can produce a more realistic and reliable simulation model. This report describes modifications to the Multi-Node Well (MNW) Package of the U.S. Geological Survey (USGS) three-dimensional ground-water flow model (MODFLOW). The modifications build on a previous version and add several new features, processes, and input and output options. The input structure of the revised MNW (MNW2) is more well-centered than the original version of MNW (MNW1) and allows the user to easily define the hydraulic characteristics of each multi-node well. MNW2 also allows calculations of additional head changes due to partial penetration effects, flow into a borehole through a seepage face, changes in well discharge related to changes in lift for a given pump, and intraborehole flows with a pump intake located at any specified depth within the well. MNW2 also offers an improved capability to simulate nonvertical wells. A new output option allows selected multi-node wells to be designated as 'observation wells' for which changes in selected variables with time will be written to separate output files to facilitate postprocessing. MNW2 is compatible with the MODFLOW-2000 and MODFLOW-2005 versions of MODFLOW and with the version of MODFLOW that includes the Ground-Water Transport process (MODFLOW-GWT).

  11. A fast button surface defects detection method based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Liu, Lizhe; Cao, Danhua; Wu, Songlin; Wu, Yubin; Wei, Taoran

    2018-01-01

    Considering the complexity of button surface textures and the variety of buttons and defects, we propose a fast visual method for button surface defect detection based on a convolutional neural network (CNN). The CNN has the ability to extract the essential features by training, avoiding the design of complex feature operators adapted to different kinds of buttons, textures and defects. Firstly, we obtain the normalized button region and then use an HOG-SVM method to identify the front and back side of the button. Finally, a convolutional neural network is developed to recognize the defects. Aiming at detecting subtle defects, we propose a network structure with multiple input feature channels. To deal with defects of different scales, we take a strategy of multi-scale image block detection. The experimental results show that our method is valid for a variety of buttons and able to recognize all kinds of defects that occurred, including dents, cracks, stains, holes, wrong paint and unevenness. The detection rate exceeds 96%, which is much better than traditional methods based on SVM and on template matching. Our method can reach a speed of 5 fps on a DSP-based smart camera running at 600 MHz.
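
    A minimal PyTorch sketch of the multiple-input-feature-channel idea follows: several feature maps of the same patch are stacked along the channel axis and fed to a small CNN. The class name, layer sizes, channel choices and crude gradient features are illustrative assumptions and do not reproduce the network described in this record.

      import torch
      import torch.nn as nn

      class MultiChannelDefectNet(nn.Module):
          """Small CNN whose first convolution accepts several feature channels
          (e.g., intensity plus gradient maps) stacked along the channel axis."""
          def __init__(self, in_channels=3, num_classes=7):   # illustrative number of defect classes
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
              )
              self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input patches

          def forward(self, x):
              x = self.features(x)
              return self.classifier(x.flatten(1))

      # Stack intensity and x/y gradient maps of a 64x64 patch into 3 input channels
      patch = torch.rand(1, 1, 64, 64)
      gx = patch - torch.roll(patch, 1, dims=3)   # crude horizontal gradient
      gy = patch - torch.roll(patch, 1, dims=2)   # crude vertical gradient
      logits = MultiChannelDefectNet()(torch.cat([patch, gx, gy], dim=1))
      print(logits.shape)                          # torch.Size([1, 7])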

  12. Why do people appear not to extrapolate trajectories during multiple object tracking? A computational investigation

    PubMed Central

    Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I

    2014-01-01

    Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not extrapolate at all. Ultimately, the best-performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
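
    The modeling question above can be illustrated numerically: compare a tracker that relies only on noisy position observations (spatial memory) with one that blends a constant-velocity extrapolation into its estimate. The 1-D motion model, noise levels and weights below are illustrative assumptions, not the models used in the study.

      import numpy as np

      rng = np.random.default_rng(1)
      T, obs_noise = 200, 0.5
      true = np.cumsum(rng.normal(0.1, 0.05, T))      # 1-D target with drifting motion
      obs = true + rng.normal(0, obs_noise, T)        # noisy position observations

      def track(weight_on_prediction):
          """weight_on_prediction = 0 -> pure spatial memory (no extrapolation)."""
          est = [obs[0], obs[1]]
          for t in range(2, T):
              velocity = est[-1] - est[-2]
              prediction = est[-1] + velocity          # constant-velocity extrapolation
              est.append(weight_on_prediction * prediction
                         + (1 - weight_on_prediction) * obs[t])
          return np.mean(np.abs(np.array(est) - true))  # mean tracking error

      for w in (0.0, 0.3, 0.7):
          print(f"prediction weight {w}: mean error {track(w):.3f}")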

  13. Right-Brain/Left-Brain Integrated Associative Processor Employing Convertible Multiple-Instruction-Stream Multiple-Data-Stream Elements

    NASA Astrophysics Data System (ADS)

    Hayakawa, Hitoshi; Ogawa, Makoto; Shibata, Tadashi

    2005-04-01

    A very large scale integrated circuit (VLSI) architecture for a multiple-instruction-stream multiple-data-stream (MIMD) associative processor has been proposed. The processor employs an architecture that enables seamless switching from associative operations to arithmetic operations. The MIMD element is convertible to a regular central processing unit (CPU) while maintaining its high performance as an associative processor. Therefore, the MIMD associative processor can perform not only on-chip perception, i.e., searching for the vector most similar to an input vector throughout the on-chip cache memory, but also arithmetic and logic operations similar to those in ordinary CPUs, both simultaneously in parallel processing. Three key technologies have been developed to generate the MIMD element: associative-operation-and-arithmetic-operation switchable calculation units, a versatile register control scheme within the MIMD element for flexible operations, and a short instruction set for minimizing the memory size for program storage. Key circuit blocks were designed and fabricated using 0.18 μm complementary metal-oxide-semiconductor (CMOS) technology. As a result, the full-featured MIMD element is estimated to be 3 mm², showing the feasibility of an 8-parallel-MIMD-element associative processor in a single chip of 5 mm × 5 mm.

  14. Recursive Feature Extraction in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a csv file and the output is a csv file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
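
    A rough Python sketch of the recursive idea, using networkx and pandas, is given below: start from local counts per node, then repeatedly append means and sums of neighbors' current features. The file names, the assumed headerless two-column edge list, and the base features are illustrative assumptions rather than ReFeX's actual implementation.

      import networkx as nx
      import numpy as np
      import pandas as pd

      def refex_like_features(G, iterations=2):
          """Recursive topological features in the spirit of ReFeX: base features per
          node, then neighbor means/sums of all current features at each iteration."""
          nodes = list(G.nodes())
          feats = {n: [G.degree(n),                        # local count
                       nx.clustering(G, n)] for n in nodes}  # local structure
          for _ in range(iterations):
              new = {}
              for n in nodes:
                  nbrs = list(G.neighbors(n))
                  block = (np.array([feats[m] for m in nbrs]) if nbrs
                           else np.zeros((1, len(feats[n]))))
                  new[n] = feats[n] + list(block.mean(axis=0)) + list(block.sum(axis=0))
              feats = new
          return pd.DataFrame.from_dict(feats, orient="index")

      # edges.csv is assumed to hold one "source,target" pair per line (no header)
      edges = pd.read_csv("edges.csv", names=["source", "target"])
      G = nx.from_pandas_edgelist(edges, "source", "target")
      refex_like_features(G).to_csv("node_features.csv")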

  15. Prediction of beta-turns at over 80% accuracy based on an ensemble of predicted secondary structures and multiple alignments.

    PubMed

    Zheng, Ce; Kurgan, Lukasz

    2008-10-10

    A beta-turn is a secondary protein structure type that plays a significant role in protein folding, stability, and molecular recognition. To date, several methods for predicting beta-turns from protein sequences have been developed, but they are characterized by relatively poor prediction quality. The novelty of the proposed sequence-based beta-turn predictor stems from the use of window-based information extracted from four predicted three-state secondary structures, which together with a selected set of position specific scoring matrix (PSSM) values serve as the input to a support vector machine (SVM) predictor. We show that (1) all four predicted secondary structures are useful; (2) the most useful information extracted from the predicted secondary structure includes the structure of the predicted residue, the secondary structure content in a window around the predicted residue, and features that indicate whether the predicted residue is inside a secondary structure segment; (3) the PSSM values of Asn, Asp, Gly, Ile, Leu, Met, Pro, and Val were among the top ranked features, which corroborates recent studies. The Asn, Asp, Gly, and Pro values indicate potential beta-turns, while the remaining four amino acids are useful for predicting non-beta-turns. Empirical evaluation using three nonredundant datasets shows favorable Qtotal, Qpredicted and MCC values when compared with over a dozen modern competing methods. Our method is the first to break the 80% Qtotal barrier and achieves Qtotal = 80.9%, MCC = 0.47, and a Qpredicted higher by over 6% when compared with the second-best method. We use feature selection to reduce the dimensionality of the feature vector used as the input for the proposed prediction method. The applied feature set is smaller by 86, 62 and 37% when compared with the second-best and the two third-best (with respect to MCC) competing methods, respectively. Experiments show that the proposed method constitutes an improvement over the competing prediction methods. The proposed prediction model can better discriminate between beta-turns and non-beta-turns due to obtaining lower numbers of false positive predictions. The prediction model and datasets are freely available at http://biomine.ece.ualberta.ca/BTNpred/BTNpred.html.

  16. Prediction of beta-turns at over 80% accuracy based on an ensemble of predicted secondary structures and multiple alignments

    PubMed Central

    Zheng, Ce; Kurgan, Lukasz

    2008-01-01

    Background: β-turn is a secondary protein structure type that plays a significant role in protein folding, stability, and molecular recognition. To date, several methods for predicting β-turns from protein sequences have been developed, but they are characterized by relatively poor prediction quality. The novelty of the proposed sequence-based β-turn predictor stems from the use of window-based information extracted from four predicted three-state secondary structures, which together with a selected set of position specific scoring matrix (PSSM) values serve as the input to a support vector machine (SVM) predictor. Results: We show that (1) all four predicted secondary structures are useful; (2) the most useful information extracted from the predicted secondary structure includes the structure of the predicted residue, the secondary structure content in a window around the predicted residue, and features that indicate whether the predicted residue is inside a secondary structure segment; (3) the PSSM values of Asn, Asp, Gly, Ile, Leu, Met, Pro, and Val were among the top ranked features, which corroborates recent studies. The Asn, Asp, Gly, and Pro values indicate potential β-turns, while the remaining four amino acids are useful for predicting non-β-turns. Empirical evaluation using three nonredundant datasets shows favorable Qtotal, Qpredicted and MCC values when compared with over a dozen modern competing methods. Our method is the first to break the 80% Qtotal barrier and achieves Qtotal = 80.9%, MCC = 0.47, and a Qpredicted higher by over 6% when compared with the second-best method. We use feature selection to reduce the dimensionality of the feature vector used as the input for the proposed prediction method. The applied feature set is smaller by 86, 62 and 37% when compared with the second-best and the two third-best (with respect to MCC) competing methods, respectively. Conclusion: Experiments show that the proposed method constitutes an improvement over the competing prediction methods. The proposed prediction model can better discriminate between β-turns and non-β-turns due to obtaining lower numbers of false positive predictions. The prediction model and datasets are freely available at http://biomine.ece.ualberta.ca/BTNpred/BTNpred.html. PMID:18847492
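
    The window-based feature construction described in these two records can be sketched as follows in Python; the window size, the one-hot/content encodings, the selected PSSM columns and the random stand-in data are illustrative assumptions, not the paper's exact feature set.

      import numpy as np
      from sklearn.svm import SVC

      SS_CODE = {"H": 0, "E": 1, "C": 2}            # helix / strand / coil
      SELECTED = [2, 3, 7, 9, 10, 12, 14, 19]        # 8 illustrative PSSM columns (stand-ins for Asn, Asp, Gly, ...)

      def window_features(ss_pred, pssm, pos, w=3):
          """Features for residue `pos`: one-hot secondary structure of the residue,
          SS content in a +/-w window, and selected PSSM values at the residue."""
          one_hot = np.zeros(3)
          one_hot[SS_CODE[ss_pred[pos]]] = 1.0
          lo, hi = max(0, pos - w), min(len(ss_pred), pos + w + 1)
          content = np.array([ss_pred[lo:hi].count(s) / (hi - lo) for s in "HEC"])
          return np.concatenate([one_hot, content, pssm[pos, SELECTED]])

      # Toy usage with random stand-ins for the predicted structure and PSSM rows
      rng = np.random.default_rng(0)
      seq_len = 120
      ss_pred = "".join(rng.choice(list("HEC"), seq_len))
      pssm = rng.normal(size=(seq_len, 20))
      X = np.array([window_features(ss_pred, pssm, i) for i in range(seq_len)])
      y = rng.integers(0, 2, seq_len)                # 1 = beta-turn, 0 = non-turn (toy labels)
      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.score(X, y))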

  17. Fukunaga-Koontz feature transformation for statistical structural damage detection and hierarchical neuro-fuzzy damage localisation

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2017-07-01

    Considering jointly damage sensitive features (DSFs) of signals recorded by multiple sensors, applying advanced transformations to these DSFs and assessing systematically their contribution to damage detectability and localisation can significantly enhance the performance of structural health monitoring systems. This philosophy is explored here for partial autocorrelation coefficients (PACCs) of acceleration responses. They are interrogated with the help of the linear discriminant analysis based on the Fukunaga-Koontz transformation using datasets of the healthy and selected reference damage states. Then, a simple but efficient fast forward selection procedure is applied to rank the DSF components with respect to statistical distance measures specialised for either damage detection or localisation. For the damage detection task, the optimal feature subsets are identified based on the statistical hypothesis testing. For damage localisation, a hierarchical neuro-fuzzy tool is developed that uses the DSF ranking to establish its own optimal architecture. The proposed approaches are evaluated experimentally on data from non-destructively simulated damage in a laboratory scale wind turbine blade. The results support our claim of being able to enhance damage detectability and localisation performance by transforming and optimally selecting DSFs. It is demonstrated that the optimally selected PACCs from multiple sensors or their Fukunaga-Koontz transformed versions can not only improve the detectability of damage via statistical hypothesis testing but also increase the accuracy of damage localisation when used as inputs into a hierarchical neuro-fuzzy network. Furthermore, the computational effort of employing these advanced soft computing models for damage localisation can be significantly reduced by using transformed DSFs.
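
    For reference, a minimal numpy sketch of the classical two-class Fukunaga-Koontz transform is shown below: the summed class covariance is whitened, and the eigendecomposition of one whitened class covariance ranks directions by how strongly they discriminate between the two states. The toy healthy/damaged data are invented for illustration and do not reproduce the PACC features of this study.

      import numpy as np

      def fukunaga_koontz(X1, X2):
          """Two-class Fukunaga-Koontz transform.

          X1, X2: (n_samples x n_features) data for the two states.
          Returns the transform matrix and the eigenvalues of the whitened class-1
          covariance (values near 1 favor class 1, values near 0 favor class 2)."""
          C1 = np.cov(X1, rowvar=False)
          C2 = np.cov(X2, rowvar=False)
          # Whitening transform P for the summed covariance: P (C1 + C2) P^T = I
          d, U = np.linalg.eigh(C1 + C2)
          P = U @ np.diag(1.0 / np.sqrt(d)) @ U.T
          lam, V = np.linalg.eigh(P @ C1 @ P.T)      # eigenvalues lie in [0, 1]
          return V.T @ P, lam

      # Toy example: healthy vs. damaged feature vectors (random stand-ins)
      rng = np.random.default_rng(0)
      healthy = rng.normal(size=(300, 6))
      damaged = rng.normal(size=(300, 6)) @ np.diag([1, 1, 1, 2.5, 1, 1])  # one direction altered
      W, lam = fukunaga_koontz(healthy, damaged)
      print(np.round(lam, 2))   # the smallest eigenvalue marks the most damage-discriminative direction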

  18. Exploration versus exploitation in space, mind, and society

    PubMed Central

    Hills, Thomas T.; Todd, Peter M.; Lazer, David; Redish, A. David; Couzin, Iain D.

    2015-01-01

    Search is a ubiquitous property of life. Although diverse domains have worked on search problems largely in isolation, recent trends across disciplines indicate that the formal properties of these problems share similar structures and, often, similar solutions. Moreover, internal search (e.g., memory search) shows similar characteristics to external search (e.g., spatial foraging), including shared neural mechanisms consistent with a common evolutionary origin across species. Search problems and their solutions also scale from individuals to societies, underlying and constraining problem solving, memory, information search, and scientific and cultural innovation. In summary, search represents a core feature of cognition, with a vast influence on its evolution and processes across contexts and requiring input from multiple domains to understand its implications and scope. PMID:25487706

  19. Superposing pure quantum states with partial prior information

    NASA Astrophysics Data System (ADS)

    Dogra, Shruti; Thomas, George; Ghosh, Sibasish; Suter, Dieter

    2018-05-01

    The principle of superposition is an intriguing feature of quantum mechanics, which is regularly exploited in many different circumstances. A recent work [M. Oszmaniec et al., Phys. Rev. Lett. 116, 110403 (2016), 10.1103/PhysRevLett.116.110403] shows that the fundamentals of quantum mechanics restrict the process of superimposing two unknown pure states, even though it is possible to superimpose two quantum states with partial prior knowledge. The prior knowledge imposes geometrical constraints on the choice of input states. We discuss an experimentally feasible protocol to superimpose multiple pure states of a d -dimensional quantum system and carry out an explicit experimental realization for two single-qubit pure states with partial prior information on a two-qubit NMR quantum information processor.

  20. A flexible telecom satellite repeater based on microwave photonic technologies

    NASA Astrophysics Data System (ADS)

    Sotom, Michel; Benazet, Benoît; Maignan, Michel

    2017-11-01

    Future telecom satellites in geostationary Earth orbit (GEO) will require advanced payloads in Ka-band so as to receive, route and re-transmit hundreds of microwave channels over multiple antenna beams. We report on the proof-of-concept demonstration of an analogue repeater making use of microwave photonic technologies for supporting broadband, transparent, and flexible cross-connectivity. It has microwave input and output sections, and features a photonic core for LO distribution, frequency down-conversion, and cross-connection of RF channels. With benefits such as transparency to RF frequency, infinite RF isolation, and mass and volume savings, such a microwave photonic cross-connect compares favourably with microwave implementations and, based on optical MEMS switches, could grow to large port counts.

  1. Wideband Reconfigurable Harmonically Tuned GaN SSPA for Cognitive Radios

    NASA Technical Reports Server (NTRS)

    Waldstein, Seth W.; Barbosa Kortright, Miguel A.; Simons, Rainee N.

    2017-01-01

    The paper presents the architecture of a wideband reconfigurable harmonically-tuned Gallium Nitride (GaN) Solid State Power Amplifier (SSPA) for cognitive radios. When interfaced with the physical layer of a cognitive communication system, this amplifier topology offers broadband high efficiency through the use of multiple tuned input/output matching networks. This feature enables the cognitive radio to reconfigure the operating frequency without sacrificing efficiency. This paper additionally presents, as a proof of concept, the design, fabrication, and test results for a GaN inverse class-F amplifier operating at X-band (8.4 GHz) that achieves a maximum output power of 5.14 W, a Power Added Efficiency (PAE) of 38.6%, and a Drain Efficiency (DE) of 48.9% under continuous wave (CW) operation.

  2. Modeling the Severe Storm on St. Patrick's Day 2015

    NASA Astrophysics Data System (ADS)

    Fok, M. C. H.; Buzulukova, N.; Perez, J. D.; Craven, J.

    2015-12-01

    A severe geomagnetic storm struck the magnetosphere on St. Patrick's Day 2015. The Dst index reached a minimum of -223 nT. Dazzling aurorae were seen as far south as Oregon and Illinois. To understand the origins of this extreme event, we simulated the dynamics of energetic ions and electrons using the Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model. We reproduced a number of observable signatures, such as extended ionospheric precipitation, the multiple-peak ring current seen by the TWINS spacecraft, and the relativistic electron enhancement during recovery seen by geosynchronous and Van Allen Probes satellites. We have performed several model runs with different input parameters and model setups to identify the physical processes responsible for the distinct features of this event.

  3. Bilinearity in Spatiotemporal Integration of Synaptic Inputs

    PubMed Central

    Li, Songting; Liu, Nan; Zhang, Xiao-hui; Zhou, Douglas; Cai, David

    2014-01-01

    Neurons process information via the integration of synaptic inputs from dendrites. Many experimental results demonstrate that dendritic integration can be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third additional bilinear term proportional to their product. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The proportionality coefficient is demonstrated to be nearly independent of the input strengths but is dependent on input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integrations, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse. PMID:25521832
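
    A small numerical illustration of the bilinear rule described above follows: the summed potential is approximated by the linear sum of the two individually elicited PSPs plus a product term scaled by a coefficient (denoted k here). The PSP waveforms and the value of k are illustrative assumptions, not fits from the study.

      import numpy as np

      t = np.linspace(0, 100, 1001)                      # time axis in ms

      def psp(onset, amp, tau_rise=2.0, tau_decay=15.0):
          """Simple alpha-like postsynaptic potential elicited by one input alone."""
          s = np.clip(t - onset, 0, None)
          return amp * (np.exp(-s / tau_decay) - np.exp(-s / tau_rise)) * (t >= onset)

      V_E = psp(onset=10.0, amp=3.0)      # excitatory PSP measured alone (mV)
      V_I = psp(onset=15.0, amp=-1.5)     # inhibitory PSP measured alone (mV)
      k = -0.05                           # illustrative bilinear coefficient (1/mV)

      # Bilinear spatiotemporal integration rule: linear sum plus a product correction
      V_sum = V_E + V_I + k * V_E * V_I
      print(f"peak linear sum: {np.max(V_E + V_I):.2f} mV, "
            f"peak bilinear estimate: {np.max(V_sum):.2f} mV")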

  4. Implementing Distributed Operations: A Comparison of Two Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Mishkin, Andrew; Larsen, Barbara

    2006-01-01

    Two very different deep space exploration missions--Mars Exploration Rover and Cassini--have made use of distributed operations for their science teams. In the case of MER, the distributed operations capability was implemented only after the prime mission was completed, as the rovers continued to operate well in excess of their expected mission lifetimes; Cassini, designed for a mission of more than ten years, had planned for distributed operations from its inception. The rapid command turnaround timeline of MER, as well as many of the operations features implemented to support it, have proven to be conducive to distributed operations. These features include: a single science team leader during the tactical operations timeline, highly integrated science and engineering teams, processes and file structures designed to permit multiple team members to work in parallel to deliver sequencing products, web-based spacecraft status and planning reports for team-wide access, and near-elimination of paper products from the operations process. Additionally, MER has benefited from the initial co-location of its entire operations team, and from having a single Principal Investigator, while Cassini operations have had to reconcile multiple science teams distributed from before launch. Cassini has faced greater challenges in implementing effective distributed operations. Because extensive early planning is required to capture science opportunities on its tour and because sequence development takes significantly longer than sequence execution, multiple teams are contributing to multiple sequences concurrently. The complexity of integrating inputs from multiple teams is exacerbated by spacecraft operability issues and resource contention among the teams, each of which has their own Principal Investigator. Finally, much of the technology that MER has exploited to facilitate distributed operations was not available when the Cassini ground system was designed, although later adoption of web-based and telecommunication tools has been critical to the success of Cassini operations.

  5. Inhibitory and modulatory inputs to the vocal central pattern generator of a teleost fish

    PubMed Central

    Rosner, Elisabeth; Rohmann, Kevin N.; Bass, Andrew H.

    2018-01-01

    Abstract Vocalization is a behavioral feature that is shared among multiple vertebrate lineages, including fish. The temporal patterning of vocal communication signals is set, in part, by central pattern generators (CPGs). Toadfishes are well‐established models for CPG coding of vocalization at the hindbrain level. The vocal CPG comprises three topographically separate nuclei: pre‐pacemaker, pacemaker, motor. While the connectivity between these nuclei is well understood, their neurochemical profile remains largely unexplored. The highly vocal Gulf toadfish, Opsanus beta, has been the subject of previous behavioral, neuroanatomical and neurophysiological studies. Combining transneuronal neurobiotin‐labeling with immunohistochemistry, we map the distribution of inhibitory neurotransmitters and neuromodulators along with gap junctions in the vocal CPG of this species. Dense GABAergic and glycinergic label is found throughout the CPG, with labeled somata immediately adjacent to or within CPG nuclei, including a distinct subset of pacemaker neurons co‐labeled with neurobiotin and glycine. Neurobiotin‐labeled motor and pacemaker neurons are densely co‐labeled with the gap junction protein connexin 35/36, supporting the hypothesis that transneuronal neurobiotin‐labeling occurs, at least in part, via gap junction coupling. Serotonergic and catecholaminergic label is also robust within the entire vocal CPG, with additional cholinergic label in pacemaker and prepacemaker nuclei. Likely sources of these putative modulatory inputs are neurons within or immediately adjacent to vocal CPG neurons. Together with prior neurophysiological investigations, the results reveal potential mechanisms for generating multiple classes of social context‐dependent vocalizations with widely divergent temporal and spectral properties. PMID:29424431

  6. WEGO 2.0: a web tool for analyzing and plotting GO annotations, 2018 update.

    PubMed

    Ye, Jia; Zhang, Yong; Cui, Huihai; Liu, Jiawei; Wu, Yuqing; Cheng, Yun; Xu, Huixing; Huang, Xingxin; Li, Shengting; Zhou, An; Zhang, Xiuqing; Bolund, Lars; Chen, Qiang; Wang, Jian; Yang, Huanming; Fang, Lin; Shi, Chunmei

    2018-05-18

    WEGO (Web Gene Ontology Annotation Plot), created in 2006, is a simple but useful tool for visualizing, comparing and plotting GO (Gene Ontology) annotation results. Owing largely to the rapid development of high-throughput sequencing and the increasing acceptance of GO, WEGO has seen outstanding growth in the number of users and citations in recent years, which motivated us to update it to version 2.0. WEGO uses GO annotation results as input. Based on GO's standardized DAG (Directed Acyclic Graph) structured vocabulary system, the number of genes corresponding to each GO ID is calculated and shown in a graphical format. The WEGO 2.0 updates target four aspects, aiming to provide a more efficient and up-to-date approach for comparative genomic analyses. First, the number of input files, previously limited to three, is now unlimited, allowing WEGO to analyze multiple datasets. Also added in this version are reference datasets of nine model species that can be adopted as baselines in genomic comparative analyses. Furthermore, in the analysis process each Chi-square test is carried out across multiple datasets instead of between every two samples. Finally, WEGO 2.0 provides an additional output graph along with the traditional WEGO histogram, displaying the sorted P-values of GO terms and indicating their significant differences. At the same time, WEGO 2.0 features an entirely new user interface. WEGO is available for free at http://wego.genomics.org.cn.
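
    The multi-dataset Chi-square comparison mentioned above can be illustrated with scipy: a single contingency test over all datasets for one GO term, rather than a test for every pair of samples. The counts below are invented for illustration.

      from scipy.stats import chi2_contingency

      # Counts for one GO term across three annotated gene sets (illustrative numbers):
      # row 0 = genes annotated with the term, row 1 = genes not annotated with it
      table = [
          [120,  95,  310],
          [880, 905, 2690],
      ]
      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
      # A small p-value flags the GO term as differently represented among the datasets.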

  7. Scene segmentation of natural images using texture measures and back-propagation

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Phatak, Anil; Chatterji, Gano

    1993-01-01

    Knowledge of the three-dimensional world is essential for many guidance and navigation applications. A sequence of images from an electro-optical sensor can be processed using optical flow algorithms to provide a sparse set of ranges as a function of azimuth and elevation. A natural way to enhance the range map is by interpolation. However, this should be undertaken with care since interpolation assumes continuity of range. The range is continuous in certain parts of the image and can jump at object boundaries. In such situations, the ability to detect homogeneous object regions by scene segmentation can be used to determine regions in the range map that can be enhanced by interpolation. The use of scalar features derived from the spatial gray-level dependence matrix for texture segmentation is explored. Thresholding of histograms of scalar texture features is done for several images to select scalar features which result in a meaningful segmentation of the images. Next, the selected scalar features are used with a neural net to automate the segmentation procedure. Back-propagation is used to train the feed-forward neural network. The generalization of the network approach to subsequent images in the sequence is examined. It is shown that the use of multiple scalar features as input to the neural network results in a superior segmentation when compared with a single scalar feature. It is also shown that scalar features which are not useful individually result in a good segmentation when used together. The methodology is applied to both indoor and outdoor images.
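
    A minimal numpy sketch of deriving scalar texture features from a spatial gray-level dependence (co-occurrence) matrix is shown below; the quantization level, pixel offset and chosen statistics are illustrative assumptions rather than the feature set used in the paper.

      import numpy as np

      def cooccurrence_features(img, levels=8, offset=(0, 1)):
          """Scalar texture features from a spatial gray-level dependence matrix.

          img: 2-D array of intensities; offset: pixel displacement (dy, dx).
          Returns (contrast, energy) of the normalized co-occurrence matrix."""
          # Quantize intensities to a small number of gray levels
          q = np.floor(levels * (img - img.min()) / (np.ptp(img) + 1e-12)).astype(int)
          q = np.clip(q, 0, levels - 1)
          dy, dx = offset
          a = q[:q.shape[0] - dy, :q.shape[1] - dx]
          b = q[dy:, dx:]
          glcm = np.zeros((levels, levels))
          np.add.at(glcm, (a.ravel(), b.ravel()), 1)   # accumulate co-occurrence counts
          glcm /= glcm.sum()
          i, j = np.indices(glcm.shape)
          contrast = np.sum(glcm * (i - j) ** 2)
          energy = np.sum(glcm ** 2)
          return contrast, energy

      # Example on synthetic patches: a smooth gradient vs. a noisy patch
      x = np.linspace(0, 1, 64)
      smooth = np.tile(x, (64, 1))
      noisy = np.random.default_rng(0).random((64, 64))
      print(cooccurrence_features(smooth), cooccurrence_features(noisy))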

  8. Chemical sensors are hybrid-input memristors

    NASA Astrophysics Data System (ADS)

    Sysoev, V. I.; Arkhipov, V. E.; Okotrub, A. V.; Pershin, Y. V.

    2018-04-01

    Memristors are two-terminal electronic devices whose resistance depends on the history of the input signal (voltage or current). Here we demonstrate that chemical gas sensors can be considered as memristors with a generalized (hybrid) input, namely, with an input consisting of the voltage, analyte concentrations and applied temperature. The concept of hybrid-input memristors is demonstrated experimentally using a single-walled carbon nanotube chemical sensor. It is shown that with respect to the hybrid input, the sensor exhibits some features common with memristors, such as hysteretic input-output characteristics. This different perspective on chemical gas sensors may open new possibilities for smart sensor applications.

  9. CTF User's Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avramova, Maria; Blyth, Taylor S.; Salko, Robert K.

    This document describes how to make a CTF input deck. A CTF input deck is organized into Card Groups and Cards. A Card Group is a collection of Cards. A Card is defined as a line of input. Each Card may contain multiple data items. A Card is terminated by a new line.

  10. Decision Aids for Multiple-Decision Disease Management as Affected by Weather Input Errors

    USDA-ARS?s Scientific Manuscript database

    Many disease management decision support systems (DSS) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation or estimation from off-site sources, may affect model calculations and manage...

  11. Multiple-input multiple-output causal strategies for gene selection.

    PubMed

    Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John

    2011-11-25

    Traditional strategies for selecting variables in high dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. While these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes. The latter is essentially related to the fact that high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score which incorporates a causal term. In addition we show, in a meta-analysis study of six publicly available breast cancer microarray datasets, that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.

  12. Parameters Selection for Bivariate Multiscale Entropy Analysis of Postural Fluctuations in Fallers and Non-Fallers Older Adults.

    PubMed

    Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert

    2016-08-01

    Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments have provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," in Bulletin of Polish Academy of Science and Technology, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods are dependent on several input parameters, such as the embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physics Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of the selection of appropriate input parameters.
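
    To make the role of the input parameters concrete, the following univariate Python sketch coarse-grains a signal over several time scales and computes a simple sample-entropy variant with embedding dimension m and tolerance r; the multivariate multiscale estimator used in the study is more involved, and the toy signal and parameter values here are illustrative assumptions.

      import numpy as np

      def coarse_grain(x, scale):
          """Average consecutive non-overlapping windows of length `scale`."""
          n = len(x) // scale
          return x[:n * scale].reshape(n, scale).mean(axis=1)

      def sample_entropy(x, m=2, r=0.15):
          """Simple sample-entropy variant with embedding dimension m and tolerance r * std(x)."""
          x = np.asarray(x, float)
          tol = r * x.std()
          def count_matches(dim):
              emb = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])
              d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
              return np.sum(d <= tol) - len(emb)            # exclude self-matches
          b, a = count_matches(m), count_matches(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      # Entropy over multiple time scales for a toy postural-sway-like signal
      rng = np.random.default_rng(0)
      sway = np.cumsum(rng.normal(0, 1, 1200)) * 0.01 + rng.normal(0, 0.05, 1200)
      for scale in (1, 2, 4, 8):
          print(scale, round(sample_entropy(coarse_grain(sway, scale)), 3))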

  13. Modified-hybrid optical neural network filter for multiple object recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.

    2009-08-01

    Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified hybrid optical neural network filter. We applied an optical mask to the hybrid optical neural network filter's input. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified-hybrid optical neural network filter is optimized to perform best in cluttered scenes of the true-class object. Due to the shift invariance properties inherited from its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified-hybrid optical neural network filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The filter is shown to exhibit, with a single pass over the input data, simultaneous out-of-plane rotation and shift invariance and good clutter tolerance. It is able to successfully detect and correctly classify the true-class objects within background clutter for which there has been no previous training.

  14. Method of generating features optimal to a dataset and classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.

    A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.

  15. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external ear, middle ear, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  16. The Device Centric Communication System for 5G Networks

    NASA Astrophysics Data System (ADS)

    Biswash, S. K.; Jayakody, D. N. K.

    2017-01-01

    The Fifth Generation (5G) communication networks have several functional features such as Massive Multiple Input Multiple Output (MIMO), device-centric data and voice support, and smarter-device communications. The objective for 5G networks is to achieve 1000x more throughput, 10x higher spectral efficiency, and 100x more energy efficiency than existing technologies. The 5G system will provide a balance between Quality of Experience (QoE) and Quality of Service (QoS) without compromising user benefit. The data rate has been the key metric for wireless QoS, while QoE deals with delay and throughput. In order to realize a balance between QoS and QoE, we propose a cellular device-centric communication methodology for overlapping network coverage areas in the 5G communication system. Multiple beacon signals from mobile towers indicate an overlapping network area, where a user must be forwarded to the next location area. To resolve this issue, we suggest a user-centric methodology (without a Base Station interface) to hand the device over to the next area, until the user finalizes the communication. The proposed method will reduce the signalling cost and overheads of the communication.

  17. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    PubMed

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-05-01

    Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch with a fixed size is obtained using the center-proliferation segmentation (CPS) method and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which considers multi-scale contextual information of deep layer maps sufficiently. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sample method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. INTERSPIA: a web application for exploring the dynamics of protein-protein interactions among multiple species.

    PubMed

    Kwon, Daehong; Lee, Daehwan; Kim, Juyeon; Lee, Jongin; Sim, Mikang; Kim, Jaebum

    2018-05-09

    Proteins perform biological functions through cascading interactions with each other by forming protein complexes. As a result, interactions among proteins, called protein-protein interactions (PPIs), are not completely free from selection constraint during evolution. Therefore, the identification and analysis of PPI changes during evolution can give us new insight into the evolution of functions. Although many algorithms, databases and websites have been developed to help the study of PPIs, most of them are limited to visualizing the structure and features of PPIs in a single chosen species, with limited functions from the visualization perspective. This leads to difficulties in identifying different patterns of PPIs in different species and their functional consequences. To resolve these issues, we developed a web application called INTER-Species Protein Interaction Analysis (INTERSPIA). Given a set of proteins of a user's interest, INTERSPIA first discovers additional proteins that are functionally associated with the input proteins and searches for different patterns of PPIs in multiple species through a server-side pipeline, and second visualizes the dynamics of PPIs in multiple species using an easy-to-use web interface. INTERSPIA is freely available at http://bioinfo.konkuk.ac.kr/INTERSPIA/.

  19. Microprocessor-controlled, wide-range streak camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Amy E.; Hollabaugh, Craig

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  20. Microprocessor-controlled wide-range streak camera

    NASA Astrophysics Data System (ADS)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  1. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, the ability to perform multiple-realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  2. Multiple-Input Subject-Specific Modeling of Plasma Glucose Concentration for Feedforward Control.

    PubMed

    Kotz, Kaylee; Cinar, Ali; Mei, Yong; Roggendorf, Amy; Littlejohn, Elizabeth; Quinn, Laurie; Rollins, Derrick K

    2014-11-26

    The ability to accurately develop subject-specific input-causation models of blood glucose concentration (BGC) for large input sets can have a significant impact on tightening control for insulin-dependent diabetes. More specifically, for Type 1 diabetics (T1Ds), it can lead to an effective artificial pancreas (i.e., an automatic control system that delivers exogenous insulin) under extreme changes in critical disturbances. These disturbances include food consumption, activity variations, and physiological stress changes. Thus, this paper presents a free-living, outpatient, multiple-input modeling method for BGC with strong causation attributes that is stable and guards against overfitting, providing an effective modeling approach for feedforward control (FFC). This approach is a Wiener block-oriented methodology, which has unique attributes for meeting critical requirements for effective, long-term FFC.

  3. Image Feature Types and Their Predictions of Aesthetic Preference and Naturalness

    PubMed Central

    Ibarra, Frank F.; Kardan, Omid; Hunter, MaryCarol R.; Kotabe, Hiroki P.; Meyer, Francisco A. C.; Berman, Marc G.

    2017-01-01

    Previous research has investigated ways to quantify visual information of a scene in terms of a visual processing hierarchy, i.e., making sense of the visual environment by segmentation and integration of elementary sensory input. Guided by this research, studies have developed categories for low-level visual features (e.g., edges, colors), high-level visual features (scene-level entities that convey semantic information such as objects), and models of how those features predict aesthetic preference and naturalness. For example, in Kardan et al. (2015a), 52 participants provided aesthetic preference and naturalness ratings, which are used in the current study, for 307 images of mixed natural and urban content. Kardan et al. (2015a) then developed a model using low-level features to predict aesthetic preference and naturalness and could do so with high accuracy. What has yet to be explored is the ability of higher-level visual features (e.g., horizon line position relative to viewer, geometry of building distribution relative to visual access) to predict aesthetic preference and naturalness of scenes, and whether higher-level features mediate some of the association between the low-level features and aesthetic preference or naturalness. In this study we investigated these relationships and found that low- and high-level features explain 68.4% of the variance in aesthetic preference ratings and 88.7% of the variance in naturalness ratings. Additionally, several high-level features mediated the relationship between the low-level visual features and aesthetic preference. In a multiple mediation analysis, the high-level feature mediators accounted for over 50% of the variance in predicting aesthetic preference. These results show that high-level visual features play a prominent role in predicting aesthetic preference, but do not completely eliminate the predictive power of the low-level visual features. These strong predictors provide powerful insights for future research relating to landscape and urban design with the aim of maximizing subjective well-being, which could lead to improved health outcomes on a larger scale. PMID:28503158

  4. Stereoscopic Feature Tracking System for Retrieving Velocity of Surface Waters

    NASA Astrophysics Data System (ADS)

    Zuniga Zamalloa, C. C.; Landry, B. J.

    2017-12-01

    The present work is concerned with the surface velocity retrieval of flows using a stereoscopic setup and finding the correspondence in the images via feature tracking (FT). The feature tracking provides a key benefit of substantially reducing the level of user input. In contrast to other commonly used methods (e.g., normalized cross-correlation), FT does not require the user to prescribe interrogation window sizes and removes the need for masking when specularities are present. The results of the current FT methodology are comparable to those obtained via Large Scale Particle Image Velocimetry while requiring little to no user input which allowed for rapid, automated processing of imagery.

  5. Role Discovery in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-14

    RolX takes the features from ReFeX or any other feature matrix as input and outputs role assignments (clusters). The output of RolX is a csv file containing the node-role memberships and a csv file containing the role-feature definitions.
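
    A rough Python sketch of the role-assignment step is shown below, assuming a non-negative matrix factorization in the style of RolX-like role discovery; the file names (which match the sketch under the ReFeX record above), the number of roles, and the NMF settings are illustrative assumptions.

      import numpy as np
      import pandas as pd
      from sklearn.decomposition import NMF

      # node_features.csv: rows = nodes, columns = (non-negative) recursive features
      features = pd.read_csv("node_features.csv", index_col=0)

      n_roles = 4
      model = NMF(n_components=n_roles, init="nndsvda", max_iter=500, random_state=0)
      node_role = model.fit_transform(features.values)     # node x role memberships
      role_feature = model.components_                      # role x feature definitions

      pd.DataFrame(node_role, index=features.index,
                   columns=[f"role_{k}" for k in range(n_roles)]).to_csv("node_roles.csv")
      pd.DataFrame(role_feature,
                   columns=features.columns).to_csv("role_definitions.csv")

      # Hard assignment: each node gets the role with the largest membership weight
      print(pd.Series(np.argmax(node_role, axis=1), index=features.index).head())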

  6. Automated simultaneous multiple feature classification of MTI data

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Theiler, James P.; Balick, Lee K.; Pope, Paul A.; Szymanski, John J.; Perkins, Simon J.; Porter, Reid B.; Brumby, Steven P.; Bloch, Jeffrey J.; David, Nancy A.; Galassi, Mark C.

    2002-08-01

    Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.

  7. Face recognition via Gabor and convolutional neural network

    NASA Astrophysics Data System (ADS)

    Lu, Tongwei; Wu, Menglu; Lu, Tao

    2018-04-01

    In recent years, the powerful feature learning and classification ability of convolutional neural networks has attracted wide attention. Compared with deep learning, traditional machine learning algorithms have an explanatory power that deep learning lacks. Thus, in this paper, we propose a method that extracts features with a traditional algorithm and uses them as the input of a convolutional neural network. In order to reduce the complexity of the network, the kernel function of the Gabor wavelet is used to extract features at different positions, frequencies and orientations of the target image; it is sensitive to image edges and provides good direction and scale selectivity. The responses of the image in eight orientations at one scale are used as the input of the proposed network. The network has the advantages of weight sharing and local connectivity, and the texture features of the input reduce the influence of facial expression, gesture and illumination. At the same time, we introduce a layer that combines the results of pooling and convolution to extract deeper features. The network is trained with the open source Caffe framework, which is beneficial for feature extraction. The experimental results show that the proposed network structure effectively overcomes the barrier of illumination, has good robustness, and is more accurate and faster than the traditional algorithm.
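
    A minimal sketch of the front end described above: a bank of Gabor kernels at eight orientations on a single scale, whose responses are stacked as the multi-channel input a CNN would consume. The kernel size, sigma, and wavelength are illustrative assumptions, not the paper's settings.

```python
# Sketch: Gabor responses at eight orientations stacked as CNN input channels.
# The kernel parameters and "face.png" are illustrative assumptions.
import cv2
import numpy as np

def gabor_channels(gray, n_orientations=8, ksize=31, sigma=4.0, lambd=10.0):
    """Return an (n_orientations, H, W) stack of Gabor filter responses."""
    img = gray.astype(np.float32)
    channels = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5)
        channels.append(cv2.filter2D(img, cv2.CV_32F, kernel))
    return np.stack(channels, axis=0)

face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
x = gabor_channels(face)                               # shape (8, H, W)
# x would then be fed to a convolutional network expecting 8 input channels.
```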

  8. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, B.; Wood, R.T.

    1997-04-22

    A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system. 1 fig.

  9. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, Brian; Wood, Richard T.

    1997-01-01

    A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.

  10. Wide bandwidth phase-locked loop circuit

    NASA Technical Reports Server (NTRS)

    Koudelka, Robert David (Inventor)

    2005-01-01

    A PLL circuit uses a multiple frequency range PLL in order to phase lock input signals having a wide range of frequencies. The PLL includes a VCO capable of operating in multiple different frequency ranges and a divider bank independently configurable to divide the output of the VCO. A frequency detector detects a frequency of the input signal and a frequency selector selects an appropriate frequency range for the PLL. The frequency selector automatically switches the PLL to a different frequency range as needed in response to a change in the input signal frequency. Frequency range hysteresis is implemented to avoid operating the PLL near a frequency range boundary.
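
    The automatic range switching with hysteresis can be illustrated with a few lines of logic: the selected range changes only when the detected input frequency moves past a range boundary by more than a guard band. The boundaries and guard band below are hypothetical values, not figures from the patent.

```python
# Sketch of frequency-range selection with hysteresis: the range changes only
# when the input frequency crosses a boundary by more than a guard band.
# Range boundaries and the guard band are hypothetical values.
RANGE_EDGES = [100e6, 200e6, 400e6, 800e6]   # Hz, boundaries between five ranges
GUARD = 5e6                                   # hysteresis band around each boundary

def select_range(freq_hz, current_range):
    """Return the VCO range index for freq_hz, with hysteresis around boundaries."""
    for i, edge in enumerate(RANGE_EDGES):
        # Moving down out of a higher range requires dropping GUARD below the edge;
        # moving up into a higher range requires exceeding the edge by GUARD.
        threshold = edge - GUARD if current_range > i else edge + GUARD
        if freq_hz < threshold:
            return i
    return len(RANGE_EDGES)

# Example: a frequency hovering near the 200 MHz boundary does not cause toggling.
range_idx = 1
for f in [195e6, 201e6, 203e6, 207e6, 199e6]:
    range_idx = select_range(f, range_idx)
    print(f / 1e6, "MHz -> range", range_idx)
```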

  11. Compensator improvement for multivariable control systems

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Mcdaniel, W. L., Jr.; Gresham, L. L.

    1977-01-01

    A theory and the associated numerical technique are developed for an iterative design improvement of the compensation for linear, time-invariant control systems with multiple inputs and multiple outputs. A strict constraint algorithm is used in obtaining a solution of the specified constraints of the control design. The result of the research effort is the multiple input, multiple output Compensator Improvement Program (CIP). The objective of the Compensator Improvement Program is to modify in an iterative manner the free parameters of the dynamic compensation matrix so that the system satisfies frequency domain specifications. In this exposition, the underlying principles of the multivariable CIP algorithm are presented and the practical utility of the program is illustrated with space vehicle related examples.

  12. Brief announcement: Hypergraph partitioning for parallel sparse matrix-matrix multiplication

    DOE PAGES

    Ballard, Grey; Druinsky, Alex; Knight, Nicholas; ...

    2015-01-01

    The performance of parallel algorithms for sparse matrix-matrix multiplication is typically determined by the amount of interprocessor communication performed, which in turn depends on the nonzero structure of the input matrices. In this paper, we characterize the communication cost of a sparse matrix-matrix multiplication algorithm in terms of the size of a cut of an associated hypergraph that encodes the computation for a given input nonzero structure. Obtaining an optimal algorithm corresponds to solving a hypergraph partitioning problem. Furthermore, our hypergraph model generalizes several existing models for sparse matrix-vector multiplication, and we can leverage hypergraph partitioners developed for that computation to improve application-specific algorithms for multiplying sparse matrices.

  13. AQUATOX Features and Tools

    EPA Pesticide Factsheets

    Numerous features have been included to facilitate the modeling process, from model setup and data input, presentation and analysis of results, to easy export of results to spreadsheet programs for additional analysis.

  14. A soft computing based approach using modified selection strategy for feature reduction of medical systems.

    PubMed

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are addressed by the feature reduction software developed here, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed to reduce the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, premature convergence to local solutions is a further problem, which is also eliminated by the developed software. Faster and more effective results are obtained in the test procedures. The twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
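
    A minimal sketch, under stated assumptions, of the roulette-wheel selection step over binary feature masks; the fitness function is a toy placeholder, and the paper's classifier-driven fitness, linear order crossover, and middle-region candidate injection are only approximated.

```python
# Sketch: genetic feature-subset reduction with roulette-wheel selection.
# The "informative" set and the fitness function are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_features, pop_size = 12, 20
informative = {0, 2, 5, 7, 9}                     # hypothetical useful attributes

def fitness(mask):
    selected = set(np.flatnonzero(mask))
    # Reward covering informative attributes, penalize large subsets.
    return max(0.1, 1.0 + 2.0 * len(selected & informative) - 0.5 * len(selected))

population = rng.integers(0, 2, size=(pop_size, n_features))
for _generation in range(50):
    scores = np.array([fitness(ind) for ind in population])
    probs = scores / scores.sum()                  # roulette-wheel probabilities
    parents = population[rng.choice(pop_size, size=pop_size, p=probs)]
    cut = rng.integers(1, n_features)              # single-point crossover (stand-in)
    children = np.array([np.concatenate([parents[i, :cut], parents[(i + 1) % pop_size, cut:]])
                         for i in range(pop_size)])
    flip = rng.random(children.shape) < 0.02       # bit-flip mutation
    population = np.where(flip, 1 - children, children)

best = population[np.argmax([fitness(ind) for ind in population])]
print("reduct (selected attribute indices):", np.flatnonzero(best))
```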

  15. A Soft Computing Based Approach Using Modified Selection Strategy for Feature Reduction of Medical Systems

    PubMed Central

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are addressed by the feature reduction software developed here, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed to reduce the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, premature convergence to local solutions is a further problem, which is also eliminated by the developed software. Faster and more effective results are obtained in the test procedures. The twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data. PMID:23573172

  16. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become a key issue for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, compared with the incomplete description given by Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector of the classifier. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
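
    A minimal sketch of the feature-space optimization idea using scikit-learn: train a random forest, rank features by importance, and retrain on the top-ranked subset. Synthetic data stands in for the wavelet packet time-frequency energy-rate features.

```python
# Sketch: random-forest feature ranking and retraining on the reduced feature set.
# Synthetic data stands in for the wavelet-packet energy-rate features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=32, n_informative=10,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]

full_acc = cross_val_score(forest, X, y, cv=5).mean()
top = ranking[:12]                      # keep only the most important features
reduced_acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                              X[:, top], y, cv=5).mean()
print(f"full feature space: {full_acc:.3f}, optimized subset: {reduced_acc:.3f}")
```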

  17. Tele-Autonomous control involving contact. Final Report Thesis; [object localization]

    NASA Technical Reports Server (NTRS)

    Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.

    1990-01-01

    Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features: feature points (point to point matching) and feature unit direction vectors (vector to vector matching) can also be used as inputs, and there is no upper limit on the number of features input. The algorithm allows the use of redundant features to find a better solution. It uses dual number quaternions to represent the position and orientation of an object and uses the least squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties encountered when an operator controls a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques, such as predictor display and time desynchronization, to help overcome these difficulties is then discussed.

  18. Feature extraction for ultrasonic sensor based defect detection in ceramic components

    NASA Astrophysics Data System (ADS)

    Kesharaju, Manasa; Nagarajah, Romesh

    2014-02-01

    High density silicon carbide materials are commonly used as the ceramic element of hard armour inserts used in traditional body armour systems to reduce their weight, while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected visually offline using an X-ray technique that is time consuming and very expensive. In addition, multiple defects are often misinterpreted as single defects in X-ray images. Therefore, to address these problems, an ultrasonic non-destructive approach is being investigated. Ultrasound based inspection would be far more cost effective and reliable, as the methodology is applicable to on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and the wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for purposes of signal classification. Two different classifiers, using artificial neural networks (supervised) and clustering (un-supervised), are supplied with features selected using Principal Component Analysis (PCA) and their classification performance compared. This investigation establishes experimentally that Principal Component Analysis (PCA) can be effectively used as a feature selection method that provides superior results for classifying various defects in the context of ultrasonic inspection, in comparison with the X-ray technique.
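
    A minimal sketch of the described pipeline on synthetic signals: wavelet sub-band energies as features, PCA for dimensionality reduction, and a small neural-network classifier. An ordinary discrete wavelet transform (PyWavelets) stands in for the paper's sub-band coding scheme, and all parameters are assumptions.

```python
# Sketch: sub-band wavelet energies -> PCA -> neural-network classifier.
# The synthetic signals and all parameters are illustrative assumptions.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def subband_energies(signal, wavelet="db4", level=4):
    """Energy of each wavelet sub-band, used as the feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def make_signal(defect):
    """Synthetic 'defect' vs 'no defect' A-scan signal."""
    t = np.linspace(0, 1, 1024)
    s = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(t.size)
    if defect:
        s += 0.8 * np.exp(-((t - 0.6) ** 2) / 1e-4) * np.sin(2 * np.pi * 200 * t)
    return s

X = np.array([subband_energies(make_signal(d)) for d in ([0] * 100 + [1] * 100)])
y = np.array([0] * 100 + [1] * 100)

clf = make_pipeline(StandardScaler(), PCA(n_components=3),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
print("training accuracy:", clf.fit(X, y).score(X, y))
```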

  19. A reverberation-time-aware DNN approach leveraging spatial information for microphone array dereverberation

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Yang, Minglei; Li, Kehuang; Huang, Zhen; Siniscalchi, Sabato Marco; Wang, Tong; Lee, Chin-Hui

    2017-12-01

    A reverberation-time-aware deep-neural-network (DNN)-based multi-channel speech dereverberation framework is proposed to handle a wide range of reverberation times (RT60s). There are three key steps in designing a robust system. First, to accomplish simultaneous speech dereverberation and beamforming, we propose a framework, namely DNNSpatial, that selectively concatenates log-power spectral (LPS) input features of reverberant speech from multiple microphones in an array and maps them into the expected output LPS features of anechoic reference speech based on a single deep neural network (DNN). Next, the temporal auto-correlation function of received signals at different RT60s is investigated to show that RT60-dependent temporal-spatial contexts in feature selection are needed in the DNNSpatial training stage in order to optimize the system performance in diverse reverberant environments. Finally, the RT60 is estimated to select the proper temporal and spatial contexts before feeding the log-power spectrum features to the trained DNNs for speech dereverberation. The experimental evidence gathered in this study indicates that the proposed framework outperforms the state-of-the-art signal processing dereverberation algorithm weighted prediction error (WPE) and conventional DNNSpatial systems that do not take the reverberation time into account, even for extremely weak and severe reverberant conditions. The proposed technique generalizes well to unseen room size, array geometry and loudspeaker position, and is robust to reverberation time estimation error.

  20. Software-hardware complex for the input of telemetric information obtained from rocket studies of the radiation of the earth's upper atmosphere

    NASA Astrophysics Data System (ADS)

    Bazdrov, I. I.; Bortkevich, V. S.; Khokhlov, V. N.

    2004-10-01

    This paper describes a software-hardware complex for the input into a personal computer of telemetric information obtained by means of telemetry stations TRAL KR28, RTS-8, and TRAL K2N. Structural and functional diagrams are given of the input device and the hardware complex. Results that characterize the features of the input process and selective data of optical measurements of atmospheric radiation are given.

  1. Computer vision-based method for classification of wheat grains using artificial neural network.

    PubMed

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are reproduced from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are considered as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. The seven input parameters that are most effective on the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN model are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10⁻⁶ by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
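
    A minimal sketch of the simplification step: rank features by correlation with the class label (a crude stand-in for the correlation-based CfsSubsetEval algorithm), keep seven, and train an MLP on 180 grains while testing on 20. The feature data here is synthetic.

```python
# Sketch: correlation-ranked feature selection followed by an MLP classifier.
# Synthetic data stands in for the 21 visual features of the wheat grains, and
# the simple correlation ranking is only a stand-in for CfsSubsetEval.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, n_features=21, n_informative=7, random_state=1)

# Rank features by absolute correlation with the class label and keep seven.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
top7 = np.argsort(corr)[::-1][:7]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top7], y, train_size=180, random_state=1)
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=1).fit(X_tr, y_tr)
print("test accuracy on 20 held-out grains:", mlp.score(X_te, y_te))
```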

  2. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms the facial images are reshaped to a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.

  3. Integrated readout electronics for Belle II pixel detector

    NASA Astrophysics Data System (ADS)

    Blanco, R.; Leys, R.; Perić, I.

    2018-03-01

    This paper describes the readout components for Belle II that have been designed as integrated circuits. The ICs are connected to the DEPFET sensor by bump bonding. Three types of ICs have been developed: the SWITCHER for pixel matrix control, the DCD for readout and digitization of the sensor signals, and the DHP for digital data processing. The ICs are radiation tolerant and use several novel features, such as multiple-input differential amplifiers and fast, radiation-hard high-voltage drivers. The SWITCHER and DCD were developed at the University of Heidelberg and the Karlsruhe Institute of Technology (KIT), and the DHP at Bonn University. The IC development started in 2009 and was completed in 2016 with the submission of the final designs. The final ICs for the Belle II pixel detector and the related measurement results will be presented in this contribution.

  4. Multiband Reconfigurable Harmonically Tuned Gallium Nitride (GaN) Solid-State Power Amplifier (SSPA) for Cognitive Radios

    NASA Technical Reports Server (NTRS)

    Waldstein, Seth W.; Kortright, Barbosa Miguel A.; Simons, Rainee N.

    2017-01-01

    The paper presents the architecture of a wideband reconfigurable harmonically-tuned Gallium Nitride (GaN) Solid State Power Amplifier (SSPA) for cognitive radios. When interfaced with the physical layer of a cognitive communication system, this amplifier topology offers broadband high efficiency through the use of multiple tuned input/output matching networks. This feature enables the cognitive radio to reconfigure the operating frequency without sacrificing efficiency. This paper additionally presents as a proof-of-concept the design, fabrication, and test results for a GaN inverse Class-F type amplifier operating at X-band (8.4 GHz) that achieves a maximum output power of 5.14-W, Power Added Efficiency (PAE) of 38.6 percent, and Drain Efficiency (DE) of 48.9 percent under continuous wave (CW) operation.

  5. Spontaneous Magnetic Alignment by Yearling Snapping Turtles: Rapid Association of Radio Frequency Dependent Pattern of Magnetic Input with Novel Surroundings

    PubMed Central

    Landler, Lukas; Painter, Michael S.; Youmans, Paul W.; Hopkins, William A.; Phillips, John B.

    2015-01-01

    We investigated spontaneous magnetic alignment (SMA) by juvenile snapping turtles using exposure to low-level radio frequency (RF) fields at the Larmor frequency to help characterize the underlying sensory mechanism. Turtles, first introduced to the testing environment without the presence of RF aligned consistently towards magnetic north when subsequent magnetic testing conditions were also free of RF (‘RF off → RF off’), but were disoriented when subsequently exposed to RF (‘RF off → RF on’). In contrast, animals initially introduced to the testing environment with RF present were disoriented when tested without RF (‘RF on → RF off’), but aligned towards magnetic south when tested with RF (‘RF on → RF on’). Sensitivity of the SMA response of yearling turtles to RF is consistent with the involvement of a radical pair mechanism. Furthermore, the effect of RF appears to result from a change in the pattern of magnetic input, rather than elimination of magnetic input altogether, as proposed to explain similar effects in other systems/organisms. The findings show that turtles first exposed to a novel environment form a lasting association between the pattern of magnetic input and their surroundings. However, under natural conditions turtles would never experience a change in the pattern of magnetic input. Therefore, if turtles form a similar association of magnetic cues with the surroundings each time they encounter unfamiliar habitat, as seems likely, the same pattern of magnetic input would be associated with multiple sites/localities. This would be expected from a sensory input that functions as a global reference frame, helping to place multiple locales (i.e., multiple local landmark arrays) into register to form a global map of familiar space. PMID:25978736

  6. Spontaneous magnetic alignment by yearling snapping turtles: rapid association of radio frequency dependent pattern of magnetic input with novel surroundings.

    PubMed

    Landler, Lukas; Painter, Michael S; Youmans, Paul W; Hopkins, William A; Phillips, John B

    2015-01-01

    We investigated spontaneous magnetic alignment (SMA) by juvenile snapping turtles using exposure to low-level radio frequency (RF) fields at the Larmor frequency to help characterize the underlying sensory mechanism. Turtles, first introduced to the testing environment without the presence of RF aligned consistently towards magnetic north when subsequent magnetic testing conditions were also free of RF ('RF off → RF off'), but were disoriented when subsequently exposed to RF ('RF off → RF on'). In contrast, animals initially introduced to the testing environment with RF present were disoriented when tested without RF ('RF on → RF off'), but aligned towards magnetic south when tested with RF ('RF on → RF on'). Sensitivity of the SMA response of yearling turtles to RF is consistent with the involvement of a radical pair mechanism. Furthermore, the effect of RF appears to result from a change in the pattern of magnetic input, rather than elimination of magnetic input altogether, as proposed to explain similar effects in other systems/organisms. The findings show that turtles first exposed to a novel environment form a lasting association between the pattern of magnetic input and their surroundings. However, under natural conditions turtles would never experience a change in the pattern of magnetic input. Therefore, if turtles form a similar association of magnetic cues with the surroundings each time they encounter unfamiliar habitat, as seems likely, the same pattern of magnetic input would be associated with multiple sites/localities. This would be expected from a sensory input that functions as a global reference frame, helping to place multiple locales (i.e., multiple local landmark arrays) into register to form a global map of familiar space.

  7. Automatic Boosted Flood Mapping from Satellite Data

    NASA Technical Reports Server (NTRS)

    Coltin, Brian; McMichael, Scott; Smith, Trey; Fong, Terrence

    2016-01-01

    Numerous algorithms have been proposed to map floods from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, most require human input to succeed, either to specify a threshold value or to manually annotate training data. We introduce a new algorithm based on Adaboost which effectively maps floods without any human input, allowing for a truly rapid and automatic response. The Adaboost algorithm combines multiple thresholds to achieve results comparable to state-of-the-art algorithms which do require human input. We evaluate Adaboost, as well as numerous previously proposed flood mapping algorithms, on multiple MODIS flood images, as well as on hundreds of non-flood MODIS lake images, demonstrating its effectiveness across a wide variety of conditions.
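
    A minimal sketch of the threshold-combining idea with scikit-learn's AdaBoost, whose default weak learner is a depth-1 decision stump (a single learned threshold per round); the synthetic band values and labelling rule below are illustrative, not the authors' MODIS pipeline.

```python
# Sketch: AdaBoost over single-threshold stumps labels pixels as water / not water.
# Synthetic band values and the toy labelling rule replace real MODIS inputs.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
bands = rng.normal(size=(5000, 4))                    # stand-in for per-pixel band values
water = (bands[:, 0] - bands[:, 1] + 0.2 * bands[:, 2] > 0.5).astype(int)

# The default weak learner is a depth-1 decision tree: one threshold on one band
# per boosting round; the ensemble combines many such thresholds.
model = AdaBoostClassifier(n_estimators=50, random_state=0).fit(bands, water)
print("training accuracy:", model.score(bands, water))
```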

  8. Identification and Description of Alternative Means of Accomplishing IMS Operational Features.

    ERIC Educational Resources Information Center

    Dave, Ashok

    The operational features of feasible alternative configurations for a computer-based instructional management system are identified. Potential alternative means and components of accomplishing these features are briefly described. Included are aspects of data collection, data input, data transmission, data reception, scanning and processing,…

  9. Control and optimization system

    DOEpatents

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  10. User's Guide for the Updated EST/BEST Software System

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin

    2003-01-01

    This User's Guide describes the structure of the IPACS input file that reflects the modularity of each module. The structured format helps the user locate specific input data and manually enter or edit it. The IPACS input file can have any user-specified filename, but must have a DAT extension. The input file may consist of up to six input data blocks; the data blocks must be separated by delimiters beginning with the $ character. If multiple sections are desired, they must be arranged in the order listed.

  11. Quantity and Quality of Caregivers' Linguistic Input to 18-Month and 3-Year-Old Children Who Are Hard of Hearing.

    PubMed

    Ambrose, Sophie E; Walker, Elizabeth A; Unflat-Berry, Lauren M; Oleson, Jacob J; Moeller, Mary Pat

    2015-01-01

    The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-min semistructured, conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four of the quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit compared with the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language. Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences when compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language learning environments. This is particularly important given the relationships that were identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a style that is conversational eliciting as opposed to directive.

  12. Automatic plankton image classification combining multiple view features via multiple kernel learning.

    PubMed

    Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing

    2017-12-28

    Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial in order to understand those changes and protect marine ecosystems. This study was carried out to develop a broadly applicable plankton classification system with high accuracy for the increasing number of imaging devices. The literature shows that most plankton image classification systems have been limited to a single imaging device and a relatively narrow taxonomic scope, and a truly practical system for automatic plankton classification does not yet exist; this study partly fills that gap. Guided by this analysis and by the development of the technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification that combines multiple view features via multiple kernel learning (MKL). First, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, in particular by adding features such as Inner-Distance Shape Context for morphological representation. Second, we divided the features into different types from multiple views and fed them to multiple classifiers rather than only one, by optimally combining the different kernel matrices computed from the different feature types via multiple kernel learning. Moreover, we applied a feature selection method to choose optimal feature subsets from the redundant features to suit the different datasets from different imaging devices. We implemented the proposed classification system on three different datasets covering more than 20 categories from phytoplankton to zooplankton. The experimental results validate that our system outperforms state-of-the-art plankton image classification systems in terms of accuracy and robustness. This study demonstrates an automatic plankton image classification system combining multiple view features via multiple kernel learning. The results indicate that multiple view features combined by NLMKL using three kernel functions (linear, polynomial and Gaussian) describe and exploit the feature information better and thus achieve higher classification accuracy.
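
    A minimal sketch of the kernel-combination step: compute linear, polynomial, and Gaussian kernel matrices for each feature view and mix them into a single precomputed kernel for an SVM. Fixed equal weights stand in for the weights a multiple kernel learning solver would optimize, and the data is synthetic.

```python
# Sketch: combining kernels from multiple feature views into one precomputed kernel.
# Fixed mixing weights stand in for the weights an MKL solver would learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40, n_informative=15,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
views = [X[:, :20], X[:, 20:]]          # e.g., shape features vs. texture features

kernels = []
for V in views:
    kernels += [linear_kernel(V), polynomial_kernel(V, degree=3), rbf_kernel(V, gamma=0.05)]

weights = np.full(len(kernels), 1.0 / len(kernels))   # placeholder for learned MKL weights
K = sum(w * k for w, k in zip(weights, kernels))

svm = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", svm.score(K, y))
```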

  13. Simulation requirements for the Large Deployable Reflector (LDR)

    NASA Technical Reports Server (NTRS)

    Soosaar, K.

    1984-01-01

    Simulation tools for the large deployable reflector (LDR) are discussed. These tools are often equations of the transfer-function variety. However, transfer functions are inadequate to represent time-varying systems with multiple control systems of overlapping bandwidths, characterized by multi-input, multi-output features. Frequency domain approaches are useful design tools, but a full-up simulation is needed. Because a dedicated computer would be required for the high-frequency, multi-degree-of-freedom components encountered, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide output to the next block, and should be kept out of the direct simulation loop. The following blocks make up the simulation. The thermal model block is a classical, non-steady-state heat transfer program. The quasistatic block deals with problems associated with rigid body control of reflector segments. The steady state block assembles data into equations of motion and dynamics. A differential raytrace is obtained to establish the change in wave aberrations. The observation scene is described. The focal plane module converts the photon intensity impinging on it into electron streams or into permanent film records.

  14. Artificial spatiotemporal touch inputs reveal complementary decoding in neocortical neurons.

    PubMed

    Oddo, Calogero M; Mazzoni, Alberto; Spanne, Anton; Enander, Jonas M D; Mogensen, Hannes; Bengtsson, Fredrik; Camboni, Domenico; Micera, Silvestro; Jörntell, Henrik

    2017-04-04

    Investigations of the mechanisms of touch perception and decoding have been hampered by difficulties in achieving invariant patterns of skin sensor activation. To obtain reproducible spatiotemporal patterns of activation of sensory afferents, we used an artificial fingertip equipped with an array of neuromorphic sensors. The artificial fingertip was used to transduce real-world haptic stimuli into spatiotemporal patterns of spikes. These spike patterns were delivered to the skin afferents of the second digit of rats via an array of stimulation electrodes. Combined with low-noise intra- and extracellular recordings from neocortical neurons in vivo, this approach provided a previously inaccessible high resolution analysis of the representation of tactile information in the neocortical neuronal circuitry. The results indicate high information content in individual neurons and reveal multiple novel neuronal tactile coding features such as heterogeneous and complementary spatiotemporal input selectivity, even between neighboring neurons. Such neuronal heterogeneity and complementariness can potentially support a very high decoding capacity in a limited population of neurons. Our results also indicate a potential neuroprosthetic approach to communicate with the brain at a very high resolution and provide a potential novel solution for evaluating the degree or state of neurological disease in animal models.

  15. Artificial spatiotemporal touch inputs reveal complementary decoding in neocortical neurons

    PubMed Central

    Oddo, Calogero M.; Mazzoni, Alberto; Spanne, Anton; Enander, Jonas M. D.; Mogensen, Hannes; Bengtsson, Fredrik; Camboni, Domenico; Micera, Silvestro; Jörntell, Henrik

    2017-01-01

    Investigations of the mechanisms of touch perception and decoding have been hampered by difficulties in achieving invariant patterns of skin sensor activation. To obtain reproducible spatiotemporal patterns of activation of sensory afferents, we used an artificial fingertip equipped with an array of neuromorphic sensors. The artificial fingertip was used to transduce real-world haptic stimuli into spatiotemporal patterns of spikes. These spike patterns were delivered to the skin afferents of the second digit of rats via an array of stimulation electrodes. Combined with low-noise intra- and extracellular recordings from neocortical neurons in vivo, this approach provided a previously inaccessible high resolution analysis of the representation of tactile information in the neocortical neuronal circuitry. The results indicate high information content in individual neurons and reveal multiple novel neuronal tactile coding features such as heterogeneous and complementary spatiotemporal input selectivity, even between neighboring neurons. Such neuronal heterogeneity and complementariness can potentially support a very high decoding capacity in a limited population of neurons. Our results also indicate a potential neuroprosthetic approach to communicate with the brain at a very high resolution and provide a potential novel solution for evaluating the degree or state of neurological disease in animal models. PMID:28374841

  16. Decoding Speech With Integrated Hybrid Signals Recorded From the Human Ventral Motor Cortex.

    PubMed

    Ibayashi, Kenji; Kunii, Naoto; Matsuo, Takeshi; Ishishita, Yohei; Shimada, Seijiro; Kawai, Kensuke; Saito, Nobuhito

    2018-01-01

    Restoration of speech communication for locked-in patients by means of brain computer interfaces (BCIs) is currently an important area of active research. Among the neural signals obtained from intracranial recordings, single/multi-unit activity (SUA/MUA), local field potential (LFP), and electrocorticography (ECoG) are good candidates for an input signal for BCIs. However, the question of which signal or which combination of the three signal modalities is best suited for decoding speech production remains unverified. In order to record SUA, LFP, and ECoG simultaneously from a highly localized area of human ventral sensorimotor cortex (vSMC), we fabricated an electrode the size of which was 7 by 13 mm containing sparsely arranged microneedle and conventional macro contacts. We determined which signal modality is the most capable of decoding speech production, and tested if the combination of these signals could improve the decoding accuracy of spoken phonemes. Feature vectors were constructed from spike frequency obtained from SUAs and event-related spectral perturbation derived from ECoG and LFP signals, then input to the decoder. The results showed that the decoding accuracy for five spoken vowels was highest when features from multiple signals were combined and optimized for each subject, and reached 59% when averaged across all six subjects. This result suggests that multi-scale signals convey complementary information for speech articulation. The current study demonstrated that simultaneous recording of multi-scale neuronal activities could raise decoding accuracy even though the recording area is limited to a small portion of cortex, which is advantageous for future implementation of speech-assisting BCIs.

  17. Decoding Speech With Integrated Hybrid Signals Recorded From the Human Ventral Motor Cortex

    PubMed Central

    Ibayashi, Kenji; Kunii, Naoto; Matsuo, Takeshi; Ishishita, Yohei; Shimada, Seijiro; Kawai, Kensuke; Saito, Nobuhito

    2018-01-01

    Restoration of speech communication for locked-in patients by means of brain computer interfaces (BCIs) is currently an important area of active research. Among the neural signals obtained from intracranial recordings, single/multi-unit activity (SUA/MUA), local field potential (LFP), and electrocorticography (ECoG) are good candidates for an input signal for BCIs. However, the question of which signal or which combination of the three signal modalities is best suited for decoding speech production remains unverified. In order to record SUA, LFP, and ECoG simultaneously from a highly localized area of human ventral sensorimotor cortex (vSMC), we fabricated an electrode the size of which was 7 by 13 mm containing sparsely arranged microneedle and conventional macro contacts. We determined which signal modality is the most capable of decoding speech production, and tested if the combination of these signals could improve the decoding accuracy of spoken phonemes. Feature vectors were constructed from spike frequency obtained from SUAs and event-related spectral perturbation derived from ECoG and LFP signals, then input to the decoder. The results showed that the decoding accuracy for five spoken vowels was highest when features from multiple signals were combined and optimized for each subject, and reached 59% when averaged across all six subjects. This result suggests that multi-scale signals convey complementary information for speech articulation. The current study demonstrated that simultaneous recording of multi-scale neuronal activities could raise decoding accuracy even though the recording area is limited to a small portion of cortex, which is advantageous for future implementation of speech-assisting BCIs. PMID:29674950

  18. Comprehensive Morpho-Electrotonic Analysis Shows 2 Distinct Classes of L2 and L3 Pyramidal Neurons in Human Temporal Cortex

    PubMed Central

    Deitcher, Yair; Eyal, Guy; Kanari, Lida; Verhoog, Matthijs B; Atenekeng Kahou, Guy Antoine; Mansvelder, Huibert D; de Kock, Christiaan P J; Segev, Idan

    2017-01-01

    There have been few quantitative characterizations of the morphological, biophysical, and cable properties of neurons in the human neocortex. We employed feature-based statistical methods on a rare data set of 60 3D reconstructed pyramidal neurons from L2 and L3 in the human temporal cortex (HL2/L3 PCs) removed after brain surgery. Of these cells, 25 neurons were also characterized physiologically. Thirty-two morphological features were analyzed (e.g., dendritic surface area, 36 333 ± 18 157 μm²; number of basal trees, 5.55 ± 1.47; dendritic diameter, 0.76 ± 0.28 μm). Eighteen features showed a significant gradual increase with depth from the pia (e.g., dendritic length and soma radius). The other features showed weak or no correlation with depth (e.g., dendritic diameter). The basal dendritic terminals in HL2/L3 PCs are particularly elongated, enabling multiple nonlinear processing units in these dendrites. Unlike the morphological features, the active biophysical features (e.g., spike shapes and rates) and passive/cable features (e.g., somatic input resistance, 47.68 ± 15.26 MΩ, membrane time constant, 12.03 ± 1.79 ms, average dendritic cable length, 0.99 ± 0.24) were depth-independent. A novel descriptor for apical dendritic topology yielded 2 distinct classes, termed hereby as “slim-tufted” and “profuse-tufted” HL2/L3 PCs; the latter class tends to fire at higher rates. Thus, our morpho-electrotonic analysis shows 2 distinct classes of HL2/L3 PCs. PMID:28968789

  19. The essence of yeast quiescence.

    PubMed

    De Virgilio, Claudio

    2012-03-01

    Like all microorganisms, yeast cells spend most of their natural lifetime in a reversible, quiescent state that is primarily induced by limitation for essential nutrients. Substantial progress has been made in defining the features of quiescent cells and the nutrient-signaling pathways that shape these features. A view that emerges from the wealth of new data is that yeast cells dynamically configure the quiescent state in response to nutritional challenges by using a set of key nutrient-signaling pathways, which (1) regulate pathway-specific effectors, (2) converge on a few regulatory nodes that bundle multiple inputs to communicate unified, graded responses, and (3) mutually modulate their competences to transmit signals. Here, I present an overview of our current understanding of the architecture of these pathways, focusing on how the corresponding core signaling protein kinases (i.e. PKA, TORC1, Snf1, and Pho85) are wired to ensure an adequate response to nutrient starvation, which enables cells to tide over decades, if not centuries, of famine. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  20. Optical Associative Processors For Visual Perception

    NASA Astrophysics Data System (ADS)

    Casasent, David; Telfer, Brian

    1988-05-01

    We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required, and thus heteroassociative memories are necessary (rather than the autoassociative memories that have been given most attention). We analyze the performance of both associative processors and note that there is a considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as: distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor and with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (which is achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space and symbolic data, as well as adaptive associative processors.

  1. A Smartphone-Based Driver Safety Monitoring System Using Data Fusion

    PubMed Central

    Lee, Boon-Giin; Chung, Wan-Young

    2012-01-01

    This paper proposes a method for monitoring driver safety levels using a data fusion approach based on several discrete data types: eye features, bio-signal variation, in-vehicle temperature, and vehicle speed. The driver safety monitoring system was developed in practice in the form of an application for an Android-based smartphone device, where measuring safety-related data requires no extra monetary expenditure or equipment. Moreover, the system provides high resolution and flexibility. The safety monitoring process involves the fusion of attributes gathered from different sensors, including video, electrocardiography, photoplethysmography, temperature, and a three-axis accelerometer, that are assigned as input variables to an inference analysis framework. A Fuzzy Bayesian framework is designed to indicate the driver’s capability level and is updated continuously in real-time. The sensory data are transmitted via Bluetooth communication to the smartphone device. A fake incoming call warning service alerts the driver if his or her safety level is suspiciously compromised. Realistic testing of the system demonstrates the practical benefits of multiple features and their fusion in providing a more authentic and effective driver safety monitoring. PMID:23247416

  2. A New Sparse Representation Framework for Reconstruction of an Isotropic High Spatial Resolution MR Volume From Orthogonal Anisotropic Resolution Scans.

    PubMed

    Jia, Yuanyuan; Gholipour, Ali; He, Zhongshi; Warfield, Simon K

    2017-05-01

    In magnetic resonance (MR), hardware limitations, scan time constraints, and patient movement often result in the acquisition of anisotropic 3-D MR images with limited spatial resolution in the out-of-plane views. Our goal is to construct an isotropic high-resolution (HR) 3-D MR image through upsampling and fusion of orthogonal anisotropic input scans. We propose a multiframe super-resolution (SR) reconstruction technique based on sparse representation of MR images. Our proposed algorithm exploits the correspondence between the HR slices and the low-resolution (LR) sections of the orthogonal input scans as well as the self-similarity of each input scan to train pairs of overcomplete dictionaries that are used in a sparse-land local model to upsample the input scans. The upsampled images are then combined using wavelet fusion and error backprojection to reconstruct an image. Features are learned from the data and no extra training set is needed. Qualitative and quantitative analyses were conducted to evaluate the proposed algorithm using simulated and clinical MR scans. Experimental results show that the proposed algorithm achieves promising results in terms of peak signal-to-noise ratio, structural similarity image index, intensity profiles, and visualization of small structures obscured in the LR imaging process due to partial volume effects. Our novel SR algorithm outperforms the nonlocal means (NLM) method using self-similarity, NLM method using self-similarity and image prior, self-training dictionary learning-based SR method, averaging of upsampled scans, and the wavelet fusion method. Our SR algorithm can reduce through-plane partial volume artifact by combining multiple orthogonal MR scans, and thus can potentially improve medical image analysis, research, and clinical diagnosis.

  3. A phantom design and assessment of lesion detectability in PET imaging

    NASA Astrophysics Data System (ADS)

    Wollenweber, Scott D.; Kinahan, Paul E.; Alessio, Adam M.

    2017-03-01

    The early detection of abnormal regions with increased tracer uptake in positron emission tomography (PET) is a key driver of imaging system design and optimization as well as choice of imaging protocols. Detectability, however, remains difficult to assess due to the need for realistic objects mimicking the clinical scene, multiple lesion-present and lesion-absent images and multiple observers. Fillable phantoms, with tradeoffs between complexity and utility, provide a means to quantitatively test and compare imaging systems under truth-known conditions. These phantoms, however, often focus on quantification rather than detectability. This work presents extensions to a novel phantom design and analysis techniques to evaluate detectability in the context of realistic, non-piecewise constant backgrounds. The design consists of a phantom filled with small solid plastic balls and a radionuclide solution to mimic heterogeneous background uptake. A set of 3D-printed regular dodecahedral `features' were included at user-defined locations within the phantom to create `holes' within the matrix of chaotically-packed balls. These features fill at approximately 3:1 contrast to the lumpy background. A series of signal-known-present (SP) and signal-known-absent (SA) sub-images were generated and used as input for observer studies. This design was imaged in a head-like 20 cm diameter, 20 cm long cylinder and in a body-like 36 cm wide by 21 cm tall by 40 cm long tank. A series of model observer detectability indices were compared across scan conditions (count levels, number of scan replicates), PET image reconstruction methods (with/without TOF and PSF) and between PET/CT scanner system designs using the same phantom imaged on multiple systems. The detectability index was further compared to the noise-equivalent count (NEC) level to characterize the relationship between NEC and observer SNR.

  4. Actively learning human gaze shifting paths for semantics-aware photo cropping.

    PubMed

    Zhang, Luming; Gao, Yue; Ji, Rongrong; Xia, Yingjie; Dai, Qionghai; Li, Xuelong

    2014-05-01

    Photo cropping is a widely used tool in the printing industry, photography, and cinematography. Conventional cropping models suffer from three challenges. First, they de-emphasize semantic content, which is often far more important than low-level features in photo aesthetics. Second, existing models impose no sequential ordering, whereas humans look at semantically important regions sequentially when viewing a photo. Third, it is difficult to leverage input from multiple users; experience from multiple users is particularly critical in cropping because photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto a semantic space constructed from the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo; the selection process can be interpreted as a path that simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative of photo aesthetics than conventional saliency maps, and 2) the cropped photos produced by our approach outperform those of its competitors in both qualitative and quantitative comparisons.

  5. Using Natural Language to Enable Mission Managers to Control Multiple Heterogeneous UAVs

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Puig-Navarro, Javier; Mehdi, S. Bilal; Mcquarry, A. Kyle

    2016-01-01

    The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for commercial activities. This research is developing methods beyond classical control-stick pilot inputs, to allow operators to manage complex missions without in-depth vehicle expertise. These missions may entail several heterogeneous UAVs flying coordinated patterns or flying multiple trajectories deconflicted in time or space to predefined locations. This paper describes the functionality and preliminary usability measures of an interface that allows an operator to define a mission using speech inputs. With a defined and simple vocabulary, operators can input the vast majority of mission parameters using simple, intuitive voice commands. Although the operator interface is simple, it is based upon autonomous algorithms that allow the mission to proceed with minimal input from the operator. This paper also describes these underlying algorithms that allow an operator to manage several UAVs.

  6. PLEXOS Input Data Generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.

  7. Tailored Excitation for Multivariable Stability-Margin Measurement Applied to the X-31A Nonlinear Simulation

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.; Burken, John J.

    1997-01-01

    Safety and productivity of the initial flight test phase of a new vehicle have been enhanced by developing the ability to measure the stability margins of the combined control system and vehicle in flight. One shortcoming of performing this analysis is the long duration of the excitation signal required to provide results over a wide frequency range. For flight regimes such as high angle of attack or hypersonic flight, the ability to maintain flight condition for this time duration is difficult. Significantly reducing the required duration of the excitation input is possible by tailoring the input to excite only the frequency range where the lowest stability margin is expected. For a multiple-input/multiple-output system, the inputs can be simultaneously applied to the control effectors by creating each excitation input with a unique set of frequency components. Chirp-Z transformation algorithms can be used to match the analysis of the results to the specific frequencies used in the excitation input. This report discusses the application of a tailored excitation input to a high-fidelity X-31A linear model and nonlinear simulation. Depending on the frequency range, the results indicate the potential to significantly reduce the time required for stability measurement.
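    The idea of assigning each control effector its own disjoint set of frequency components can be sketched as a simple multisine generator; the frequencies, sample rate, and function names below are illustrative assumptions, not the X-31A test values.

```python
import numpy as np

def tailored_multisines(freq_sets_hz, fs=100.0, duration=10.0, amp=1.0):
    """One excitation signal per control effector; each signal uses a disjoint
    set of frequency components so the responses can be separated in analysis
    (e.g. with a chirp-z transform evaluated at exactly those frequencies)."""
    t = np.arange(0.0, duration, 1.0 / fs)
    rng = np.random.default_rng(0)
    signals = []
    for freqs in freq_sets_hz:
        phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))   # randomized phases
        sig = sum(amp * np.sin(2.0 * np.pi * f * t + p) for f, p in zip(freqs, phases))
        signals.append(sig)
    return t, np.array(signals)

# Two effectors excited simultaneously over the band where the lowest margin is expected.
t, u = tailored_multisines([[0.5, 1.1, 1.7], [0.8, 1.4, 2.0]])
```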

  8. Theoretical lower bounds for parallel pipelined shift-and-add constant multiplications with n-input arithmetic operators

    NASA Astrophysics Data System (ADS)

    Cruz Jiménez, Miriam Guadalupe; Meyer Baese, Uwe; Jovanovic Dolecek, Gordana

    2017-12-01

    New theoretical lower bounds for the number of operators needed in fixed-point constant multiplication blocks are presented. The multipliers are constructed with the shift-and-add approach, where every arithmetic operation is pipelined, and with the generalization that n-input pipelined additions/subtractions are allowed, along with pure pipelining registers. These lower bounds, tighter than the state-of-the-art theoretical limits, are particularly useful in early design stages for a quick assessment of the hardware utilization of low-cost constant multiplication blocks implemented in the newest families of field programmable gate array (FPGA) integrated circuits.

  9. A data-driven multi-model methodology with deep feature selection for short-term wind forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias

    With the growing wind penetration into the power system worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated for 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves forecasting accuracy by up to 30%.
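    A minimal sketch of a two-layer (stacked) ensemble of this general kind is shown below, using scikit-learn with synthetic placeholder data; the particular first-layer algorithms, blending model, and feature-selection step of the paper are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Placeholder features (e.g. lagged wind speeds, meteorological variables) and target.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.2, size=600)

first_layer = [("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
               ("gbm", GradientBoostingRegressor(random_state=0)),
               ("svr", SVR(C=10.0))]
model = StackingRegressor(estimators=first_layer, final_estimator=Ridge())  # blending layer

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```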

  10. Machine Learning Classification of Heterogeneous Fields to Estimate Physical Responses

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Akhriev, A.; Alzate, C.; Zhuk, S.

    2017-12-01

    The promise of machine learning to enhance physics-based simulation is examined here using the transient pressure response to a pumping well in a heterogeneous aquifer. 10,000 random fields of log10 hydraulic conductivity (K) are created and conditioned on a single K measurement at the pumping well. Each K-field is used as input to a forward simulation of drawdown (pressure decline). The differential equations governing groundwater flow to the well serve as a non-linear transform of the input K-field to an output drawdown field. The results are stored and the data set is split into training and testing sets for classification. A Euclidean distance measure between any two fields is calculated and the resulting distances between all pairs of fields define a similarity matrix. Similarity matrices are calculated for both input K-fields and the resulting drawdown fields at the end of the simulation. The similarity matrices are then used as input to spectral clustering to determine groupings of similar input and output fields. Additionally, the similarity matrix is used as input to multi-dimensional scaling to visualize the clustering of fields in lower dimensional spaces. We examine the ability to cluster both input K-fields and output drawdown fields separately with the goal of identifying K-fields that create similar drawdowns and, conversely, given a set of simulated drawdown fields, identify meaningful clusters of input K-fields. Feature extraction based on statistical parametric mapping provides insight into what features of the fields drive the classification results. The final goal is to successfully classify input K-fields into the correct output class, and also, given an output drawdown field, be able to infer the correct class of input field that created it.
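    A minimal sketch of the clustering and visualization part of this workflow is given below, assuming flattened fields stored as rows of an array (synthetic data here) and using scikit-learn's spectral clustering and multi-dimensional scaling on precomputed similarity/distance matrices.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import SpectralClustering
from sklearn.manifold import MDS

# Placeholder "fields": each row is a flattened log10-K or drawdown field.
rng = np.random.default_rng(1)
fields = rng.normal(size=(150, 20 * 20))

dist = squareform(pdist(fields))                    # pairwise Euclidean distances
affinity = np.exp(-(dist / dist.mean()) ** 2)       # similarity matrix

labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)    # 2-D embedding for visualizing clusters
print(labels[:10], coords.shape)
```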

  11. Parameterization of typhoon-induced ocean cooling using temperature equation and machine learning algorithms: an example of typhoon Soulik (2013)

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Jiang, Guo-Qing; Liu, Xin

    2017-09-01

    This study proposed three algorithms that can potentially be used to provide sea surface temperature (SST) conditions for typhoon prediction models. Different from traditional data assimilation approaches, which provide prescribed initial/boundary conditions, the proposed algorithms aim to resolve a flow-dependent SST feedback between growing typhoons and the ocean at future times. Two of the algorithms are based on linear temperature equations (TE-based), and the other is based on an innovative machine learning technique (ML-based). The algorithms were implemented in a Weather Research and Forecasting model for the simulation of a typhoon to assess their effectiveness, and the results show significant improvement in simulated storm intensities when the ocean cooling feedback is included. The TE-based algorithm I considers wind-induced ocean vertical mixing and upwelling processes only, and thus obtains a synoptic and relatively smooth sea surface temperature cooling. The TE-based algorithm II incorporates not only typhoon winds but also ocean information, and thus resolves more cooling features. The ML-based algorithm is based on a neural network, consisting of multiple layers of input variables and neurons, and produces the best estimate of the cooling structure in terms of its amplitude and position. Sensitivity analysis indicated that typhoon-induced ocean cooling is a nonlinear process involving interactions of multiple atmospheric and oceanic variables. Therefore, with an appropriate selection of input variables and neuron sizes, the ML-based algorithm appears to be more efficient in predicting the typhoon-induced ocean cooling and typhoon intensity than algorithms based on linear regression methods.

  12. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    PubMed

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer the emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone, using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate (HR), respiration, galvanic skin response, and activity in the corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see whether a linear combination of the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than the linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of the inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
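    The linear-versus-nonlinear comparison can be sketched as follows with scikit-learn; the feature matrix, ratings, and train/test split below are synthetic stand-ins for the study's physiological features and listener ratings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Synthetic stand-ins: rows are listening trials, columns are physiological features.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 10))
valence = np.tanh(X[:, 0] * X[:, 1]) + 0.1 * rng.normal(size=240)   # nonlinear target

train, test = slice(0, 180), slice(180, None)
lin = LinearRegression().fit(X[train], valence[train])
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X[train], valence[train])
print("linear R^2:", lin.score(X[test], valence[test]))
print("MLP R^2:", mlp.score(X[test], valence[test]))
```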

  13. Comparative evaluation of support vector machine classification for computer aided detection of breast masses in mammography

    NASA Astrophysics Data System (ADS)

    Lesniak, J. M.; Hupse, R.; Blanc, R.; Karssemeijer, N.; Székely, G.

    2012-08-01

    False positive (FP) marks represent an obstacle for effective use of computer-aided detection (CADe) of breast masses in mammography. Typically, the problem can be approached either by developing more discriminative features or by employing different classifier designs. In this paper, the use of support vector machine (SVM) classification for FP reduction in CADe is investigated, presenting a systematic quantitative evaluation against neural networks, k-nearest-neighbor classification, linear discriminant analysis, and random forests. A large database of 2516 film mammography examinations and 73 input features was used to train the classifiers and evaluate their performance on correctly diagnosed exams as well as false negatives. Further, classifier robustness was investigated using varying training data and feature sets as input. The evaluation was based on the mean exam sensitivity in the range of 0.05-1 FPs on normals on the free-response receiver operating characteristic (FROC) curve, incorporated into a tenfold cross-validation framework. It was found that SVM classification using a Gaussian kernel offered significantly increased detection performance (P = 0.0002) compared to the reference methods. When varying training data and input features, SVMs showed improved exploitation of large feature sets. It is concluded that with the SVM-based CADe a significant reduction of FPs is possible, outperforming other state-of-the-art approaches for breast mass CADe.
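    A hedged sketch of this kind of classifier comparison is shown below, using ordinary cross-validated ROC AUC as a stand-in for the FROC-based measure and synthetic placeholder features rather than the mammography database.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder CADe candidate features (73-dimensional) and true/false-positive labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 73))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=1000) > 0).astype(int)

classifiers = {
    "SVM (Gaussian kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale")),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(15)),
    "LDA": LinearDiscriminantAnalysis(),
    "Random forest": RandomForestClassifier(200, random_state=0),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```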

  14. Robust Real-Time Music Transcription with a Compositional Hierarchical Model.

    PubMed

    Pesek, Matevž; Leonardis, Aleš; Marolt, Matija

    2017-01-01

    The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of input data, transparency, which enables insights into the learned representation, as well as robustness and speed which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to hierarchical structures in music. The parts in lower layers correspond to low-level concepts (e.g. tone partials), while the parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from previous layers based on statistical co-occurrences as the driving force of the learning process. In the paper, we present the model's structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model's performance for the multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model towards other music information retrieval tasks.

  15. Exploring the physical layer frontiers of cellular uplink: The Vienna LTE-A Uplink Simulator.

    PubMed

    Zöchmann, Erich; Schwarz, Stefan; Pratschner, Stefan; Nagel, Lukas; Lerch, Martin; Rupp, Markus

    Communication systems in practice are subject to many technical/technological constraints and restrictions. Multiple input, multiple output (MIMO) processing in current wireless communications, as an example, mostly employs codebook-based pre-coding to save computational complexity at the transmitters and receivers. In such cases, closed form expressions for capacity or bit-error probability are often unattainable; effects of realistic signal processing algorithms on the performance of practical communication systems rather have to be studied in simulation environments. The Vienna LTE-A Uplink Simulator is a 3GPP LTE-A standard compliant MATLAB-based link level simulator that is publicly available under an academic use license, facilitating reproducible evaluations of signal processing algorithms and transceiver designs in wireless communications. This paper reviews research results that have been obtained by means of the Vienna LTE-A Uplink Simulator, highlights the effects of single-carrier frequency-division multiplexing (as the distinguishing feature to LTE-A downlink), extends known link adaptation concepts to uplink transmission, shows the implications of the uplink pilot pattern for gathering channel state information at the receiver and completes with possible future research directions.

  16. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

    Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H₂¹⁵O or C¹⁵O₂, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H₂¹⁵O PET image as a completely non-invasive approach. Our technique uses a formula that expresses the input in terms of a tissue curve with a rate-constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences between the inputs reproduced from the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 29) and was compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs reproduced the measured ones well. The difference between the CBF values calculated with the two methods was small, around 8% or less, and the calculated CBF values showed a tight correlation (r = 0.97). The simulation showed that errors associated with the assumed parameters were below 10%, and that the optimal number of tissue curves was around 500. Our results demonstrate that an IDIF can be reconstructed directly from tissue curves obtained through H₂¹⁵O PET imaging. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.

  17. Attention model of binocular rivalry

    PubMed Central

    Rankin, James; Rinzel, John; Carrasco, Marisa; Heeger, David J.

    2017-01-01

    When the corresponding retinal locations in the two eyes are presented with incompatible images, a stable percept gives way to perceptual alternations in which the two images compete for perceptual dominance. As perceptual experience evolves dynamically under constant external inputs, binocular rivalry has been used for studying intrinsic cortical computations and for understanding how the brain regulates competing inputs. Converging behavioral and EEG results have shown that binocular rivalry and attention are intertwined: binocular rivalry ceases when attention is diverted away from the rivalry stimuli. In addition, the competing image in one eye suppresses the target in the other eye through a pattern of gain changes similar to those induced by attention. These results require a revision of the current computational theories of binocular rivalry, in which the role of attention is ignored. Here, we provide a computational model of binocular rivalry. In the model, competition between two images in rivalry is driven by both attentional modulation and mutual inhibition, which have distinct selectivity (feature vs. eye of origin) and dynamics (relatively slow vs. relatively fast). The proposed model explains a wide range of phenomena reported in rivalry, including the three hallmarks: (i) binocular rivalry requires attention; (ii) various perceptual states emerge when the two images are swapped between the eyes multiple times per second; (iii) the dominance duration as a function of input strength follows Levelt’s propositions. With a bifurcation analysis, we identified the parameter space in which the model’s behavior was consistent with experimental results. PMID:28696323

  18. Classification of footwear outsole patterns using Fourier transform and local interest points.

    PubMed

    Richetelli, Nicole; Lee, Mackenzie C; Lasky, Carleen A; Gump, Madison E; Speir, Jacqueline A

    2017-06-01

    Successful classification of questioned footwear has tremendous evidentiary value; the result can minimize the potential suspect pool and link a suspect to a victim, a crime scene, or even multiple crime scenes to each other. With this in mind, several different automated and semi-automated classification models have been applied to the forensic footwear recognition problem, with superior performance commonly associated with two different approaches: correlation of image power (magnitude) or phase, and the use of local interest points transformed using the Scale Invariant Feature Transform (SIFT) and compared using Random Sample Consensus (RANSAC). Despite the distinction associated with each of these methods, all three have not been cross-compared using a single dataset, of limited quality (i.e., characteristic of crime scene-like imagery), and created using a wide combination of image inputs. To address this question, the research presented here examines the classification performance of the Fourier-Mellin transform (FMT), phase-only correlation (POC), and local interest points (transformed using SIFT and compared using RANSAC), as a function of inputs that include mixed media (blood and dust), transfer mechanisms (gel lifters), enhancement techniques (digital and chemical) and variations in print substrate (ceramic tiles, vinyl tiles and paper). Results indicate that POC outperforms both FMT and SIFT+RANSAC, regardless of image input (type, quality and totality), and that the difference in stochastic dominance detected for POC is significant across all image comparison scenarios evaluated in this study. Copyright © 2017 Elsevier B.V. All rights reserved.
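    Phase-only correlation itself is straightforward to sketch; the function below (illustrative names, with no pre-registration or windowing) correlates only the spectral phase of two equal-size grayscale images and uses the peak of the resulting surface as a match score.

```python
import numpy as np

def phase_only_correlation(img_a, img_b):
    """Phase-only correlation surface between two equal-size grayscale images."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12                  # keep only the phase information
    return np.fft.fftshift(np.real(np.fft.ifft2(cross)))

# The height of the correlation peak serves as the similarity score between prints.
score = phase_only_correlation(np.random.rand(128, 128), np.random.rand(128, 128)).max()
print(score)
```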

  19. Automated Feature Identification and Classification Using Automated Feature Weighted Self Organizing Map (FWSOM)

    NASA Astrophysics Data System (ADS)

    Starkey, Andrew; Usman Ahmad, Aliyu; Hamdoun, Hassan

    2017-10-01

    This paper investigates the application of a novel classification method called the Feature Weighted Self Organizing Map (FWSOM), which analyses the topology information of a converged standard Self Organizing Map (SOM) to automatically guide the selection of important inputs during training, for improved classification of data with redundant inputs. It is examined against two traditional approaches, namely neural networks and Support Vector Machines (SVM), for the classification of EEG data as presented in previous work. In particular, the novel method looks to identify automatically the features that are important for classification, and in this way the important features can be used to improve the diagnostic ability of any of the above methods. The paper presents results showing that the automated procedure successfully identified the important features in the dataset, and that this improves the classification results for all methods apart from linear discriminant methods, which cannot separate the underlying nonlinear relationship in the data. In addition to achieving higher classification accuracy, the FWSOM gives insights into which features are important in the classification of each class (left- and right-hand movements), and these are corroborated by previously published work in this area.

  20. The ART of representation: Memory reduction and noise tolerance in a neural network vision system

    NASA Astrophysics Data System (ADS)

    Langley, Christopher S.

    The Feature Cerebellar Model Arithmetic Computer (FCMAC) is a multiple-input-single-output neural network that can provide three-degree-of-freedom (3-DOF) pose estimation for a robotic vision system. The FCMAC provides sufficient accuracy to enable a manipulator to grasp an object from an arbitrary pose within its workspace. The network learns an appearance-based representation of an object by storing coarsely quantized feature patterns. As all unique patterns are encoded, the network size grows uncontrollably. A new architecture is introduced herein, which combines the FCMAC with an Adaptive Resonance Theory (ART) network. The ART module categorizes patterns observed during training into a set of prototypes that are used to build the FCMAC. As a result, the network no longer grows without bound, but constrains itself to a user-specified size. Pose estimates remain accurate since the ART layer tends to discard the least relevant information first. The smaller network performs recall faster, and in some cases is better for generalization, resulting in a reduction of error at recall time. The ART-Under-Constraint (ART-C) algorithm is extended to include initial filling with randomly selected patterns (referred to as ART-F). In experiments using a real-world data set, the new network performed equally well using less than one tenth the number of coarse patterns as a regular FCMAC. The FCMAC is also extended to include real-valued input activations. As a result, the network can be tuned to reject a variety of types of noise in the image feature detection. A quantitative analysis of noise tolerance was performed using four synthetic noise algorithms, and a qualitative investigation was made using noisy real-world image data. In validation experiments, the FCMAC system outperformed Radial Basis Function (RBF) networks for the 3-DOF problem, and had accuracy comparable to that of Principal Component Analysis (PCA) and superior to that of Shape Context Matching (SCM), both of which estimate orientation only.

  1. Characteristics of Innovators Adopting a National Personal Health Record in Portugal: Cross-Sectional Study

    PubMed Central

    Rodolfo, Inês; Pereira, Ana Marta; de Sá, Armando Brito

    2017-01-01

    Background Personal health records (PHRs) are increasingly being deployed worldwide, but their rates of adoption by patients vary widely across countries and health systems. Five main categories of adopters are usually considered when evaluating the diffusion of innovations: innovators, early adopters, early majority, late majority, and laggards. Objective We aimed to evaluate adoption of the Portuguese PHR 3 months after its release, as well as characterize the individuals who registered and used the system during that period (the innovators). Methods We conducted a cross-sectional study. Users and nonusers were defined based on their input, or not, of health-related information into the PHR. Users of the PHR were compared with nonusers regarding demographic and clinical variables. Users were further characterized according to their intensity of information input: single input (one single piece of health-related information recorded) and multiple inputs. Multivariate logistic regression was used to model the probability of being in the multiple inputs group. ArcGis (ESRI, Redlands, CA, USA) was used to create maps of the proportion of PHR registrations by region and district. Results The number of registered individuals was 109,619 (66,408/109,619, 60.58% women; mean age: 44.7 years, standard deviation [SD] 18.1 years). The highest proportion of registrations was observed for those aged between 30 and 39 years (25,810/109,619, 23.55%). Furthermore, 16.88% (18,504/109,619) of registered individuals were considered users and 83.12% (91,115/109,619) nonusers. Among PHR users, 32.18% (5955/18,504) engaged in single input and 67.82% (12,549/18,504) in multiple inputs. Younger individuals and male users had higher odds of engaging in multiple inputs (odds ratio for male individuals 1.32, CI 1.19-1.48). Geographic analysis revealed higher proportions of PHR adoption in urban centers when compared with rural noncoastal districts. Conclusions Approximately 1% of the country’s population registered during the first 3 months of the Portuguese PHR. Registered individuals were more frequently female aged between 30 and 39 years. There is evidence of a geographic gap in the adoption of the Portuguese PHR, with higher proportions of adopters in urban centers than in rural noncoastal districts. PMID:29021125
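    The multivariate logistic regression step can be sketched as follows; the data below are synthetic stand-ins (age and sex predicting membership in the multiple-inputs group), used only to show how odds ratios such as the one reported for male users are obtained.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in: age (years) and sex (1 = male) predicting multiple-input use.
rng = np.random.default_rng(0)
age = rng.uniform(18, 80, 2000)
male = rng.integers(0, 2, 2000)
logit = 1.0 - 0.02 * (age - 45) + 0.28 * male
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, male]))
fit = sm.Logit(y, X).fit(disp=0)
print(np.exp(fit.params))          # odds ratios for intercept, age, and male sex
```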

  2. Numerical Function Generators Using LUT Cascades

    DTIC Science & Technology

    2007-06-01

    either algebraically (for example, sin(x)) or as a table of input/output values. The user defines the numerical function by using the syntax of Scilab ... defined function in Scilab or specify it directly. Note that, by changing the parser of our system, any format can be used for the design entry. First ... Methods for Multiple-Valued Input Address Generators," Proc. 36th IEEE Int'l Symp. Multiple-Valued Logic (ISMVL '06), May 2006. [29] Scilab 3.0, INRIA-ENPC

  3. Modeling, Simulation and Performance Analysis of Multiple-Input Multiple-Output (MIMO) Systems with Multicarrier Time Delay Diversity Modulation

    DTIC Science & Technology

    2005-09-01

    Report documentation (SF-298) excerpt: author Muhammad Shahid; Naval Postgraduate School, Monterey, CA 93943-5000. ... streams which are assigned to the K subcarriers [1]. The symbol duration of the input serial data is T's, with serial data rate f's.

  4. Evaluation of Spectral and Prosodic Features of Speech Affected by Orthodontic Appliances Using the Gmm Classifier

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Ďuračkoá, Daniela

    2014-01-01

    The paper describes our experiment with using the Gaussian mixture models (GMM) for classification of speech uttered by a person wearing orthodontic appliances. For the GMM classification, the input feature vectors comprise the basic and the complementary spectral properties as well as the supra-segmental parameters. Dependence of classification correctness on the number of the parameters in the input feature vector and on the computation complexity is also evaluated. In addition, an influence of the initial setting of the parameters for GMM training process was analyzed. Obtained recognition results are compared visually in the form of graphs as well as numerically in the form of tables and confusion matrices for tested sentences uttered using three configurations of orthodontic appliances.
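    A minimal sketch of GMM-based classification of feature vectors is shown below, assuming one Gaussian mixture model per class and classification by the highest total log-likelihood; the class names, feature dimensions, and data are invented placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmms(features_by_class, n_components=8):
    """Fit one diagonal-covariance GMM per class on its training feature vectors."""
    return {label: GaussianMixture(n_components, covariance_type="diag",
                                   random_state=0).fit(X)
            for label, X in features_by_class.items()}

def classify(gmms, X):
    """Assign the class whose GMM gives the highest total log-likelihood for X."""
    scores = {label: gmm.score_samples(X).sum() for label, gmm in gmms.items()}
    return max(scores, key=scores.get)

# Invented placeholder data: 16-dimensional spectral/prosodic feature vectors.
rng = np.random.default_rng(0)
gmms = train_gmms({"no_appliance": rng.normal(0.0, 1.0, (500, 16)),
                   "appliance": rng.normal(0.4, 1.0, (500, 16))})
print(classify(gmms, rng.normal(0.4, 1.0, (40, 16))))
```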

  5. A novel comparator featured with input data characteristic

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaobo; Ye, Desheng; Xu, Xiangmin; Zheng, Shuai

    2016-03-01

    Two types of low-power asynchronous comparators that exploit the statistical characteristics of the input data are proposed in this article. The asynchronous ripple comparator stops comparing at the first unequal bit but still propagates the result down to the least significant bit. The pre-stop asynchronous comparator can stop comparing completely and obtain the result immediately. The proposed and contrastive comparators were implemented in the SMIC 0.18 μm process with different bit widths. Simulation shows that the proposed pre-stop asynchronous comparator features the lowest power consumption, the shortest average propagation delay, and the highest area efficiency among the comparators. A data path of a low-density parity-check decoder using the proposed pre-stop asynchronous comparators is the most power efficient compared with data paths using synthesized, clock-gated, and bitwise competition logic comparators.

  6. Symbolic PathFinder: Symbolic Execution of Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Rungta, Neha

    2010-01-01

    Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.

  7. Petri net modelling of buffers in automated manufacturing systems.

    PubMed

    Zhou, M; Dicesare, F

    1996-01-01

    This paper presents Petri net models of buffers and a methodology by which buffers can be included in a system without introducing deadlocks or overflows. The context is automated manufacturing. The buffers and models are classified as random order or order preserved (first-in-first-out or last-in-first-out), single-input-single-output or multiple-input-multiple-output, part type and/or space distinguishable or indistinguishable, and bounded or safe. Theoretical results for the development of Petri net models which include buffer modules are developed. This theory provides the conditions under which the system properties of boundedness, liveness, and reversibility are preserved. The results are illustrated through two manufacturing system examples: a multiple machine and multiple buffer production line and an automatic storage and retrieval system in the context of flexible manufacturing.

  8. An accelerated training method for back propagation networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O. (Inventor)

    1993-01-01

    The principal objective is to provide a training procedure for a feed-forward, back-propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. The novelty lies in the method of extracting from the input data a set of features which can represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
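    The preprocessing idea, projecting the input data onto orthogonal singular vectors so that the projections have maximal spread, can be sketched with a plain SVD; the function and variable names below are illustrative, not from the patent.

```python
import numpy as np

def svd_preprocess(X):
    """Project (centered) inputs onto the right singular vectors of the input
    matrix; the projections are decorrelated and ordered by decreasing spread,
    giving a compact re-representation of the data before network training."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt.T, Vt          # projected data and the projection basis

Z, basis = svd_preprocess(np.random.rand(100, 10))
print(Z.std(axis=0))              # standard deviations along the singular directions
```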

  9. Activity Recognition in Egocentric video using SVM, kNN and Combined SVMkNN Classifiers

    NASA Astrophysics Data System (ADS)

    Sanal Kumar, K. P.; Bhavani, R., Dr.

    2017-08-01

    Egocentric vision is a unique, human-centric perspective in computer vision. Recognizing egocentric actions is a challenging task that helps in assisting elderly people, disabled patients, and so on. In this work, life-logging activity videos are taken as input and are organized into two category levels: a top level and a second level. Recognition is performed using features such as Histogram of Oriented Gradients (HOG), Motion Boundary Histogram (MBH), and Trajectory. These features are fused into a single descriptor. The fused features are then reduced using Principal Component Analysis (PCA). The reduced features are provided as input to classifiers: Support Vector Machine (SVM), k-nearest neighbor (kNN), and a combined SVM and kNN classifier (combined SVMkNN). These classifiers are evaluated, and the combined SVMkNN provides better results than other classifiers in the literature.

  10. GeneSilico protein structure prediction meta-server.

    PubMed

    Kurowski, Michal A; Bujnicki, Janusz M

    2003-07-01

    Rigorous assessments of protein structure prediction have demonstrated that fold recognition methods can identify remote similarities between proteins when standard sequence search methods fail. It has been shown that the accuracy of predictions is improved when refined multiple sequence alignments are used instead of single sequences and if different methods are combined to generate a consensus model. There are several meta-servers available that integrate protein structure predictions performed by various methods, but they do not allow for submission of user-defined multiple sequence alignments and they seldom offer confidentiality of the results. We developed a novel WWW gateway for protein structure prediction, which combines the useful features of other meta-servers available, but with much greater flexibility of the input. The user may submit an amino acid sequence or a multiple sequence alignment to a set of methods for primary, secondary and tertiary structure prediction. Fold-recognition results (target-template alignments) are converted into full-atom 3D models and the quality of these models is uniformly assessed. A consensus between different FR methods is also inferred. The results are conveniently presented on-line on a single web page over a secure, password-protected connection. The GeneSilico protein structure prediction meta-server is freely available for academic users at http://genesilico.pl/meta.

  11. GeneSilico protein structure prediction meta-server

    PubMed Central

    Kurowski, Michal A.; Bujnicki, Janusz M.

    2003-01-01

    Rigorous assessments of protein structure prediction have demonstrated that fold recognition methods can identify remote similarities between proteins when standard sequence search methods fail. It has been shown that the accuracy of predictions is improved when refined multiple sequence alignments are used instead of single sequences and if different methods are combined to generate a consensus model. There are several meta-servers available that integrate protein structure predictions performed by various methods, but they do not allow for submission of user-defined multiple sequence alignments and they seldom offer confidentiality of the results. We developed a novel WWW gateway for protein structure prediction, which combines the useful features of other meta-servers available, but with much greater flexibility of the input. The user may submit an amino acid sequence or a multiple sequence alignment to a set of methods for primary, secondary and tertiary structure prediction. Fold-recognition results (target-template alignments) are converted into full-atom 3D models and the quality of these models is uniformly assessed. A consensus between different FR methods is also inferred. The results are conveniently presented on-line on a single web page over a secure, password-protected connection. The GeneSilico protein structure prediction meta-server is freely available for academic users at http://genesilico.pl/meta. PMID:12824313

  12. Noise-immune multisensor transduction of speech

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vishu R.; Henry, Claudia M.; Derr, Alan G.; Roucos, Salim; Schwartz, Richard M.

    1986-08-01

    Two types of configurations of multiple sensors were developed, tested and evaluated in speech recognition application for robust performance in high levels of acoustic background noise: One type combines the individual sensor signals to provide a single speech signal input, and the other provides several parallel inputs. For single-input systems, several configurations of multiple sensors were developed and tested. Results from formal speech intelligibility and quality tests in simulated fighter aircraft cockpit noise show that each of the two-sensor configurations tested outperforms the constituent individual sensors in high noise. Also presented are results comparing the performance of two-sensor configurations and individual sensors in speaker-dependent, isolated-word speech recognition tests performed using a commercial recognizer (Verbex 4000) in simulated fighter aircraft cockpit noise.

  13. Using input command pre-shaping to suppress multiple mode vibration

    NASA Technical Reports Server (NTRS)

    Hyde, James M.; Seering, Warren P.

    1990-01-01

    Spacecraft, space-borne robotic systems, and manufacturing equipment often utilize lightweight materials and configurations that give rise to vibration problems. Prior research has led to the development of input command pre-shapers that can significantly reduce residual vibration. These shapers exhibit marked insensitivity to errors in natural frequency estimates and can be combined to minimize vibration at more than one frequency. This paper presents a method for the development of multiple mode input shapers which are simpler to implement than previous designs and produce smaller system response delays. The new technique involves the solution of a group of simultaneous non-linear impulse constraint equations. The resulting shapers were tested on a model of MACE, an MIT/NASA experimental flexible structure.
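    For context, the conventional way to combine single-mode shapers, convolving a zero-vibration (ZV) shaper for each mode, can be sketched as below; this is the baseline approach that the paper's simultaneous nonlinear constraint formulation improves on, and the frequencies and damping ratios are made-up values rather than MACE parameters.

```python
import numpy as np

def zv_shaper(freq_hz, zeta, dt):
    """Two-impulse zero-vibration (ZV) shaper for one lightly damped mode,
    returned as a discrete impulse sequence sampled at dt."""
    wd = 2.0 * np.pi * freq_hz * np.sqrt(1.0 - zeta**2)      # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)
    times = np.array([0.0, np.pi / wd])
    seq = np.zeros(int(round(times[-1] / dt)) + 1)
    for a, t in zip(amps, times):
        seq[int(round(t / dt))] += a
    return seq

dt = 0.001
two_mode = np.convolve(zv_shaper(1.0, 0.02, dt), zv_shaper(3.5, 0.02, dt))
# shaped_command = np.convolve(raw_command, two_mode)[:len(raw_command)]
print(two_mode.sum())   # impulse amplitudes still sum to 1, so steady-state gain is preserved
```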

  14. Systems and methods for improved telepresence

    DOEpatents

    Anderson, Matthew O.; Willis, W. David; Kinoshita, Robert A.

    2005-10-25

    The present invention provides a modular, flexible system for deploying multiple video perception technologies. The telepresence system of the present invention is capable of allowing an operator to control multiple mono and stereo video inputs in a hands-free manner. The raw data generated by the input devices is processed into a common zone structure that corresponds to the commands of the user, and the commands represented by the zone structure are transmitted to the appropriate device. This modularized approach permits input devices to be easily interfaced with various telepresence devices. Additionally, new input devices and telepresence devices are easily added to the system and are frequently interchangeable. The present invention also provides a modular configuration component that allows an operator to define a plurality of views each of which defines the telepresence devices to be controlled by a particular input device. The present invention provides a modular flexible system for providing telepresence for a wide range of applications. The modularization of the software components combined with the generalized zone concept allows the systems and methods of the present invention to be easily expanded to encompass new devices and new uses.

  15. Analytical modeling of operating characteristics of premixing-prevaporizing fuel-air mixing passages. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.

    1982-01-01

    A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.

  16. An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems

    DTIC Science & Technology

    1991-12-01

    neural network and the feedforward neural network studied is the single layer perceptron artificial neural network. The recurrent artificial neural network input ... features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input

  17. Learning feature representations with a cost-relevant sparse autoencoder.

    PubMed

    Längkvist, Martin; Loutfi, Amy

    2015-02-01

    There is an increasing interest in the machine learning community to automatically learn feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets and the test error rate is compared to a standard sparse autoencoder and other methods, such as the denoising autoencoder and contractive autoencoder.
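    A minimal sketch of the weighting idea, an autoencoder whose reconstruction loss is scaled by per-feature weights so that noisy or task-irrelevant inputs contribute less, is given below in PyTorch; the architecture, weight values, and sparsity penalty are illustrative assumptions rather than the paper's exact cost-relevant formulation.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Single-hidden-layer autoencoder; capacity is steered by the weighted loss below."""
    def __init__(self, n_in=64, n_hidden=16):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        return torch.sigmoid(self.dec(h)), h

def weighted_sparse_loss(x_hat, x, h, weights, sparsity_coeff=1e-3):
    recon = (weights * (x_hat - x) ** 2).mean()   # per-feature weights de-emphasize noisy inputs
    sparsity = h.abs().mean()                     # simple L1 sparsity penalty (illustrative)
    return recon + sparsity_coeff * sparsity

# One illustrative gradient step on random data with made-up per-feature weights.
model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 64)
weights = torch.linspace(1.0, 0.2, 64)            # e.g. later features considered noisier
x_hat, h = model(x)
loss = weighted_sparse_loss(x_hat, x, h, weights)
opt.zero_grad(); loss.backward(); opt.step()
```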

  18. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    To meet the forecasting targets for key technical indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210

  19. A Bayesian Sampler for Optimization of Protein Domain Hierarchies

    PubMed Central

    2014-01-01

    The process of identifying and modeling functionally divergent subgroups for a specific protein domain class and arranging these subgroups hierarchically has, thus far, largely been done via manual curation. How to accomplish this automatically and optimally is an unsolved statistical and algorithmic problem that is addressed here via Markov chain Monte Carlo sampling. Taking as input a (typically very large) multiple-sequence alignment, the sampler creates and optimizes a hierarchy by adding and deleting leaf nodes, by moving nodes and subtrees up and down the hierarchy, by inserting or deleting internal nodes, and by redefining the sequences and conserved patterns associated with each node. All such operations are based on a probability distribution that models the conserved and divergent patterns defining each subgroup. When we view these patterns as sequence determinants of protein function, each node or subtree in such a hierarchy corresponds to a subgroup of sequences with similar biological properties. The sampler can be applied either de novo or to an existing hierarchy. When applied to 60 protein domains from multiple starting points in this way, it converged on similar solutions with nearly identical log-likelihood ratio scores, suggesting that it typically finds the optimal peak in the posterior probability distribution. Similarities and differences between independently generated, nearly optimal hierarchies for a given domain help distinguish robust from statistically uncertain features. Thus, a future application of the sampler is to provide confidence measures for various features of a domain hierarchy. PMID:24494927

  20. A new method for generating an invariant iris private key based on the fuzzy vault system.

    PubMed

    Lee, Youn Joo; Park, Kang Ryoung; Lee, Sung Joo; Bae, Kwanghyuk; Kim, Jaihie

    2008-10-01

    Cryptographic systems have been widely used in many information security applications. One main challenge that these systems have faced has been how to protect private keys from attackers. Recently, biometric cryptosystems have been introduced as a reliable way of concealing private keys by using biometric data. A fuzzy vault refers to a biometric cryptosystem that can be used to effectively protect private keys and to release them only when legitimate users enter their biometric data. In biometric systems, a critical problem is storing biometric templates in a database. However, fuzzy vault systems do not need to directly store these templates since they are combined with private keys by using cryptography. Previous fuzzy vault systems were designed by using fingerprint, face, and so on. However, there has been no attempt to implement a fuzzy vault system that used an iris. In biometric applications, it is widely known that an iris can discriminate between persons better than other biometric modalities. In this paper, we propose a reliable fuzzy vault system based on local iris features. We extracted multiple iris features from multiple local regions in a given iris image, and the exact values of the unordered set were then produced using the clustering method. To align the iris templates with the new input iris data, a shift-matching technique was applied. Experimental results showed that 128-bit private keys were securely and robustly generated by using any given iris data without requiring prealignment.

  1. Resource constrained flux balance analysis predicts selective pressure on the global structure of metabolic networks.

    PubMed

    Abedpour, Nima; Kollmann, Markus

    2015-11-23

    A universal feature of metabolic networks is their hourglass or bow-tie structure on cellular level. This architecture reflects the conversion of multiple input nutrients into multiple biomass components via a small set of precursor metabolites. However, it is yet unclear to what extent this structural feature is the result of natural selection. We extend flux balance analysis to account for limited cellular resources. Using this model, optimal structure of metabolic networks can be calculated for different environmental conditions. We observe a significant structural reshaping of metabolic networks for a toy-network and E. coli core metabolism if we increase the share of invested resources for switching between different nutrient conditions. Here, hub nodes emerge and the optimal network structure becomes bow-tie-like as a consequence of limited cellular resource constraint. We confirm this theoretical finding by comparing the reconstructed metabolic networks of bacterial species with respect to their lifestyle. We show that bow-tie structure can give a system-level fitness advantage to organisms that live in highly competitive and fluctuating environments. Here, limitation of cellular resources can lead to an efficiency-flexibility tradeoff where it pays off for the organism to shorten catabolic pathways if they are frequently activated and deactivated. As a consequence, generalists that shuttle between diverse environmental conditions should have a more predominant bow-tie structure than specialists that visit just a few isomorphic habitats during their life cycle.
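    For readers unfamiliar with flux balance analysis, the toy sketch below sets up plain FBA as a linear program with scipy and then adds a crude total-flux budget as a stand-in for the paper's cellular resource constraint; the network, costs, and budget are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 takes up A; R2 converts A -> B; R3 converts A -> C;
# R4 ("biomass") consumes B + C. Rows are metabolites A, B, C (steady state S v = 0).
S = np.array([[ 1, -1, -1,  0],
              [ 0,  1,  0, -1],
              [ 0,  0,  1, -1]])
c = np.array([0, 0, 0, -1.0])        # maximize the biomass flux (linprog minimizes c.v)
bounds = [(0, 10)] * 4

# Plain flux balance analysis.
fba = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)

# Crude resource constraint: the summed "enzyme cost" of all fluxes is capped by a budget.
rc = linprog(c, A_eq=S, b_eq=np.zeros(3), A_ub=[[1, 1, 1, 1]], b_ub=[8.0], bounds=bounds)
print("biomass flux, plain FBA:", -fba.fun, "| with resource budget:", -rc.fun)
```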

  2. DiffPy-CMI-Python libraries for Complex Modeling Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billinge, Simon; Juhas, Pavol; Farrow, Christopher

    2014-02-01

    Software to manipulate and describe crystal and molecular structures and set up structural refinements from multiple experimental inputs. Calculation and simulation of structure derived physical quantities. Library for creating customized refinements of atomic structures from available experimental and theoretical inputs.

  3. Feature extraction from high resolution satellite imagery as an input to the development and rapid update of a METRANS geographic information system (GIS).

    DOT National Transportation Integrated Search

    2011-06-01

    This report describes an accuracy assessment of extracted features derived from three : subsets of Quickbird pan-sharpened high resolution satellite image for the area of the : Port of Los Angeles, CA. Visual Learning Systems Feature Analyst and D...

  4. Rotation invariant features for wear particle classification

    NASA Astrophysics Data System (ADS)

    Arof, Hamzah; Deravi, Farzin

    1997-09-01

    This paper investigates the ability of a set of rotation invariant features to classify images of wear particles found in used lubricating oil of machinery. The rotation invariant attribute of the features is derived from the property of the magnitudes of Fourier transform coefficients that do not change with spatial shift of the input elements. By analyzing individual circular neighborhoods centered at every pixel in an image, local and global texture characteristics of an image can be described. A number of input sequences are formed by the intensities of pixels on concentric rings of various radii measured from the center of each neighborhood. Fourier transforming the sequences would generate coefficients whose magnitudes are invariant to rotation. Rotation invariant features extracted from these coefficients were utilized to classify wear particle images that were obtained from a number of different particles captured at different orientations. In an experiment involving images of 6 classes, the circular neighborhood features obtained a 91% recognition rate which compares favorably to a 76% rate achieved by features of a 6 by 6 co-occurrence matrix.
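    The core feature construction can be sketched directly: sample pixel intensities on concentric rings around a point, Fourier transform each ring sequence, and keep the coefficient magnitudes, which are unchanged by rotation because a rotation only circularly shifts the ring samples. The sampling scheme and parameter names below are illustrative (nearest-neighbour sampling rather than whatever interpolation the paper used).

```python
import numpy as np

def ring_fft_features(img, center, radii, n_samples=64):
    """Rotation-invariant features: magnitudes of the FFT of intensities sampled
    on concentric rings around `center` (nearest-neighbour sampling for brevity)."""
    cy, cx = center
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    feats = []
    for r in radii:
        ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        ring = img[ys, xs]
        feats.extend(np.abs(np.fft.rfft(ring)))   # a rotation only circularly shifts `ring`
    return np.array(feats)

feats = ring_fft_features(np.random.rand(64, 64), center=(32, 32), radii=[4, 8, 12, 16])
print(feats.shape)
```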

  5. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.

  6. A new method of building footprints detection using airborne laser scanning data and multispectral image

    NASA Astrophysics Data System (ADS)

    Luo, Yiping; Jiang, Ting; Gao, Shengli; Wang, Xin

    2010-10-01

    This paper presents a new approach for detecting building footprints by combining a registered aerial image with multispectral bands and airborne laser scanning data, synchronously obtained by a Leica-Geosystems ALS40 and an Applanix DACS-301 on the same platform. A two-step method for building detection is presented, consisting of selecting 'building' candidate points and then classifying those candidate points. A digital surface model (DSM) derived from last-pulse laser scanning data was first filtered, and the laser points were classified into 'ground' and 'building or tree' classes using a mathematical morphological filter. Then, the 'ground' points were resampled into a digital elevation model (DEM), and a normalized DSM (nDSM) was generated from the DEM and DSM. The candidate points were selected from the 'building or tree' points by height value and an area threshold in the nDSM. The candidate points were further classified into building points and tree points using the support vector machine (SVM) classification method. Two classification tests were carried out, one using features only from the laser scanning data and one using associated features from both input data sources. The features included height, height finite difference, RGB band values, and so on. The RGB values of the points were acquired by matching the laser scanning data and the image using the collinearity equations. The features of the training points were used as input data for the SVM classification method, and cross validation was used to select the best classification parameters. A determinant function was constructed from the classification parameters, and the class of each candidate point was determined by this function. The results showed that the associated features from both input data sources were superior to features only from the laser scanning data. An accuracy of more than 90% was achieved for buildings using the first kind of features.

  7. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    PubMed Central

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew

    2008-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755

  8. Contribution of sublinear and supralinear dendritic integration to neuronal computations

    PubMed Central

    Tran-Van-Minh, Alexandra; Cazé, Romain D.; Abrahamsson, Therése; Cathala, Laurence; Gutkin, Boris S.; DiGregorio, David A.

    2015-01-01

    Nonlinear dendritic integration is thought to increase the computational ability of neurons. Most studies focus on how supralinear summation of excitatory synaptic responses arising from clustered inputs within single dendrites results in the enhancement of neuronal firing, enabling simple computations such as feature detection. Recent reports have shown that sublinear summation is also a prominent dendritic operation, extending the range of subthreshold input-output (sI/O) transformations conferred by dendrites. Like supralinear operations, sublinear dendritic operations also increase the repertoire of neuronal computations, but feature extraction requires different synaptic connectivity strategies for each of these operations. In this article we review the experimental and theoretical findings describing the biophysical determinants of the three primary classes of dendritic operations: linear, sublinear, and supralinear. We then review a Boolean algebra-based analysis of simplified neuron models, which provides insight into how dendritic operations influence neuronal computations. We highlight how neuronal computations are critically dependent on the interplay of dendritic properties (morphology and voltage-gated channel expression), spiking threshold, and the distribution of synaptic inputs carrying particular sensory features. Finally, we describe how global (scattered) and local (clustered) integration strategies permit the implementation of similar classes of computations, one example being the object feature binding problem. PMID:25852470

  9. Neural net target-tracking system using structured laser patterns

    NASA Astrophysics Data System (ADS)

    Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun

    1996-06-01

    In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. A neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling, and rotation. Features for the neural network to detect the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match them with unknown input features when recognizing the position of the robot end-effector. Since a minimal number of samples are used for the different directions of the robot end-effector, a neural network with generalization capability can handle unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the robot end-effector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control movements of the robot end-effector. Combining the two neural networks for recognizing the end-effector and estimating the motion with the preprocessing stage, the whole system keeps track of the robot end-effector effectively.

  10. Impact of nonzero boresight pointing error on ergodic capacity of MIMO FSO communication systems.

    PubMed

    Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen

    2016-02-22

    A thorough investigation of the impact of nonzero boresight pointing errors on the ergodic capacity of multiple-input/multiple-output (MIMO) free-space optical (FSO) systems with equal gain combining (EGC) reception under different turbulence models, which are assumed to be statistically independent but not necessarily identically distributed (i.n.i.d.), is addressed in this paper. Novel closed-form asymptotic expressions at high signal-to-noise ratio (SNR) for the ergodic capacity of MIMO FSO systems are derived when different geometric arrangements of the receive apertures are considered in order to reduce the effect of the nonzero inherent boresight displacement, which is inevitably present when more than one receive aperture is used. As a result, the asymptotic ergodic capacity of MIMO FSO systems is evaluated over log-normal (LN), gamma-gamma (GG) and exponentiated Weibull (EW) atmospheric turbulence in order to study different turbulence conditions, different sizes of receive apertures, and different aperture averaging conditions. It is concluded that the use of single-input/multiple-output (SIMO) and MIMO techniques can significantly increase the ergodic capacity with respect to the direct path link when the inherent boresight displacement takes small values, i.e. when the spacing among receive apertures is not too large. The effect of nonzero additional boresight errors, which is due to the thermal expansion of the building, is evaluated in multiple-input/single-output (MISO) and single-input/single-output (SISO) FSO systems. Simulation results are further included to confirm the analytical results.

  11. Event-related potentials reveal the relations between feature representations at different levels of abstraction.

    PubMed

    Hannah, Samuel D; Shedden, Judith M; Brooks, Lee R; Grundy, John G

    2016-11-01

    In this paper, we use behavioural methods and event-related potentials (ERPs) to explore the relations between informational and instantiated features, as well as the relation between feature abstraction and rule type. Participants are trained to categorize two species of fictitious animals and then identify perceptually novel exemplars. Critically, two groups are given a perfectly predictive counting rule that, according to Hannah and Brooks (2009. Featuring familiarity: How a familiar feature instantiation influences categorization. Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale, 63, 263-275. Retrieved from http://doi.org/10.1037/a0017919), should orient them to using abstract informational features when categorizing the novel transfer items. A third group is taught a feature list rule, which should orient them to using detailed instantiated features. One counting-rule group were taught their rule before any exposure to the actual stimuli, and the other immediately after training, having learned the instantiations first. The feature-list group were also taught their rule after training. The ERP results suggest that at test, the two counting-rule groups processed items differently, despite their identical rule. This not only supports the distinction that informational and instantiated features are qualitatively different feature representations, but also implies that rules can readily operate over concrete inputs, in contradiction to traditional approaches that assume that rules necessarily act on abstract inputs.

  12. Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2014-07-01

    Feature selection is a very important aspect in the field of machine learning. It entails the search for an optimal subset from a very large data set with a high-dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as the filter method. As for the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNN model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.
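
    A minimal sketch of the filter stage described above, assuming PyWavelets for the discrete wavelet transform and scikit-learn for PCA; the wavelet, decomposition level, and sub-band statistics are illustrative choices rather than the authors' exact configuration (the harmony-search wrapper stage is omitted).

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA

      def wavelet_features(signal, wavelet="db4", level=4):
          """Summary statistics of each wavelet sub-band of one EEG segment."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          feats = []
          for c in coeffs:                     # approximation + detail bands
              feats.extend([c.mean(), c.std(), np.abs(c).max()])
          return np.array(feats)

      rng = np.random.default_rng(1)
      segments = rng.normal(size=(50, 1024))   # 50 EEG segments, 1024 samples each
      X = np.vstack([wavelet_features(s) for s in segments])

      # filter step: keep leading principal components before the wrapper search
      X_reduced = PCA(n_components=5).fit_transform(X)
      print(X_reduced.shape)                   # (50, 5)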

  13. A general prediction model for the detection of ADHD and Autism using structural and functional MRI.

    PubMed

    Sen, Bhaskar; Borle, Neil C; Greiner, Russell; Brown, Matthew R G

    2018-01-01

    This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features is input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known hold-out accuracies on these datasets when using only imaging data, exceeding previously published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.

  14. Multiple response optimization for higher dimensions in factors and responses

    DOE PAGES

    Lu, Lu; Chapman, Jessica L.; Anderson-Cook, Christine M.

    2016-07-19

    When optimizing a product or process with multiple responses, a two-stage Pareto front approach is a useful strategy to evaluate and balance trade-offs between different estimated responses to seek optimum input locations for achieving the best outcomes. After objectively eliminating non-contenders in the first stage by looking for a Pareto front of superior solutions, graphical tools can be used to identify a final solution in the second subjective stage to compare options and match with user priorities. Until now, there have been limitations on the number of response variables and input factors that could effectively be visualized with existing graphical summaries. We present novel graphical tools that can be more easily scaled to higher dimensions, in both the input and response spaces, to facilitate informed decision making when simultaneously optimizing multiple responses. A key aspect of these graphics is that the potential solutions can be flexibly sorted to investigate specific queries, and that multiple aspects of the solutions can be simultaneously considered. As a result, recommendations are made about how to evaluate the impact of the uncertainty associated with the estimated response surfaces on decision making with higher dimensions.
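
    The first, objective stage can be illustrated with a small sketch that removes dominated candidates to leave a Pareto front; it assumes all estimated responses are "larger is better", which is an illustrative simplification.

      import numpy as np

      def pareto_front(responses):
          """Return indices of non-dominated rows of a (n_candidates, n_responses) array."""
          n = responses.shape[0]
          keep = np.ones(n, dtype=bool)
          for i in range(n):
              if not keep[i]:
                  continue
              # a row dominates i if it is >= everywhere and > somewhere
              dominates = (np.all(responses >= responses[i], axis=1)
                           & np.any(responses > responses[i], axis=1))
              if dominates.any():
                  keep[i] = False
          return np.flatnonzero(keep)

      rng = np.random.default_rng(2)
      est = rng.normal(size=(200, 3))     # 200 candidate input locations, 3 responses
      front = pareto_front(est)
      print(len(front), "non-dominated candidates")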

  15. Mof-Tree: A Spatial Access Method To Manipulate Multiple Overlapping Features.

    ERIC Educational Resources Information Center

    Manolopoulos, Yannis; Nardelli, Enrico; Papadopoulos, Apostolos; Proietti, Guido

    1997-01-01

    Investigates the manipulation of large sets of two-dimensional data representing multiple overlapping features, and presents a new access method, the MOF-tree. Analyzes storage requirements and time with respect to window query operations involving multiple features. Examines both the pointer-based and pointerless MOF-tree representations.…

  16. Boolean Modeling of Neural Systems with Point-Process Inputs and Outputs. Part I: Theory and Simulations

    PubMed Central

    Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.

    2010-01-01

    This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a “Boolean-Volterra” model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II). PMID:19517238
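
    A toy sketch of a Boolean-Volterra-style predictor in the spirit described above: binary coefficients select AND products of lagged input spikes, and the selected terms are combined with Boolean addition (OR). The specific lags and terms are illustrative assumptions.

      import numpy as np

      def boolean_volterra(x, first_order_lags, second_order_pairs):
          """Predict a binary output spike train from a binary input spike train."""
          n = len(x)
          y = np.zeros(n, dtype=int)
          for t in range(n):
              terms = []
              for lag in first_order_lags:               # first-order terms
                  if t - lag >= 0:
                      terms.append(x[t - lag])
              for lag1, lag2 in second_order_pairs:      # second-order AND terms
                  if t - max(lag1, lag2) >= 0:
                      terms.append(x[t - lag1] & x[t - lag2])
              y[t] = int(any(terms))                     # Boolean "addition" (OR)
          return y

      rng = np.random.default_rng(3)
      x = (rng.random(200) < 0.1).astype(int)            # sparse input point process
      y = boolean_volterra(x, first_order_lags=[1], second_order_pairs=[(2, 3)])
      print(y.sum(), "output spikes")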

  17. Reconstruction of input functions from a dynamic PET image with sequential administration of 15O2 and C15O2 for noninvasive and ultra-rapid measurement of CBF, OEF, and CMRO2.

    PubMed

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Hiroyuki; Yamamoto, Yuka; Hatakeyama, Tetsuhiro; Nishiyama, Yoshihiro

    2018-05-01

    CBF, OEF, and CMRO2 images can be quantitatively assessed using PET. Their calculation requires arterial input functions, which involve an invasive procedure. The aim of the present study was to develop a non-invasive approach with image-derived input functions (IDIFs) using an image from an ultra-rapid 15O2 and C15O2 protocol. Our technique consists of using a formula to express the input from a tissue curve with rate constants. For multiple tissue curves, the rate constants were estimated so as to minimize the differences among the inputs expressed from the multiple tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 24). The estimated IDIFs reproduced the measured ones well. The differences in the CBF, OEF, and CMRO2 values calculated by the two methods were small (<10%) relative to the invasive method, and the values showed tight correlations (r = 0.97). Simulation showed that errors associated with the assumed parameters were less than ~10%. Our results demonstrate that IDIFs can be reconstructed from tissue curves, suggesting the possibility of a non-invasive technique to assess CBF, OEF, and CMRO2.

  18. MCNP-model for the OAEP Thai Research Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallmeier, F.X.; Tang, J.S.; Primm, R.T. III

    An MCNP input was prepared for the Thai Research Reactor, making extensive use of the MCNP geometry's lattice feature that allows a flexible and easy rearrangement of the core components and the adjustment of the control elements. The geometry was checked for overdefined or undefined zones by two-dimensional plots of cuts through the core configuration with the MCNP geometry plotting capabilities, and by a three-dimensional view of the core configuration with the SABRINA code. Cross sections were defined for a hypothetical core of 67 standard fuel elements and 38 low-enriched uranium fuel elements--all filled with fresh fuel. Three test calculations were performed with the MCNP4B code to obtain the multiplication factor for the cases with control elements fully inserted, fully withdrawn, and at a working position.

  19. An improved version of NCOREL: A computer program for 3-D nonlinear supersonic potential flow computations

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1988-01-01

    A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model, and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Other new features include improved internal grid generation using analytic conformal mappings, a simple geometric input based on the Harris wave drag format originally developed for panel methods, and an internal geometry package.

  20. A universal deep learning approach for modeling the flow of patients under different severities.

    PubMed

    Jiang, Shancheng; Chin, Kwai-Sang; Tsui, Kwok L

    2018-02-01

    The Accident and Emergency Department (A&ED) is the frontline for providing emergency care in hospitals. Unfortunately, relative A&ED resources have failed to keep up with continuously increasing demand in recent years, which leads to overcrowding in A&ED. Knowing the fluctuation of patient arrival volume in advance is an important prerequisite for relieving this pressure. Based on this motivation, the objective of this study is to explore an integrated framework with high accuracy for predicting A&ED patient flow under different triage levels, by combining a novel feature selection process with deep neural networks. Administrative data is collected from an actual A&ED and categorized into five groups based on different triage levels. A genetic algorithm (GA)-based feature selection algorithm is improved and implemented as a pre-processing step for this time-series prediction problem, in order to explore key features affecting patient flow. In our improved GA, a fitness-based crossover is proposed to maintain the joint information of multiple features during the iterative process, instead of the traditional point-based crossover. Deep neural networks (DNNs) are employed as the prediction model to exploit their universal adaptability and high flexibility; a code sketch of the crossover operator follows this abstract. In the model-training process, the learning algorithm is configured around a parallel stochastic gradient descent algorithm. Two effective regularization strategies are integrated into one DNN framework to avoid overfitting. All introduced hyper-parameters are optimized efficiently by grid search in one pass. As for feature selection, our improved GA-based feature selection algorithm outperformed a typical GA and four state-of-the-art feature selection algorithms (mRMR, SAFS, VIFR, and CFR). As for the prediction accuracy of the proposed integrated framework, compared with other frequently used statistical models (GLM, seasonal-ARIMA, ARIMAX, and ANN) and modern machine learning models (SVM-RBF, SVM-linear, RF, and R-LASSO), the proposed integrated "DNN-I-GA" framework achieves higher prediction accuracy on both MAPE and RMSE metrics in pairwise comparisons. The contribution of our study is two-fold. Theoretically, the traditional GA-based feature selection process is improved to have fewer hyper-parameters and higher efficiency, and the joint information of multiple features is maintained by the fitness-based crossover operator. The universal property of DNNs is further enhanced by merging different regularization strategies. Practically, features selected by our improved GA can be used to uncover an underlying relationship between patient flows and input features. Predictive values are significant indicators of patients' demand and can be used by A&ED managers for resource planning and allocation. The high accuracy achieved by the present framework in different cases enhances the reliability of downstream decision making. Copyright © 2017 Elsevier B.V. All rights reserved.
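
    One plausible reading of the fitness-based crossover mentioned above is sketched below: each bit of the child feature mask is copied from the fitter parent with a probability proportional to that parent's relative fitness. This is an illustrative interpretation, not the authors' exact operator.

      import numpy as np

      def fitness_based_crossover(parent_a, parent_b, fit_a, fit_b, rng):
          """Combine two binary feature masks, biased toward the fitter parent."""
          p_take_a = fit_a / (fit_a + fit_b)     # probability of copying from A
          take_a = rng.random(parent_a.shape) < p_take_a
          return np.where(take_a, parent_a, parent_b)

      rng = np.random.default_rng(4)
      a = rng.integers(0, 2, size=20)            # candidate feature subsets (masks)
      b = rng.integers(0, 2, size=20)
      child = fitness_based_crossover(a, b, fit_a=0.82, fit_b=0.74, rng=rng)
      print(child)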

  1. Portable data collection device

    DOEpatents

    French, P.D.

    1996-06-11

    The present invention provides a portable data collection device that has a variety of sensors that are interchangeable with a variety of input ports in the device. The various sensors include a data identification feature that provides information to the device regarding the type of physical data produced by each sensor and therefore the type of sensor itself. The data identification feature enables the device to locate the input port where the sensor is connected and self-adjust when a sensor is removed or replaced. The device is able to collect physical data, whether or not it is a function of time. 7 figs.

  2. Portable data collection device

    DOEpatents

    French, Patrick D.

    1996-01-01

    The present invention provides a portable data collection device that has a variety of sensors that are interchangeable with a variety of input ports in the device. The various sensors include a data identification feature that provides information to the device regarding the type of physical data produced by each sensor and therefore the type of sensor itself. The data identification feature enables the device to locate the input port where the sensor is connected and self-adjust when a sensor is removed or replaced. The device is able to collect physical data, whether or not it is a function of time.

  3. Computer Based Behavioral Biometric Authentication via Multi-Modal Fusion

    DTIC Science & Technology

    2013-03-01

    ...the decisions made by each individual modality. Fusion of features is the simple concatenation of feature vectors from multiple modalities to be... [table fragment: number of features per classifier -- BayesNet with MDL, 330; LibSVM with PCA, 80; J48 with a wrapper evaluator, 11] ...3.5.3 Ensemble Based Decision Level Fusion. In ensemble learning multiple... The high fusion percentages validate our hypothesis that by combining features from multiple modalities, classification accuracy can be improved.

  4. Comparison of Theory and Experiment on Aeroacoustic Loads and Deflections

    NASA Astrophysics Data System (ADS)

    Campos, L. M. B. C.; Bourgine, A.; Bonomi, B.

    1999-01-01

    The correlation of acoustic pressure loads induced by a turbulent wake on a nearby structural panel is considered: this problem is relevant to the acoustic fatigue of aircraft, rocket and satellite structures. Both the correlation of acoustic pressure loads and the panel deflections were measured in an 8-m diameter transonic wind tunnel. Using the measured correlation of acoustic pressures as an input to a finite-element aeroelastic code, the panel response was reproduced. The latter was also satisfactorily reproduced, using again the aeroelastic code, with input given by a theoretical formula for the correlation of acoustic pressures; the derivation of this formula, and the semi-empirical parameters which appear in it, are included in this paper. The comparison of acoustic responses in aeroacoustic wind tunnels (AWT) and progressive wave tubes (PWT) shows that much work needs to be done to bridge that gap; this is important since the PWT is the standard test means, whereas the AWT is more representative of real flight conditions but also more demanding in resources. Since this may be the first instance of successful modelling of acoustic fatigue, it may be appropriate to list briefly the essential "positive" features and associated physical phenomena: (i) a standard aeroelastic structural code can predict acoustic fatigue, provided that the correlation of pressure loads be adequately specified; (ii) the correlation of pressure loads is determined by the interference of acoustic waves, which depends on the exact evaluation of multiple scattering integrals, involving the statistics of random phase shifts; (iii) for the relatively low frequencies (one to a few hundred Hz) of aeroacoustic fatigue, the main cause of random phase effects is scattering by irregular wakes, which are thin on the wavelength scale and appear as partially reflecting rough interfaces. It may also be appropriate to mention some of the "negative" features, to which illusory importance may be attached: (iv) deterministic flow features, even conspicuous or of large scale, such as convection, are not relevant to aeroacoustic fatigue, because they do not produce random phase shifts; (v) local turbulence, of scale much smaller than the wavelength of sound, cannot produce significant random phase shifts, and is also of little consequence to aeroacoustic fatigue; (vi) the precise location of sound sources can become of little consequence, after multiple scattering gives rise to a diffuse sound field; and (vii) there is not much ground for distinction between unsteady flow and sound waves, since at transonic speeds they are both associated with pressures fluctuating in time and space.

  5. Characteristic features of determining the labor input and estimated cost of the development and manufacture of equipment

    NASA Technical Reports Server (NTRS)

    Kurmanaliyev, T. I.; Breslavets, A. V.

    1974-01-01

    The difficulties in obtaining exact calculation data for the labor input and estimated cost are noted. The method of calculating the labor cost of the design work using the provisional normative indexes with respect to individual forms of operations is proposed. Values of certain coefficients recommended for use in the practical calculations of the labor input for the development of new scientific equipment for space research are presented.

  6. Generalized compliant motion primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G. (Inventor)

    1994-01-01

    This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator; a teleoperation sensor; a joint limit generator; a force setpoint generator; a dither function generator, which produces telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return spring motion input is provided by a restoration spring subsystem. The novel features of this invention include use of a single general motion primitive at a remote site to permit the shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.

  7. Performance Analysis of Triple Asymmetrical Optical Micro Ring Resonator with 2 × 2 Input-Output Bus Waveguide

    NASA Astrophysics Data System (ADS)

    Ranjan, Suman; Mandal, Sanjoy

    2017-12-01

    Modeling of a triple asymmetrical optical micro ring resonator (TAOMRR) in the z-domain with a 2 × 2 input-output system, together with a detailed design of its waveguide configuration using the finite-difference time-domain (FDTD) method, is presented. The transfer function in the z-domain of the proposed TAOMRR is determined for different input and output ports using the delay-line signal processing technique. The frequency response analysis is carried out using MATLAB software. Group delay and dispersion characteristics are also determined in MATLAB. The electric field analysis is done using FDTD. The work proposes a new methodology for designing and drawing multiple configurations of coupled ring resonators having multiple input and output ports. Various important parameters such as the coupling coefficients and FSR are also determined.

  8. Performance Analysis of Triple Asymmetrical Optical Micro Ring Resonator with 2 × 2 Input-Output Bus Waveguide

    NASA Astrophysics Data System (ADS)

    Ranjan, Suman; Mandal, Sanjoy

    2018-02-01

    Modeling of a triple asymmetrical optical micro ring resonator (TAOMRR) in the z-domain with a 2 × 2 input-output system, together with a detailed design of its waveguide configuration using the finite-difference time-domain (FDTD) method, is presented. The transfer function in the z-domain of the proposed TAOMRR is determined for different input and output ports using the delay-line signal processing technique. The frequency response analysis is carried out using MATLAB software. Group delay and dispersion characteristics are also determined in MATLAB. The electric field analysis is done using FDTD. The work proposes a new methodology for designing and drawing multiple configurations of coupled ring resonators having multiple input and output ports. Various important parameters such as the coupling coefficients and FSR are also determined.

  9. Intelligent robotic tracker

    NASA Technical Reports Server (NTRS)

    Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.

    1987-01-01

    An intelligent tracker capable of robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system capable of supervised autonomous robotic functions is partitioned into a multiple processor/parallel processing configuration. The system currently interfaces to cameras but has the capability to also use three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is presented with discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed last year. More realistic demonstrations in planning are discussed.

  10. Output transformations and separation results for feedback linearisable delay systems

    NASA Astrophysics Data System (ADS)

    Cacace, F.; Conte, F.; Germani, A.

    2018-04-01

    The class of strict-feedback systems enjoys special properties that make it similar to linear systems. This paper proves that such a class is equivalent, under a change of coordinates, to the wider class of feedback linearisable systems with multiplicative input, when the multiplicative terms are functions of the measured variables only. We apply this result to the control problem of feedback linearisable nonlinear MIMO systems with input and/or output delays. In this way, we provide sufficient conditions under which a separation result holds for output feedback control and moreover a predictor-based controller exists. When these conditions are satisfied, we obtain that the existence of stabilising controllers for arbitrarily large delays in the input and/or the output can be proved for a wider class of systems than previously known.

  11. An efficient approach to ARMA modeling of biological systems with multiple inputs and delays

    NASA Technical Reports Server (NTRS)

    Perrott, M. H.; Cohen, R. J.

    1996-01-01

    This paper presents a new approach to AutoRegressive Moving Average (ARMA or ARX) modeling which automatically seeks the best model order to represent investigated linear, time-invariant systems using their input/output data. The algorithm seeks the ARMA parameterization which accounts for variability in the output of the system due to input activity and contains the fewest parameters required to do so. The unique characteristics of the proposed system identification algorithm are its simplicity and efficiency in handling systems with delays and multiple inputs. We present results of applying the algorithm to simulated data and experimental biological data. In addition, a technique for assessing the error associated with the impulse responses calculated from estimated ARMA parameterizations is presented. The mapping from ARMA coefficients to impulse response estimates is nonlinear, which complicates any effort to construct confidence bounds for the obtained impulse responses. Here a method for obtaining a linearization of this mapping is derived, which leads to a simple procedure to approximate the confidence bounds.
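
    For orientation, a bare-bones batch ARX fit with two inputs and fixed delays is sketched below using ordinary least squares; the published algorithm additionally searches over model orders and delays automatically, which is omitted here.

      import numpy as np

      def arx_fit(y, inputs, na, nb, delays):
          """Least-squares ARX fit: y[t] ~ past y values + delayed past input values."""
          start = max(na, max(d + b for b, d in zip(nb, delays)))
          rows, targets = [], []
          for t in range(start, len(y)):
              row = [-y[t - k] for k in range(1, na + 1)]
              for u, b, d in zip(inputs, nb, delays):
                  row.extend(u[t - d - k] for k in range(b))
              rows.append(row)
              targets.append(y[t])
          theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
          return theta

      rng = np.random.default_rng(5)
      u1, u2 = rng.normal(size=500), rng.normal(size=500)
      y = np.zeros(500)
      for t in range(3, 500):                  # toy system with input delays 1 and 3
          y[t] = 0.6 * y[t - 1] + 0.8 * u1[t - 1] - 0.4 * u2[t - 3] + 0.05 * rng.normal()
      theta = arx_fit(y, [u1, u2], na=1, nb=[1, 1], delays=[1, 3])
      print(theta)                             # approximately [-0.6, 0.8, -0.4]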

  12. Nonstationary multivariate modeling of cerebral autoregulation during hypercapnia.

    PubMed

    Kostoglou, Kyriaki; Debert, Chantel T; Poulin, Marc J; Mitsis, Georgios D

    2014-05-01

    We examined the time-varying characteristics of cerebral autoregulation and hemodynamics during a step hypercapnic stimulus by using recursively estimated multivariate (two-input) models which quantify the dynamic effects of mean arterial blood pressure (ABP) and end-tidal CO2 tension (PETCO2) on middle cerebral artery blood flow velocity (CBFV). Beat-to-beat values of ABP and CBFV, as well as breath-to-breath values of PETCO2 during baseline and sustained euoxic hypercapnia were obtained in 8 female subjects. The multiple-input, single-output models used were based on the Laguerre expansion technique, and their parameters were updated using recursive least squares with multiple forgetting factors. The results reveal the presence of nonstationarities that confirm previously reported effects of hypercapnia on autoregulation, i.e. a decrease in the MABP phase lead, and suggest that the incorporation of PETCO2 as an additional model input yields less time-varying estimates of dynamic pressure autoregulation obtained from single-input (ABP-CBFV) models. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
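
    The recursive estimation idea can be illustrated with a standard recursive least squares update using a single forgetting factor; the study itself uses multiple forgetting factors and Laguerre-expanded ABP/PETCO2 inputs, so this is a simplified sketch.

      import numpy as np

      def rls_update(theta, P, phi, y, lam=0.98):
          """One RLS step: regressor phi, new measurement y, forgetting factor lam."""
          Pphi = P @ phi
          k = Pphi / (lam + phi @ Pphi)        # gain vector
          err = y - phi @ theta                # prediction error
          theta = theta + k * err
          P = (P - np.outer(k, Pphi)) / lam    # covariance update with forgetting
          return theta, P

      rng = np.random.default_rng(6)
      theta, P = np.zeros(2), np.eye(2) * 100.0
      true = np.array([1.5, -0.7])
      for _ in range(300):
          phi = rng.normal(size=2)             # e.g. lagged ABP and PETCO2 terms
          y = phi @ true + 0.01 * rng.normal()
          theta, P = rls_update(theta, P, phi, y)
      print(theta)                             # close to [1.5, -0.7]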

  13. Entanglement enhancement in multimode integrated circuits

    NASA Astrophysics Data System (ADS)

    Léger, Zacharie M.; Brodutch, Aharon; Helmy, Amr S.

    2018-06-01

    The faithful distribution of entanglement in continuous-variable systems is essential to many quantum information protocols. As such, entanglement distillation and enhancement schemes are a cornerstone of many applications. The photon subtraction scheme offers enhancement with a relatively simple setup and has been studied in various scenarios. Motivated by recent advances in integrated optics, particularly the ability to build stable multimode interferometers with squeezed input states, a multimodal extension to the enhancement via photon subtraction protocol is studied. States generated with multiple squeezed input states, rather than a single input source, are shown to be more sensitive to the enhancement protocol, leading to increased entanglement at the output. Numerical results show the gain in entanglement is not monotonic with the number of modes or the degree of squeezing in the additional modes. Consequently, the advantage due to having multiple squeezed input states can be maximized when the number of modes is still relatively small (e.g., four). The requirement for additional squeezing is within the current realm of implementation, making this scheme achievable with present technologies.

  14. Robust input design for nonlinear dynamic modeling of AUV.

    PubMed

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to produce a good-quality dynamic model of an AUV. In an optimal input design problem, the desired input signal depends on the unknown system which is intended to be identified. In this paper, an input design approach which is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
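
    A bare-bones particle swarm optimizer of the kind used to solve the constrained design problem is sketched below; the objective is a placeholder, whereas the actual robust design would average an information measure over the uncertain AUV parameters.

      import numpy as np

      def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          """Minimize objective over box constraints with a basic particle swarm."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
          gbest = pbest[pbest_val.argmin()].copy()
          for _ in range(n_iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)       # enforce input amplitude constraints
              vals = np.array([objective(p) for p in x])
              better = vals < pbest_val
              pbest[better], pbest_val[better] = x[better], vals[better]
              gbest = pbest[pbest_val.argmin()].copy()
          return gbest

      # toy objective standing in for the (negated) expected information measure
      result = pso(lambda p: np.sum((p - 0.3) ** 2),
                   bounds=(np.array([-1.0] * 4), np.array([1.0] * 4)))
      print(result)                            # near [0.3, 0.3, 0.3, 0.3]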

  15. Evaluation of the ACEC Benchmark Suite for Real-Time Applications

    DTIC Science & Technology

    1990-07-23

    1.0 benchmark suite was analyzed with respect to its measurement of Ada real-time features such as tasking, memory management, input/output, scheduling... and delay statements, Chapter 13 features, pragmas, interrupt handling, subprogram overhead, numeric computations, etc. For most of the features that... meant for programming real-time systems. The ACEC benchmarks have been analyzed extensively with respect to their measurement of Ada real-time features

  16. Compact waveguide power divider with multiple isolated outputs

    DOEpatents

    Moeller, Charles P.

    1987-01-01

    A waveguide power divider (10) for splitting electromagnetic microwave power and directionally coupling the divided power includes an input waveguide (21) and reduced height output waveguides (23) interconnected by axial slots (22) and matched loads (25) and (26) positioned at the unused ends of input and output guides (21) and (23) respectively. The axial slots are of a length such that the wave in the input waveguide (21) is directionally coupled to the output waveguides (23). The widths of input guide (21) and output guides (23) are equal and the width of axial slots (22) is one half of the width of the input guide (21).

  17. A compound memristive synapse model for statistical learning through STDP in spiking neural networks

    PubMed Central

    Bill, Johannes; Legenstein, Robert

    2014-01-01

    Memristors have recently emerged as promising circuit elements to mimic the function of biological synapses in neuromorphic computing. The fabrication of reliable nanoscale memristive synapses, that feature continuous conductance changes based on the timing of pre- and postsynaptic spikes, has however turned out to be challenging. In this article, we propose an alternative approach, the compound memristive synapse, that circumvents this problem by the use of memristors with binary memristive states. A compound memristive synapse employs multiple bistable memristors in parallel to jointly form one synapse, thereby providing a spectrum of synaptic efficacies. We investigate the computational implications of synaptic plasticity in the compound synapse by integrating the recently observed phenomenon of stochastic filament formation into an abstract model of stochastic switching. Using this abstract model, we first show how standard pulsing schemes give rise to spike-timing dependent plasticity (STDP) with a stabilizing weight dependence in compound synapses. In a next step, we study unsupervised learning with compound synapses in networks of spiking neurons organized in a winner-take-all architecture. Our theoretical analysis reveals that compound-synapse STDP implements generalized Expectation-Maximization in the spiking network. Specifically, the emergent synapse configuration represents the most salient features of the input distribution in a Mixture-of-Gaussians generative model. Furthermore, the network's spike response to spiking input streams approximates a well-defined Bayesian posterior distribution. We show in computer simulations how such networks learn to represent high-dimensional distributions over images of handwritten digits with high fidelity even in presence of substantial device variations and under severe noise conditions. Therefore, the compound memristive synapse may provide a synaptic design principle for future neuromorphic architectures. PMID:25565943
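
    A toy version of the compound synapse can make the idea concrete: many bistable devices in parallel are switched stochastically by pre/post pairings, and the effective weight is the fraction of devices in the ON state. The switching probabilities below are illustrative assumptions for the abstract stochastic-switching model.

      import numpy as np

      class CompoundSynapse:
          def __init__(self, n_devices=50, p_on=0.1, p_off=0.1, seed=0):
              self.rng = np.random.default_rng(seed)
              self.state = np.zeros(n_devices, dtype=bool)   # all devices start OFF
              self.p_on, self.p_off = p_on, p_off

          def pairing(self, pre_before_post):
              """Apply one STDP pairing pulse to all devices."""
              flips = self.rng.random(self.state.shape)
              if pre_before_post:                            # potentiation-like pulse
                  self.state |= (~self.state) & (flips < self.p_on)
              else:                                          # depression-like pulse
                  self.state &= ~(self.state & (flips < self.p_off))
              return self.weight()

          def weight(self):
              """Synaptic efficacy = fraction of devices in the ON state."""
              return self.state.mean()

      syn = CompoundSynapse()
      for _ in range(30):
          syn.pairing(pre_before_post=True)
      print("after potentiation:", syn.weight())
      for _ in range(30):
          syn.pairing(pre_before_post=False)
      print("after depression:", syn.weight())

    Because fewer OFF devices remain as the weight grows, potentiation saturates on its own, which mirrors the stabilizing weight dependence of STDP noted in the abstract.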

  18. Adaptive non-predictor control of lower triangular uncertain nonlinear systems with an unknown time-varying delay in the input

    NASA Astrophysics Data System (ADS)

    Koo, Min-Sung; Choi, Ho-Lim

    2018-01-01

    In this paper, we consider a control problem for a class of uncertain nonlinear systems in which there exists an unknown time-varying delay in the input and lower triangular nonlinearities. Usually, in the existing results, input delays have been coupled with feedforward (or upper triangular) nonlinearities; in other words, the combination of lower triangular nonlinearities and input delay has been rare. Motivated by the existing controller for input-delayed chain of integrators with nonlinearity, we show that the control of input-delayed nonlinear systems with two particular types of lower triangular nonlinearities can be done. As a control solution, we propose a newly designed feedback controller whose main features are its dynamic gain and non-predictor approach. Three examples are given for illustration.

  19. Case Assignment in Typically Developing English-Speaking Children: A Paired Priming Study

    ERIC Educational Resources Information Center

    Wisman Weil, Lisa Marie

    2013-01-01

    This study utilized a paired priming paradigm to examine the influence of input features on case assignment in typically developing English-speaking children. The Input Ambiguity Hypothesis (Pelham, 2011) was experimentally tested to help explain why children produce subject pronoun case errors. Analyses of third singular "-s" marking on…

  20. Input-Based Grammar Pedagogy: A Comparison of Two Possibilities

    ERIC Educational Resources Information Center

    Marsden, Emma

    2005-01-01

    This article presents arguments for using listening and reading activities as an option for techniques in grammar pedagogy. It describes two possible approaches: Processing Instruction (PI) and Enriched Input (EI), and examples of their key features are included in the appendices. The article goes on to report on a classroom based quasi-experiment…

  1. Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues

    ERIC Educational Resources Information Center

    Comeaux, Ian; McDonald, Janet L.

    2018-01-01

    Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…

  2. DOE-2 sample run book: Version 2.1E

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkelmann, F.C.; Birdsall, B.E.; Buhl, W.F.

    1993-11-01

    The DOE-2 Sample Run Book shows inputs and outputs for a variety of building and system types. The samples start with a simple structure and continue to a high-rise office building, a medical building, three small office buildings, a bar/lounge, a single-family residence, a small office building with daylighting, a single-family residence with an attached sunspace, a "parameterized" building using input macros, and a metric input/output example. All of the samples use Chicago TRY weather. The main purpose of the Sample Run Book is instructional. It shows the relationship of LOADS-SYSTEMS-PLANT-ECONOMICS inputs, displays various input styles, and illustrates many of the basic and advanced features of the program. Many of the sample runs are preceded by a sketch of the building showing its general appearance and the zoning used in the input. In some cases we also show a 3-D rendering of the building as produced by the program DrawBDL. Descriptive material has been added as comments in the input itself. We find that a number of users have loaded these samples onto their editing systems and use them as "templates" for creating new inputs. Another way of using them would be to store various portions as files that can be read into the input using the ##include command, which is part of the Input Macro feature introduced in version DOE-2.1D. Note that the energy rate structures here are the same as in the DOE-2.1D samples, but have been rewritten using the new DOE-2.1E commands and keywords for ECONOMICS. The samples contained in this report are the same as those found on the DOE-2 release files. However, the output numbers that appear here may differ slightly from those obtained from the release files. The output on the release files can be used as a check set to compare results on your computer.

  3. Near real-time traffic routing

    NASA Technical Reports Server (NTRS)

    Yang, Chaowei (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor); Cao, Ying (Inventor)

    2012-01-01

    A near real-time physical transportation network routing system comprising: a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections. Each section has a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data includes static network characteristics, an origin-destination data table, dynamic traffic information data and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.

  4. Experimental demonstration of MIMO-OFDM underwater wireless optical communication

    NASA Astrophysics Data System (ADS)

    Song, Yuhang; Lu, Weichao; Sun, Bin; Hong, Yang; Qu, Fengzhong; Han, Jun; Zhang, Wei; Xu, Jing

    2017-11-01

    In this paper, we propose and experimentally demonstrate a multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) underwater wireless optical communication (UWOC) system, with a gross bit rate of 33.691 Mb/s over a 2-m water channel using low-cost blue light-emitting-diodes (LEDs) and 10-MHz PIN photodiodes. The system is capable of realizing robust data transmission within a relatively large reception area, leading to relaxed alignment requirement for UWOC. In addition, we have compared the system performance of repetition coding OFDM (RC-OFDM), Alamouti-OFDM and multiple-input single-output OFDM (MISO-OFDM) in turbid water. Results show that the Alamouti-OFDM UWOC is more resistant to delay than the RC-OFDM-based system.

  5. Retinal Origin of Direction Selectivity in the Superior Colliculus

    PubMed Central

    Shi, Xuefeng; Barchini, Jad; Ledesma, Hector Acaron; Koren, David; Jin, Yanjiao; Liu, Xiaorong; Wei, Wei; Cang, Jianhua

    2017-01-01

    Detecting visual features in the environment such as motion direction is crucial for survival. The circuit mechanisms that give rise to direction selectivity in a major visual center, the superior colliculus (SC), are entirely unknown. Here, we optogenetically isolate the retinal inputs that individual direction-selective SC neurons receive and find that they are already selective as a result of precisely converging inputs from similarly-tuned retinal ganglion cells. The direction selective retinal input is linearly amplified by the intracollicular circuits without changing its preferred direction or level of selectivity. Finally, using 2-photon calcium imaging, we show that SC direction selectivity is dramatically reduced in transgenic mice that have decreased retinal selectivity. Together, our studies demonstrate a retinal origin of direction selectivity in the SC, and reveal a central visual deficit as a consequence of altered feature selectivity in the retina. PMID:28192394

  6. Combining convolutional neural networks and Hough Transform for classification of images containing lines

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy

    2017-03-01

    In this paper, we propose an expansion of the convolutional neural network (CNN) input features based on the Hough Transform. We perform morphological contrasting of the source image followed by the Hough Transform, and then use the result as input for some of the convolutional filters. Thus, the CNN's computational complexity and number of units are not affected. Morphological contrasting and the Hough Transform are the only additional computational expenses of the introduced CNN input feature expansion. The proposed approach was demonstrated on the example of a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset with symbols taken from Russian passports. Our approach allowed us to reach a noticeable accuracy improvement without much additional computational effort, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, such as pressure ridge analysis and classification.
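
    The proposed input expansion can be sketched as follows: a (naive) Hough line accumulator is computed from a contrast-enhanced, binarized image and stacked with the raw pixels as an extra CNN input channel. Image size, binarization, and accumulator resolution are illustrative assumptions; a production system would use an optimized fast Hough transform.

      import numpy as np

      def hough_accumulator(binary_img, n_theta=64, n_rho=64):
          """Accumulate votes for lines rho = x*cos(theta) + y*sin(theta)."""
          h, w = binary_img.shape
          thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
          rho_max = np.hypot(h, w)
          acc = np.zeros((n_rho, n_theta))
          ys, xs = np.nonzero(binary_img)
          for theta_idx, theta in enumerate(thetas):
              rho = xs * np.cos(theta) + ys * np.sin(theta)       # may be negative
              rho_idx = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
              np.add.at(acc, (rho_idx, theta_idx), 1)
          return acc / max(acc.max(), 1)                          # normalize to [0, 1]

      rng = np.random.default_rng(7)
      img = rng.random((64, 64)) < 0.02
      img[32, :] = True                    # a horizontal line should dominate the votes
      extra_channel = hough_accumulator(img.astype(np.uint8), n_theta=64, n_rho=64)
      stacked = np.stack([img.astype(np.float32), extra_channel.astype(np.float32)])
      print(stacked.shape)                 # (2, 64, 64) -> two-channel CNN input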

  7. Nitrogen budget in a lowland coastal area within the Po River basin (northern Italy): multiple evidences of equilibrium between sources and internal sinks.

    PubMed

    Castaldelli, Giuseppe; Soana, Elisa; Racchetti, Erica; Pierobon, Enrica; Mastrocicco, Micol; Tesini, Enrico; Fano, Elisa Anna; Bartoli, Marco

    2013-09-01

    Detailed studies on pollutants genesis, path and transformation are needed in agricultural catchments facing coastal areas. Here, loss of nutrients should be minimized in order to protect valuable aquatic ecosystems from eutrophication phenomena. A soil system N budget was calculated for a lowland coastal area, the Po di Volano basin (Po River Delta, Northern Italy), characterized by extremely flat topography and fine soil texture and bordering a network of lagoon ecosystems. Main features of this area are the scarce relevance of livestock farming, the intense agriculture, mainly sustained by chemical fertilizers, and the developed network of artificial canals with long water residence time. Average nitrogen input exceeds output terms by ~60 kg N ha(-1) year(-1), a relatively small amount if compared to sub-basins of the same hydrological system. Analysis of dissolved inorganic nitrogen in groundwater suggests limited vertical loss and no accumulation of this element, while a nitrogen mass balance in surface waters indicates a net and significant removal within the watershed. Our data provide multiple evidences of efficient control of the nitrogen excess in this geographical area and we speculate that denitrification in soil and in the secondary drainage system performs this ecosystemic function. Additionally, the significant difference between nitrogen input and nitrogen output loads associated to the irrigation system, which is fed by the N-rich Po River, suggests that this basin metabolizes part of the nitrogen excess produced upstream. The traditionally absent livestock farming practices and consequent low use of manure as fertilizer pose the risk of excess soil mineralization and progressive loss of denitrification capacity in this area.

  8. Stability and performance of propulsion control systems with distributed control architectures and failures

    NASA Astrophysics Data System (ADS)

    Belapurkar, Rohit K.

    Future aircraft engine control systems will be based on a distributed architecture, in which, the sensors and actuators will be connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. Distributed engine control architecture will allow the implementation of advanced, active control techniques along with achieving weight reduction, improvement in performance and lower life cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on ARINC 825 communication protocol. Stability conditions and control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance under the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault tolerant control design methodology is proposed to benefit from the availability of an additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault tolerant control design can help to reduce the performance degradation in presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single input and multiple-input multiple-output control design techniques.

  9. Dynamics of nonlinear feedback control.

    PubMed

    Snippe, H P; van Hateren, J H

    2007-05-01

    Feedback control in neural systems is ubiquitous. Here we study the mathematics of nonlinear feedback control. We compare models in which the input is multiplied by a dynamic gain (multiplicative control) with models in which the input is divided by a dynamic attenuation (divisive control). The gain signal (resp. the attenuation signal) is obtained through a concatenation of an instantaneous nonlinearity and a linear low-pass filter operating on the output of the feedback loop. For input steps, the dynamics of gain and attenuation can be very different, depending on the mathematical form of the nonlinearity and the ordering of the nonlinearity and the filtering in the feedback loop. Further, the dynamics of feedback control can be strongly asymmetrical for increment versus decrement steps of the input. Nevertheless, for each of the models studied, the nonlinearity in the feedback loop can be chosen such that immediately after an input step, the dynamics of feedback control is symmetric with respect to increments versus decrements. Finally, we study the dynamics of the output of the control loops and find conditions under which overshoots and undershoots of the output relative to the steady-state output occur when the models are stimulated with low-pass filtered steps. For small steps at the input, overshoots and undershoots of the output do not occur when the filtering in the control path is faster than the low-pass filtering at the input. For large steps at the input, however, results depend on the model, and for some of the models, multiple overshoots and undershoots can occur even with a fast control path.
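
    A discrete-time sketch of the two schemes compared above: the output drives a static nonlinearity followed by a first-order low-pass filter, and the filtered control signal either multiplies (dynamic gain) or divides (dynamic attenuation) the input. The specific nonlinearities and time constant are assumptions for illustration.

      import numpy as np

      def simulate(x, mode="divisive", tau=20.0):
          """Run one nonlinear feedback control loop over an input sequence x."""
          alpha = 1.0 / tau                    # low-pass filter update rate
          ctrl, y = 1.0, np.zeros_like(x)
          for t in range(len(x)):
              if mode == "multiplicative":
                  y[t] = x[t] * ctrl           # dynamic gain
                  target = 1.0 / (1.0 + y[t])  # nonlinearity: gain shrinks as output grows
              else:
                  y[t] = x[t] / ctrl           # dynamic attenuation
                  target = 1.0 + y[t]          # nonlinearity: attenuation grows with output
              ctrl += alpha * (target - ctrl)  # first-order low-pass filtering
          return y

      step = np.concatenate([np.ones(100), 4.0 * np.ones(200)])   # increment step input
      for mode in ("multiplicative", "divisive"):
          y = simulate(step, mode=mode)
          print(mode, "steady-state output:", round(float(y[-1]), 3))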

  10. A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity

    PubMed Central

    Lomp, Oliver; Faubel, Christian; Schöner, Gregor

    2017-01-01

    Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145

  11. Estimation of surface heat and moisture fluxes over a prairie grassland. II - Two-dimensional time filtering and site variability

    NASA Technical Reports Server (NTRS)

    Crosson, William L.; Smith, Eric A.

    1992-01-01

    The behavior of in situ measurements of surface fluxes obtained during FIFE 1987 is examined by using correlative and spectral techniques in order to assess the significance of fluctuations on various time scales, from subdiurnal up to synoptic, intraseasonal, and annual scales. The objectives of this analysis are: (1) to determine which temporal scales have a significant impact on areal averaged fluxes and (2) to design a procedure for filtering an extended flux time series that preserves the basic diurnal features and longer time scales while removing high frequency noise that cannot be attributed to site-induced variation. These objectives are accomplished through the use of a two-dimensional cross-time Fourier transform, which serves to separate processes inherently related to diurnal and subdiurnal variability from those which impact flux variations on the longer time scales. A filtering procedure is desirable before the measurements are utilized as input to an experimental biosphere model, to ensure that model-based intercomparisons at multiple sites are uncontaminated by input variance not related to true site behavior. Analysis of the spectral decomposition indicates that subdiurnal time scales having periods shorter than 6 hours have little site-to-site consistency and therefore little impact on areal integrated fluxes.
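
    A hedged one-dimensional sketch of the spectral filtering idea (the study uses a two-dimensional cross-time Fourier transform across sites; here a single synthetic flux series sampled every 30 minutes is low-pass filtered to remove periods shorter than 6 hours):

```python
import numpy as np

# Remove fluctuations with periods shorter than a cutoff from one flux
# series; sampling interval and cutoff are the values quoted above.
def lowpass_flux(flux, dt_hours=0.5, cutoff_period_hours=6.0):
    n = len(flux)
    spec = np.fft.rfft(flux)
    freqs = np.fft.rfftfreq(n, d=dt_hours)          # cycles per hour
    spec[freqs > 1.0 / cutoff_period_hours] = 0.0   # drop high-frequency noise
    return np.fft.irfft(spec, n=n)

# synthetic example: a diurnal cycle plus high-frequency noise
t = np.arange(0, 24 * 10, 0.5)                      # ten days, half-hour steps
flux = 200 * np.maximum(0, np.sin(2 * np.pi * t / 24)) + 15 * np.random.randn(len(t))
smoothed = lowpass_flux(flux)
```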

  12. OpenPET: A Flexible Electronics System for Radiotracer Imaging

    NASA Astrophysics Data System (ADS)

    Moses, W. W.; Buckley, S.; Vu, C.; Peng, Q.; Pavlov, N.; Choong, W.-S.; Wu, J.; Jackson, C.

    2010-10-01

    We present the design for OpenPET, an electronics readout system designed for prototype radiotracer imaging instruments. The critical requirements are that it has sufficient performance, channel count, channel density, and power consumption to service a complete camera, and yet be simple, flexible, and customizable enough to be used with almost any detector or camera design. An important feature of this system is that each analog input is processed independently. Each input can be configured to accept signals of either polarity as well as either differential or ground referenced signals. Each signal is digitized by a continuously sampled ADC, which is processed by an FPGA to extract pulse height information. A leading edge discriminator creates a timing edge that is “time stamped” by a TDC implemented inside the FPGA. This digital information from each channel is sent to an FPGA that services 16 analog channels, and information from multiple channels is processed by this FPGA to perform logic for crystal lookup, DOI calculation, calibration, etc. As all of this processing is controlled by firmware and software, it can be modified/customized easily. The system is open source, meaning that all technical data (specifications, schematics and board layout files, source code, and instructions) will be publicly available.

  13. Multi-Target Angle Tracking Algorithm for Bistatic Multiple-Input Multiple-Output (MIMO) Radar Based on the Elements of the Covariance Matrix.

    PubMed

    Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo

    2018-03-07

    In this paper, we consider the problem of tracking the direction of arrival (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision tracking algorithm for target angles is proposed. First, the linear relationship between the covariance matrix difference and the angle difference between adjacent moments was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance matrix elements. Finally, the least squares method was used to estimate the DOD and DOA. The algorithm realizes automatic pairing of the angles and provides better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm. The algorithm provides technical support for the practical application of MIMO radar.

  14. Machine learning strategies for systems with invariance properties

    NASA Astrophysics Data System (ADS)

    Ling, Julia; Jones, Reese; Templeton, Jeremy

    2016-08-01

    In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds Averaged Navier Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
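
    A toy illustration of the two strategies compared above, with an assumed rotation-invariant target and a random forest as the learner (the paper's actual case studies are turbulence closures and crystal elasticity): method 1 trains on an invariant input basis; method 2 trains on raw inputs augmented with random rotations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
y = np.exp(-np.linalg.norm(X, axis=1))          # rotation-invariant ground truth

# Method 1: embed the invariance by training on an invariant input basis.
model_basis = RandomForestRegressor(n_estimators=50, random_state=0)
model_basis.fit(np.linalg.norm(X, axis=1, keepdims=True), y)

# Method 2: train on raw inputs augmented with randomly rotated copies.
angles = rng.uniform(0, 2 * np.pi, size=len(X))
c, s = np.cos(angles), np.sin(angles)
X_rot = np.stack([c * X[:, 0] - s * X[:, 1], s * X[:, 0] + c * X[:, 1]], axis=1)
model_aug = RandomForestRegressor(n_estimators=50, random_state=0)
model_aug.fit(np.vstack([X, X_rot]), np.concatenate([y, y]))

# Evaluate generalization to arbitrary orientations.
X_test = rng.normal(size=(500, 2))
y_test = np.exp(-np.linalg.norm(X_test, axis=1))
err_basis = np.mean((model_basis.predict(np.linalg.norm(X_test, axis=1, keepdims=True)) - y_test) ** 2)
err_aug = np.mean((model_aug.predict(X_test) - y_test) ** 2)
print(err_basis, err_aug)
```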

  15. Machine learning strategies for systems with invariance properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan

    Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.

  16. Machine learning strategies for systems with invariance properties

    DOE PAGES

    Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan

    2016-05-06

    Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.

  17. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    PubMed Central

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
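
    A minimal sketch of a multi-modality convolutional network of the kind described above, with T1, T2, and FA slices stacked as input channels and per-pixel class scores as output (PyTorch; layer sizes and depth are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

# Assumed toy architecture: three modality channels in, four tissue-class
# score maps out (background, WM, GM, CSF).
class MultiModalSegNet(nn.Module):
    def __init__(self, n_modalities=3, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_modalities, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, n_classes, kernel_size=1)

    def forward(self, x):                 # x: (batch, 3, H, W) = stacked T1/T2/FA
        return self.classifier(self.features(x))

net = MultiModalSegNet()
scores = net(torch.randn(1, 3, 96, 96))  # (1, 4, 96, 96) per-pixel class scores
```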

  18. Fold-change detection and scalar symmetry of sensory input fields.

    PubMed

    Shoval, Oren; Goentoro, Lea; Hart, Yuval; Mayo, Avi; Sontag, Eduardo; Alon, Uri

    2010-09-07

    Recent studies suggest that certain cellular sensory systems display fold-change detection (FCD): a response whose entire shape, including amplitude and duration, depends only on fold changes in input and not on absolute levels. Thus, a step change in input from, for example, level 1 to 2 gives precisely the same dynamical output as a step from level 2 to 4, because the steps have the same fold change. We ask what the benefit of FCD is and show that FCD is necessary and sufficient for sensory search to be independent of multiplying the input field by a scalar. Thus, the FCD search pattern depends only on the spatial profile of the input and not on its amplitude. Such scalar symmetry occurs in a wide range of sensory inputs, such as source strength multiplying diffusing/convecting chemical fields sensed in chemotaxis, ambient light multiplying the contrast field in vision, and protein concentrations multiplying the output in cellular signaling systems. Furthermore, we show that FCD entails two features found across sensory systems, exact adaptation and Weber's law, but that these two features are not sufficient for FCD. Finally, we present a wide class of mechanisms that have FCD, including certain nonlinear feedback and feed-forward loops. We find that bacterial chemotaxis displays feedback within the present class and hence, is expected to show FCD. This can explain experiments in which chemotaxis searches are insensitive to attractant source levels. This study, thus, suggests a connection between properties of biological sensory systems and scalar symmetry stemming from physical properties of their input fields.
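
    The FCD property can be restated compactly; the following is a direct formalization of the definition given in the abstract (the notation is assumed here, not taken from the paper):

```latex
% Fold-change detection (FCD): let y[u](t) denote the output elicited by
% input u(t) from an adapted initial state.  FCD holds when scaling the
% entire input by any constant p > 0 leaves the output unchanged:
\[
  y[p\,u](t) = y[u](t), \qquad \forall\, p > 0,\; t \ge 0,
\]
% so a step from 1 to 2 and a step from 2 to 4 (equal fold change) produce
% identical responses, and a search strategy driven by y is invariant to
% multiplying the whole input field by a scalar.
```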

  19. Substantial soil organic carbon retention along floodplains of mountain streams

    NASA Astrophysics Data System (ADS)

    Sutfin, Nicholas A.; Wohl, Ellen

    2017-07-01

    Small, snowmelt-dominated mountain streams have the potential to store substantial organic carbon in floodplain sediment because of high inputs of particulate organic matter, relatively lower temperatures compared with lowland regions, and potential for increased moisture conditions. This work (i) quantifies mean soil organic carbon (OC) content along 24 study reaches in the Colorado Rocky Mountains using 660 soil samples, (ii) identifies potential controls of OC content based on soil properties and spatial position with respect to the channel, and (iii) examines soil properties and OC across various floodplain geomorphic features in the study area. Stepwise multiple linear regression (adjusted r2 = 0.48, p < 0.001) indicates that percentage of silt and clay, sample depth, percent sand, distance from the channel, and relative elevation from the channel are significant predictors of OC content in the study area. Principal component analysis indicates limited separation between geomorphic floodplain features based on predictors of OC content. A lack of significant differences among floodplain features suggests that the systematic random sampling employed in this study can capture the variability of OC across floodplains in the study area. Mean floodplain OC (6.3 ± 0.3%) is more variable but on average greater than values in uplands (1.5 ± 0.08% to 2.2 ± 0.14%) of the Colorado Front Range and higher than published values from floodplains in other regions, particularly those of larger rivers.

  20. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free trajectory tracking approach for multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforwardly, without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. Experimental results are given.

  1. Evaluating the Sensitivity of Agricultural Model Performance to Different Climate Inputs: Supplemental Material

    NASA Technical Reports Server (NTRS)

    Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.

    2015-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis that is bias corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.

  2. Evaluating the sensitivity of agricultural model performance to different climate inputs

    PubMed Central

    Glotter, Michael J.; Moyer, Elisabeth J.; Ruane, Alex C.; Elliott, Joshua W.

    2017-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled to observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely-used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources – reanalysis, reanalysis bias-corrected with observed climate, and a control dataset – and compared to observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by un-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. However, some issues persist for all choices of climate inputs: crop yields appear oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves. PMID:29097985

  3. Device design and signal processing for multiple-input multiple-output multimode fiber links

    NASA Astrophysics Data System (ADS)

    Appaiah, Kumar; Vishwanath, Sriram; Bank, Seth R.

    2012-01-01

    Multimode fibers (MMFs) are limited in data rate capabilities owing to modal dispersion. However, their large core diameter simplifies alignment and packaging, and makes them attractive for short and medium length links. Recent research has shown that the use of signal processing and techniques such as multiple-input multiple-output (MIMO) can greatly improve the data rate capabilities of multimode fibers. In this paper, we review recent experimental work using MIMO and signal processing for multimode fibers, and the improvements in data rates achievable with these techniques. We then present models to design as well as simulate the performance benefits obtainable with arrays of lasers and detectors in conjunction with MIMO, using channel capacity as the metric to optimize. We also discuss some aspects related to complexity of the algorithms needed for signal processing and discuss techniques for low complexity implementation.
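
    The abstract uses channel capacity as the optimization metric; a generic equal-power MIMO capacity computation (an assumed form of that metric, not the paper's full fiber channel model) looks like this:

```python
import numpy as np

# Capacity in bits/s/Hz for a channel matrix H (n_r x n_t) with equal
# power allocation across transmitters; H here is a random placeholder,
# not a modal fiber channel.
def mimo_capacity(H, snr):
    n_r, n_t = H.shape
    gram = H @ H.conj().T
    return np.real(np.log2(np.linalg.det(np.eye(n_r) + (snr / n_t) * gram)))

rng = np.random.default_rng(1)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
print(mimo_capacity(H, snr=10 ** (20 / 10)))      # capacity at 20 dB SNR
```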

  4. A Multifactor Approach to Research in Instructional Technology.

    ERIC Educational Resources Information Center

    Ragan, Tillman J.

    In a field such as instructional design, explanations of educational outcomes must necessarily consider multiple input variables. To adequately understand the contribution made by the independent variables, it is helpful to have a visual conception of how the input variables interrelate. Two variable models are adequately represented by a two…

  5. Production Functions for Water Delivery Systems: Analysis and Estimation Using Dual Cost Function and Implicit Price Specifications

    NASA Astrophysics Data System (ADS)

    Teeples, Ronald; Glyer, David

    1987-05-01

    Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.

  6. Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.

    PubMed

    Ho, Kevin I-J; Leung, Chi-Sing; Sum, John

    2010-06-01

    In the last two decades, many online fault/noise injection algorithms have been developed to attain a fault-tolerant neural network. However, not much theoretical work related to their convergence and objective functions has been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault. Based on the Gladyshev theorem, we show that the convergence of these six online algorithms is almost sure. Moreover, their true objective functions being minimized are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is the simple mean square training error. Thus, injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. Similar to injecting additive input noise, the objective functions of the other fault/noise-injection-based online algorithms contain a mean square error term and a specialized regularization term.
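
    For intuition on the first result (additive input noise acting as a Tikhonov regularizer), a standard small-noise expansion for a generic smooth model f_w is sketched below; the paper derives the exact objective functions for RBF networks, which may differ in detail from this approximation.

```latex
% Small additive input noise approximately adds a gradient (Tikhonov-like)
% penalty to the mean square error:
\[
  \mathbb{E}_{x,d,\varepsilon}\bigl[(f_w(x+\varepsilon)-d)^2\bigr]
  \;\approx\;
  \mathbb{E}_{x,d}\bigl[(f_w(x)-d)^2\bigr]
  \;+\;
  \sigma^2\,\mathbb{E}_x\bigl[\lVert\nabla_x f_w(x)\rVert^2\bigr],
  \qquad \varepsilon\sim\mathcal{N}(0,\sigma^2 I).
\]
```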

  7. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

    We propose a segmentation algorithm for texture images for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture image features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network are constructed in two parts: first, to reduce the effects of image noise on the features, a set of sigmoid functions is chosen depending on the type of feature; second, to enhance the contrast of the features, we adopt fuzzy mapping functions. The number of clusters in the output layer can be increased automatically by an auto-growing mechanism whenever a new pattern occurs. Experimental results and original or segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C language, runs on a SUN-4/330 SPARCstation with an image board IT-150 and a CCD camera.

  8. Portable data collection device with self identifying probe

    DOEpatents

    French, P.D.

    1998-11-17

    The present invention provides a portable data collection device that has a variety of sensors that are interchangeable with a variety of input ports in the device. The various sensors include a data identification feature that provides information to the device regarding the type of physical data produced by each sensor and therefore the type of sensor itself. The data identification feature enables the device to locate the input port where the sensor is connected and self adjust when a sensor is removed or replaced. The device is able to collect physical data, whether or not a function of time. The sensor may also store a unique sensor identifier. 13 figs.

  9. Portable data collection device with self identifying probe

    DOEpatents

    French, Patrick D.

    1998-01-01

    The present invention provides a portable data collection device that has a variety of sensors that are interchangeable with a variety of input ports in the device. The various sensors include a data identification feature that provides information to the device regarding the type of physical data produced by each sensor and therefore the type of sensor itself. The data identification feature enables the device to locate the input port where the sensor is connected and self adjust when a sensor is removed or replaced. The device is able to collect physical data, whether or not a function of time. The sensor may also store a unique sensor identifier.

  10. Contributions of Rod and Cone Pathways to Retinal Direction Selectivity Through Development

    PubMed Central

    Rosa, Juliana M.; Morrie, Ryan D.; Baertsch, Hans C.

    2016-01-01

    Direction selectivity is a robust computation across a broad stimulus space that is mediated by activity of both rod and cone photoreceptors through the ON and OFF pathways. However, rods, S-cones, and M-cones activate the ON and OFF circuits via distinct pathways and the relative contribution of each to direction selectivity is unknown. Using a variety of stimulation paradigms, pharmacological agents, and knockout mice that lack rod transduction, we found that inputs from the ON pathway were critical for strong direction-selective (DS) tuning in the OFF pathway. For UV light stimulation, the ON pathway inputs to the OFF pathway originated with rod signaling, whereas for visible stimulation, the ON pathway inputs to the OFF pathway originated with both rod and M-cone signaling. Whole-cell voltage-clamp recordings revealed that blocking the ON pathway reduced directional tuning in the OFF pathway via a reduction in null-side inhibition, which is provided by OFF starburst amacrine cells (SACs). Consistent with this, our recordings from OFF SACs confirmed that signals originating in the ON pathway contribute to their excitation. Finally, we observed that, for UV stimulation, ON contributions to OFF DS tuning matured earlier than direct signaling via the OFF pathway. These data indicate that the retina uses multiple strategies for computing DS responses across different colors and stages of development. SIGNIFICANCE STATEMENT The retina uses parallel pathways to encode different features of the visual scene. In some cases, these distinct pathways converge on circuits that mediate a distinct computation. For example, rod and cone pathways enable direction-selective (DS) ganglion cells to encode motion over a wide range of light intensities. Here, we show that although direction selectivity is robust across light intensities, motion discrimination for OFF signals is dependent upon ON signaling. At eye opening, ON directional tuning is mature, whereas OFF DS tuning is significantly reduced due to a delayed maturation of S-cone to OFF cone bipolar signaling. These results provide evidence that the retina uses multiple strategies for computing DS responses across different stimulus conditions. PMID:27629718

  11. Quantification of Degeneracy in Biological Systems for Characterization of Functional Interactions Between Modules

    PubMed Central

    Li, Yao; Dwivedi, Gaurav; Huang, Wen; Yi, Yingfei

    2012-01-01

    There is an evolutionary advantage in having multiple components with overlapping functionality (i.e., degeneracy) in organisms. While theoretical considerations of degeneracy have been well established in neural networks using information theory, the same concepts have not been developed for differential systems, which form the basis of many biochemical reaction network descriptions in systems biology. Here we establish mathematical definitions of degeneracy, complexity, and robustness that allow for the quantification of these properties in a system. By exciting a dynamical system with noise, the mutual information associated with a selected observable output and the interacting subspaces of input components can be used to define both complexity and degeneracy. The calculation of degeneracy in a biological network is a useful metric for evaluating features such as the sensitivity of a biological network to environmental evolutionary pressure. Using a two-receptor signal transduction network, we find that redundant components will not yield high degeneracy, whereas compensatory mechanisms established by pathway crosstalk will. This form of analysis permits interrogation of large-scale differential systems for non-identical, functionally equivalent features that have evolved to maintain homeostasis during disruption of individual components. PMID:22619750

  12. The development of a clinical outcomes survey research application: Assessment Center.

    PubMed

    Gershon, Richard; Rothrock, Nan E; Hanrahan, Rachel T; Jansky, Liz J; Harniss, Mark; Riley, William

    2010-06-01

    The National Institutes of Health sponsored Patient-Reported Outcome Measurement Information System (PROMIS) aimed to create item banks and computerized adaptive tests (CATs) across multiple domains for individuals with a range of chronic diseases. Web-based software was created to enable a researcher to create study-specific Websites that could administer PROMIS CATs and other instruments to research participants or clinical samples. This paper outlines the process used to develop a user-friendly, free, Web-based resource (Assessment Center) for storage, retrieval, organization, sharing, and administration of patient-reported outcomes (PRO) instruments. Joint Application Design (JAD) sessions were conducted with representatives from numerous institutions in order to supply a general wish list of features. Use Cases were then written to ensure that end user expectations matched programmer specifications. Program development included daily programmer "scrum" sessions, weekly Usability Acceptability Testing (UAT) and continuous Quality Assurance (QA) activities pre- and post-release. Assessment Center includes features that promote instrument development including item histories, data management, and storage of statistical analysis results. This case study of software development highlights the collection and incorporation of user input throughout the development process. Potential future applications of Assessment Center in clinical research are discussed.

  13. Computer program for plotting and fairing wind-tunnel data

    NASA Technical Reports Server (NTRS)

    Morgan, H. L., Jr.

    1983-01-01

    A detailed description of the Langley computer program PLOTWD, which plots and fairs experimental wind-tunnel data, is presented. The program was written for use primarily on the Langley CDC computer and CALCOMP plotters. The fundamental operating features of the program are that the input data are read and written to a random-access file for use during program execution, that the data for a selected run can be sorted and edited to delete duplicate points, and that the data can be plotted and faired using tension splines, least-squares polynomials, or least-squares cubic-spline curves. The most noteworthy feature of the program is the simplicity of the user-supplied input requirements. Several subroutines are also included that can be used to draw grid lines, zero lines, axis scale values and labels, and legends. A detailed description of the program's operational features and each subprogram is presented. The general application of the program is also discussed, together with the input and output for two typical plot types. A listing of the program code, a user guide, and an output description are presented in appendices. The program has been in use at Langley for several years and has proven to be both easy to use and versatile.

  14. Wood texture classification by fuzzy neural networks

    NASA Astrophysics Data System (ADS)

    Gonzaga, Adilson; de Franca, Celso A.; Frere, Annie F.

    1999-03-01

    The majority of scientific papers focusing on wood classification for pencil manufacturing take into account defects and visual appearance. Traditional methodologies are based on texture analysis by co-occurrence matrix, by image modeling, or by tonal measures over the plate surface. In this work, we propose to classify plates of wood without biological defects like insect holes, knots, and cracks, by analyzing their texture. In this methodology we divide the plate image into several rectangular windows, or local areas, and reduce the number of gray levels. From each local area, we compute the histogram of differences and extract texture features, giving them as input to a Local Neuro-Fuzzy Network (LNN). Those features are computed from the histogram of differences instead of the image pixels because of their better performance and illumination independence. Among several features such as mean, contrast, second moment, entropy, and IDN, the last three showed better results for network training. Each LNN output is taken as input to a Partial Neuro-Fuzzy Network (PNFN) classifying a pencil region on the plate. Finally, the outputs from the PNFN are taken as input to a Global Fuzzy Logic stage performing the plate classification. Each pencil classification within the plate is done taking into account each quality index.

  15. A Procedure for Extending Input Selection Algorithms to Low Quality Data in Modelling Problems with Application to the Automatic Grading of Uploaded Assignments

    PubMed Central

    Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis

    2014-01-01

    When selecting relevant inputs in modeling problems with low quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows the extending of different crisp feature selection algorithms to vague data. The partial knowledge about the ordinal of each feature is modelled by means of a possibility distribution, and a ranking is hereby applied to sort these distributions. It will be shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically providing the student with a qualification through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967

  16. Dynamic test input generation for multiple-fault isolation

    NASA Technical Reports Server (NTRS)

    Schaefer, Phil

    1990-01-01

    Recent work in Causal Reasoning has provided practical techniques for multiple fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle. Using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.

  17. Optimal frame-by-frame result combination strategy for OCR in video stream

    NASA Astrophysics Data System (ADS)

    Bulatov, Konstantin; Lynchenko, Aleksander; Krivtsov, Valeriy

    2018-04-01

    This paper describes the problem of combining the classification results of multiple observations of one object. This task can be regarded as a particular case of decision-making using a combination of expert votes with calculated weights. The accuracy of various methods of combining the classification results, depending on different models of the input data, is investigated on the example of frame-by-frame character recognition in a video stream. It is shown experimentally that, when the input data contain no irrelevant observations (here, irrelevant means affected by character localization and segmentation errors), the strategy of choosing the single most competent expert has an advantage. At the same time, this work demonstrates the advantage of combining several of the most competent experts according to the multiplication rule or voting when irrelevant samples are present in the input data.
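
    A small sketch of the combination strategies compared above, assuming each frame produces per-class scores and a competence weight (names and the exact weighting are illustrative):

```python
import numpy as np

# frame_probs: (n_frames, n_classes) per-frame class scores (row-stochastic);
# weights: (n_frames,) competence weight of each observation.
def combine(frame_probs, weights, strategy="product"):
    frame_probs = np.asarray(frame_probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if strategy == "single_best":              # single most competent expert
        combined = frame_probs[np.argmax(weights)]
    elif strategy == "product":                # weighted multiplication rule
        combined = np.prod(frame_probs ** weights[:, None], axis=0)
    else:                                      # weighted voting
        combined = np.zeros(frame_probs.shape[1])
        for p, w in zip(frame_probs, weights):
            combined[np.argmax(p)] += w
    return int(np.argmax(combined))

probs = [[0.7, 0.2, 0.1], [0.4, 0.5, 0.1], [0.6, 0.3, 0.1]]
print(combine(probs, weights=[0.9, 0.3, 0.6], strategy="product"))
```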

  18. Representation learning: a unified deep learning framework for automatic prostate MR segmentation.

    PubMed

    Liao, Shu; Gao, Yaozong; Oto, Aytekin; Shen, Dinggang

    2013-01-01

    Image representation plays an important role in medical image analysis. The key to the success of different medical image analysis algorithms is heavily dependent on how we represent the input data, namely features used to characterize the input image. In the literature, feature engineering remains as an active research topic, and many novel hand-crafted features are designed such as Haar wavelet, histogram of oriented gradient, and local binary patterns. However, such features are not designed with the guidance of the underlying dataset at hand. To this end, we argue that the most effective features should be designed in a learning based manner, namely representation learning, which can be adapted to different patient datasets at hand. In this paper, we introduce a deep learning framework to achieve this goal. Specifically, a stacked independent subspace analysis (ISA) network is adopted to learn the most effective features in a hierarchical and unsupervised manner. The learnt features are adapted to the dataset at hand and encode high level semantic anatomical information. The proposed method is evaluated on the application of automatic prostate MR segmentation. Experimental results show that significant segmentation accuracy improvement can be achieved by the proposed deep learning method compared to other state-of-the-art segmentation approaches.

  19. Low Earth Orbit satellite traffic simulator

    NASA Technical Reports Server (NTRS)

    Hoelzel, John

    1995-01-01

    This paper describes a significant tool for Low Earth Orbit (LEO) capacity analysis, needed to support marketing, economic, and design analysis, known as a Satellite Traffic Simulator (STS). LEO satellites typically use multiple beams to help achieve the desired communication capacity, but the traffic demand in these beams is usually not uniform. Simulation of dynamic, average, and peak expected demand per beam is a very critical part of the marketing, economic, and design analysis necessary to field a viable LEO system. An STS is described in this paper which can simulate voice, data, and FAX traffic carried by LEO satellite beams and Earth Station Gateways. It is applicable world-wide for any LEO satellite constellation operating over any region. For aeronautical applications to LEO satellites, the anticipated aeronautical traffic (Erlangs for each hour of the day to be simulated) is prepared for geographically defined 'area targets' (each major operational region for the respective aircraft) and used as input to the STS. The STS was designed by Constellations Communications Inc. (CCI) and E-Systems for usage in Brazil in accordance with an ESCA/INPE Statement Of Work, and developed by Analytical Graphics Inc. (AGI) to execute on top of its Satellite Tool Kit (STK) commercial software. The STS simulates constellations of LEO satellite orbits, with input of traffic intensity (Erlangs) for each hour of the day generated from area targets (such as Brazilian States), accumulated in custom LEO satellite beams, and then accumulated in Earth Station Gateways. The STS is a very general simulator which can accommodate many forms of orbital element and Walker constellation input; simple beams or any user-defined custom beams; and any location of Gateways. The paper describes some of these features, including the Manual Mode dynamic graphical display of communication links, to illustrate which Gateway links are accessible and which are not at each 'step' of the satellite orbit. In the two Performance Modes, either channel capacity or Grade Of Service (GOS) for objects (satellite beams, Gateways, and an entire satellite) is computed, respectively, by standard traffic table capacity lookup and blocking probability equations. GOS can be input, with the number of channels calculated, or the number of channels can be input, with GOS calculated. Some of the STS Test Procedure approach and results are also described. AGI plans to make the STS features available through their normal commercial STK products. E-Systems is a co-developer, tester, and user of the STS. The Test Procedure for the STS was prepared by E-Systems, as an independent tester for CCI, to support the CCI delivery of the STS to ESCA, for their customer INPE.
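
    The GOS computation described above relies on standard blocking-probability relations; a common choice is the Erlang B formula, sketched below under the assumption that it is the relation used (the STS may instead rely on traffic table lookup):

```python
# Erlang B blocking probability and the smallest channel count meeting a
# target GOS, using the standard stable recursion
#   B(0, A) = 1,  B(k, A) = A*B(k-1, A) / (k + A*B(k-1, A)).
def erlang_b(channels, erlangs):
    b = 1.0
    for k in range(1, channels + 1):
        b = erlangs * b / (k + erlangs * b)
    return b

def channels_for_gos(erlangs, gos):
    """Smallest number of channels whose blocking probability <= gos."""
    n = 1
    while erlang_b(n, erlangs) > gos:
        n += 1
    return n

print(erlang_b(10, 5.0))            # blocking with 10 channels, 5 Erlangs offered
print(channels_for_gos(5.0, 0.02))  # channels needed for a 2% GOS
```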

  20. Effect of Heat Input on Inclusion Evolution Behavior in Heat-Affected Zone of EH36 Shipbuilding Steel

    NASA Astrophysics Data System (ADS)

    Sun, Jincheng; Zou, Xiaodong; Matsuura, Hiroyuki; Wang, Cong

    2018-03-01

    The effects of heat input parameters on inclusion and microstructure characteristics have been investigated using welding thermal simulations. Inclusion features from heat-affected zones (HAZs) were profiled. It was found that, under heat input of 120 kJ/cm, Al-Mg-Ti-O-(Mn-S) composite inclusions can act effectively as nucleation sites for acicular ferrites. However, this ability disappears when the heat input is increased to 210 kJ/cm. In addition, confocal scanning laser microscopy (CSLM) was used to document possible inclusion-microstructure interactions, shedding light on how inclusions assist beneficial transformations toward property enhancement.

  1. Easy boundary definition for EGUN

    NASA Astrophysics Data System (ADS)

    Becker, R.

    1989-06-01

    The relativistic electron optics program EGUN [1] has reached a broad distribution, and many users have asked for an easier way of boundary input. A preprocessor to EGUN has been developed that accepts polygonal input of boundary points, and offers features such as rounding off of corners, shifting and squeezing of electrodes and simple input of slanted Neumann boundaries. This preprocessor can either be used on a PC that is linked to a mainframe using the FORTRAN version of EGUN, or in connection with the version EGNc, which also runs on a PC. In any case, direct graphic response on the PC greatly facilitates the creation of correct input files for EGUN.

  2. Robust recognition of handwritten numerals based on dual cooperative network

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Choi, Yeongwoo

    1992-01-01

    An approach to robust recognition of handwritten numerals using two networks operating in parallel is presented. The first network uses inputs in Cartesian coordinates, and the second network uses the same inputs transformed into polar coordinates. We describe how the proposed approach achieves robustness to local and global variations of the input numerals by handling inputs both in Cartesian coordinates and in their polar-transformed counterparts. The required network structures and their learning scheme are discussed. Experimental results show that by tracking only a small number of distinctive features for each teaching numeral in each coordinate system, the proposed system can provide robust recognition of handwritten numerals.
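
    A sketch of the polar resampling that could feed the second network (illustrative; the abstract does not specify the exact transform or sampling grid):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Resample a Cartesian image on a (radius, angle) grid centered on the image.
def to_polar(img, n_radii=32, n_angles=64):
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(0, min(cy, cx), n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    rows = cy + r * np.sin(a)
    cols = cx + r * np.cos(a)
    return map_coordinates(img, [rows, cols], order=1)   # bilinear sampling

digit = np.zeros((28, 28)); digit[6:22, 12:16] = 1.0     # toy stroke
polar_view = to_polar(digit)                              # (32, 64) polar image
```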

  3. Effect of Heat Input on Inclusion Evolution Behavior in Heat-Affected Zone of EH36 Shipbuilding Steel

    NASA Astrophysics Data System (ADS)

    Sun, Jincheng; Zou, Xiaodong; Matsuura, Hiroyuki; Wang, Cong

    2018-06-01

    The effects of heat input parameters on inclusion and microstructure characteristics have been investigated using welding thermal simulations. Inclusion features from heat-affected zones (HAZs) were profiled. It was found that, under heat input of 120 kJ/cm, Al-Mg-Ti-O-(Mn-S) composite inclusions can act effectively as nucleation sites for acicular ferrites. However, this ability disappears when the heat input is increased to 210 kJ/cm. In addition, confocal scanning laser microscopy (CSLM) was used to document possible inclusion-microstructure interactions, shedding light on how inclusions assist beneficial transformations toward property enhancement.

  4. Optimal control of LQR for discrete time-varying systems with input delays

    NASA Astrophysics Data System (ADS)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, with the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input that minimises the performance index is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out, and its results show that our two approaches are both feasible and very effective.

  5. Academic Language in Shared Book Reading: Parent and Teacher Input to Mono- and Bilingual Preschoolers

    ERIC Educational Resources Information Center

    Aarts, Rian; Demir-Vegter, Serpil; Kurvers, Jeanne; Henrichs, Lotte

    2016-01-01

    The current study examined academic language (AL) input of mothers and teachers to 15 monolingual Dutch and 15 bilingual Turkish-Dutch 4- to 6-year-old children and its relationships with the children's language development. At two times, shared book reading was videotaped and analyzed for academic features: lexical diversity, syntactic…

  6. Sequential and Mixed Genetic Algorithm and Learning Automata (SGALA, MGALA) for Feature Selection in QSAR

    PubMed Central

    MotieGhader, Habib; Gharaghani, Sajjad; Masoudi-Sobhanzadeh, Yosef; Masoudi-Nejad, Ali

    2017-01-01

    Feature selection is of great importance in Quantitative Structure-Activity Relationship (QSAR) analysis. This problem has been solved using meta-heuristic algorithms such as GA, PSO, ACO, and so on. In this work, two novel hybrid meta-heuristic algorithms, i.e., Sequential GA and LA (SGALA) and Mixed GA and LA (MGALA), which are based on genetic algorithms and learning automata, are proposed for QSAR feature selection. The SGALA algorithm uses the advantages of the genetic algorithm and learning automata sequentially, and the MGALA algorithm uses the advantages of the genetic algorithm and learning automata simultaneously. We applied our proposed algorithms to select the minimum possible number of features from three different datasets, and we observed that the MGALA and SGALA algorithms had the best outcome, both individually and on average, compared to other feature selection algorithms. Through comparison of our proposed algorithms, we deduced that the rate of convergence to the optimal result in the MGALA and SGALA algorithms was better than that of the GA, ACO, PSO, and LA algorithms. In the end, the results of the GA, ACO, PSO, LA, SGALA, and MGALA algorithms were applied as the input of an LS-SVR model, and the results from the LS-SVR models showed that the LS-SVR model had more predictive ability with the input from the SGALA and MGALA algorithms than with the input from all the other mentioned algorithms. Therefore, the results have corroborated that not only is the predictive efficiency of the proposed algorithms better, but their rate of convergence is also superior to that of all the other mentioned algorithms. PMID:28979308

  7. Sequential and Mixed Genetic Algorithm and Learning Automata (SGALA, MGALA) for Feature Selection in QSAR.

    PubMed

    MotieGhader, Habib; Gharaghani, Sajjad; Masoudi-Sobhanzadeh, Yosef; Masoudi-Nejad, Ali

    2017-01-01

    Feature selection is of great importance in Quantitative Structure-Activity Relationship (QSAR) analysis. This problem has been solved using meta-heuristic algorithms such as GA, PSO, ACO, and so on. In this work, two novel hybrid meta-heuristic algorithms, i.e., Sequential GA and LA (SGALA) and Mixed GA and LA (MGALA), which are based on genetic algorithms and learning automata, are proposed for QSAR feature selection. The SGALA algorithm uses the advantages of the genetic algorithm and learning automata sequentially, and the MGALA algorithm uses the advantages of the genetic algorithm and learning automata simultaneously. We applied our proposed algorithms to select the minimum possible number of features from three different datasets, and we observed that the MGALA and SGALA algorithms had the best outcome, both individually and on average, compared to other feature selection algorithms. Through comparison of our proposed algorithms, we deduced that the rate of convergence to the optimal result in the MGALA and SGALA algorithms was better than that of the GA, ACO, PSO, and LA algorithms. In the end, the results of the GA, ACO, PSO, LA, SGALA, and MGALA algorithms were applied as the input of an LS-SVR model, and the results from the LS-SVR models showed that the LS-SVR model had more predictive ability with the input from the SGALA and MGALA algorithms than with the input from all the other mentioned algorithms. Therefore, the results have corroborated that not only is the predictive efficiency of the proposed algorithms better, but their rate of convergence is also superior to that of all the other mentioned algorithms.

  8. A consensual neural network

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Ersoy, O. K.; Swain, P. H.

    1991-01-01

    A neural network architecture called a consensual neural network (CNN) is proposed for the classification of data from multiple sources. Its relation to hierarchical and ensemble neural networks is discussed. CNN is based on the statistical consensus theory and uses nonlinearly transformed input data. The input data are transformed several times, and the different transformed data are applied as if they were independent inputs. The independent inputs are classified using stage neural networks and outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote-sensing data and geographic data are given.
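
    A compact sketch of the consensus idea described above, with assumed nonlinear transforms, stage networks, and a crude competence weighting (not the paper's exact scheme):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# The same input is nonlinearly transformed several times; each transform
# feeds its own stage network, and the stage outputs are combined by
# weighted voting (transforms and weights are illustrative assumptions).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
transforms = [lambda z: z, lambda z: np.tanh(z), lambda z: np.sign(z) * np.sqrt(np.abs(z))]

stages, weights = [], []
for f in transforms:
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(f(X), y)
    stages.append(net)
    weights.append(net.score(f(X), y))          # crude competence weight per stage

def consensual_predict(x):
    votes = sum(w * net.predict_proba(f(x)) for w, net, f in zip(weights, stages, transforms))
    return votes.argmax(axis=1)

print(consensual_predict(X[:5]))
```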

  9. Machinery running state identification based on discriminant semi-supervised local tangent space alignment for feature fusion and extraction

    NASA Astrophysics Data System (ADS)

    Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua

    2017-04-01

    Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. To address this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. First, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, the Shannon entropy of the instantaneous amplitude, and the WPD energy spectrum is extracted to comprehensively characterize the machinery running states. The mixed-domain feature set is then fed into DSS-LTSA for feature fusion and extraction, eliminating redundant information and interference noise. DSS-LTSA extracts the intrinsic structure of both labeled and unlabeled state samples, thereby overcoming the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning. At the same time, class discrimination information is integrated into the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Finally, the fused features are fed into a pattern recognition algorithm to identify the running state. The effectiveness of the proposed method is verified on a gearbox running state identification case, and the results confirm the improved identification accuracy.
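
    To make the mixed-domain feature set concrete, the sketch below extracts statistical features, AR model coefficients, the Shannon entropy of the instantaneous amplitude, and the WPD energy spectrum from a single vibration window. It assumes pywt and scipy are available; the particular statistics, AR order, and decomposition depth are illustrative and not taken from the paper.

```python
# Hedged sketch of the mixed-domain feature vector described above for a single
# vibration window; pywt and scipy are assumed, and the specific statistics,
# AR order, and WPD depth are illustrative, not the paper's exact settings.
import numpy as np
import pywt
from scipy.signal import hilbert

def ar_coefficients(x, order=4):
    """Least-squares AR model coefficients."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coeffs

def mixed_domain_features(x, wavelet="db4", level=3):
    feats = [np.mean(x), np.std(x), np.max(np.abs(x)),            # statistical features
             np.sqrt(np.mean(x ** 2))]                            # RMS
    feats.extend(ar_coefficients(x))                              # AR model coefficients

    amp = np.abs(hilbert(x))                                      # instantaneous amplitude
    p = amp / amp.sum()
    feats.append(-np.sum(p * np.log(p + 1e-12)))                  # Shannon entropy

    wp = pywt.WaveletPacket(x, wavelet=wavelet, maxlevel=level)   # WPD energy spectrum
    energies = [np.sum(node.data ** 2) for node in wp.get_level(level, order="freq")]
    feats.extend(np.asarray(energies) / np.sum(energies))
    return np.asarray(feats)

window = np.random.randn(2048)          # stand-in for one vibration signal window
print(mixed_domain_features(window).shape)
```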

  10. Robust control of a parallel hybrid drivetrain with a CVT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, T.; Schroeder, D.

    1996-09-01

    In this paper the design of a robust control system for a parallel hybrid drivetrain is presented. The drivetrain is based on a continuously variable transmission (CVT) and is therefore a highly nonlinear multiple-input multiple-output (MIMO) system. Input-output linearization offers the possibility of linearizing and decoupling the system. However, since, for example, the vehicle mass varies with the load and the gearbox efficiency depends strongly on the current operating point, an exact linearization of the plant will generally fail. Therefore, a robust control algorithm based on sliding mode is used to control the drivetrain.

  11. Simultaneous Excitation of Multiple-Input Multiple-Output CFD-Based Unsteady Aerodynamic Systems

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    2008-01-01

    A significant improvement to the development of CFD-based unsteady aerodynamic reduced-order models (ROMs) is presented. This improvement involves the simultaneous excitation of the structural modes of the CFD-based unsteady aerodynamic system that enables the computation of the unsteady aerodynamic state-space model using a single CFD execution, independent of the number of structural modes. Four different types of inputs are presented that can be used for the simultaneous excitation of the structural modes. Results are presented for a flexible, supersonic semi-span configuration using the CFL3Dv6.4 code.

  12. Simultaneous Excitation of Multiple-Input Multiple-Output CFD-Based Unsteady Aerodynamic Systems

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    2007-01-01

    A significant improvement to the development of CFD-based unsteady aerodynamic reduced-order models (ROMs) is presented. This improvement involves the simultaneous excitation of the structural modes of the CFD-based unsteady aerodynamic system that enables the computation of the unsteady aerodynamic state-space model using a single CFD execution, independent of the number of structural modes. Four different types of inputs are presented that can be used for the simultaneous excitation of the structural modes. Results are presented for a flexible, supersonic semi-span configuration using the CFL3Dv6.4 code.

  13. Deep Visual Attention Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixations in free-viewing of scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have brought substantial improvements to human attention prediction, CNN-based attention models still need to leverage multi-scale features more efficiently. Our visual attention network is designed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. The model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved through the cooperation of these global and local predictions. The model is learned in a deeply supervised manner, where supervision is fed directly into multiple intermediate layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. The model thus incorporates multi-level saliency predictions within a single network, which significantly reduces the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
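
    A minimal PyTorch sketch of a skip-layer network with deep supervision may help clarify the architecture: saliency is read out from several convolutional stages, every readout is supervised, and the final map fuses all levels. The backbone depth, channel widths, and loss weighting below are assumptions for illustration only, not the authors' network.

```python
# Minimal PyTorch sketch of a skip-layer attention network with deep supervision,
# as described above; the backbone depth, channel widths, and loss weights are
# illustrative assumptions, not the authors' architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipLayerSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # One 1x1 readout per stage: shallow/fine to deep/coarse saliency predictions.
        self.heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (32, 64, 128)])

    def forward(self, x):
        size = x.shape[-2:]
        feats, preds = x, []
        for block, head in zip((self.block1, self.block2, self.block3), self.heads):
            feats = block(feats)
            preds.append(F.interpolate(head(feats), size=size, mode="bilinear",
                                       align_corners=False))
        fused = torch.sigmoid(torch.stack(preds).mean(0))   # cooperation of all levels
        return fused, preds

def deeply_supervised_loss(preds, fused, target):
    # Supervision is applied to every intermediate prediction, not only the output.
    loss = F.binary_cross_entropy(fused, target)
    for p in preds:
        loss = loss + F.binary_cross_entropy_with_logits(p, target)
    return loss

model = SkipLayerSaliency()
img = torch.rand(2, 3, 96, 96)
gt = torch.rand(2, 1, 96, 96)            # stand-in fixation density maps in [0, 1]
fused, preds = model(img)
print(deeply_supervised_loss(preds, fused, gt).item())
```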

  14. Predicting ovarian malignancy: application of artificial neural networks to transvaginal and color Doppler flow US.

    PubMed

    Biagiotti, R; Desii, C; Vanzi, E; Gacci, G

    1999-02-01

    To compare the performance of artificial neural networks (ANNs) with that of multiple logistic regression (MLR) models for predicting ovarian malignancy in patients with adnexal masses by using transvaginal B-mode and color Doppler flow ultrasonography (US). A total of 226 adnexal masses were examined before surgery: Fifty-one were malignant and 175 were benign. The data were divided into training and testing subsets by using a "leave-n-out" method. The training subsets were used to compute the optimum MLR equations and to train the ANNs. The cross-validation subsets were used to estimate the performance of each of the two models in predicting ovarian malignancy. At testing, three-layer back-propagation networks, based on the same input variables selected by using MLR (i.e., women's ages, papillary projections, random echogenicity, peak systolic velocity, and resistance index), had a significantly higher sensitivity than did MLR (96% vs 84%; McNemar test, p = .04). The Brier scores for ANNs were significantly lower than those calculated for MLR (Student t test for paired samples, p = .004). ANNs might have potential for categorizing adnexal masses as either malignant or benign on the basis of multiple variables related to demographic and US features.

  15. Indoor imagery with a 3D through-wall synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Sévigny, Pascale; DiFilippo, David J.; Laneve, Tony; Fournier, Jonathan

    2012-06-01

    Through-wall radar imaging is an emerging technology of great interest to military and police forces operating in an urban environment. A through-wall imaging radar can potentially provide interior room layouts as well as detection and localization of targets of interest within a building. In this paper, we present our through-wall radar system mounted on the side of a vehicle and driven along a path in front of a building of interest. The vehicle is equipped with a LIDAR (Light Detection and Ranging) and motion sensors that provide auxiliary information. The radar uses an ultra wideband frequency-modulated continuous wave (FMCW) waveform to obtain high range resolution. Our system is composed of a vertical linear receive array to discriminate targets in elevation, and two transmit elements operated in a slow multiple-input multiple-output (MIMO) configuration to increase the achievable elevation resolution. High resolution in the along-track direction is obtained through synthetic aperture radar (SAR) techniques. We present experimental results that demonstrate the 3-D capability of the radar. We further demonstrate target detection behind challenging walls, and imagery of internal wall features. Finally, we discuss future work.

  16. Multifarious Roles of Intrinsic Disorder in Proteins Illustrate Its Broad Impact on Plant Biology

    PubMed Central

    Sun, Xiaolin; Rikkerink, Erik H.A.; Jones, William T.; Uversky, Vladimir N.

    2013-01-01

    Intrinsically disordered proteins (IDPs) are highly abundant in eukaryotic proteomes. Plant IDPs play critical roles in plant biology and often act as integrators of signals from multiple plant regulatory and environmental inputs. Binding promiscuity and plasticity allow IDPs to interact with multiple partners in protein interaction networks and provide important functional advantages in molecular recognition through transient protein–protein interactions. Short interaction-prone segments within IDPs, termed molecular recognition features, represent potential binding sites that can undergo disorder-to-order transition upon binding to their partners. In this review, we summarize the evidence for the importance of IDPs in plant biology and evaluate the functions associated with intrinsic disorder in five different types of plant protein families experimentally confirmed as IDPs. Functional studies of these proteins illustrate the broad impact of disorder on many areas of plant biology, including abiotic stress, transcriptional regulation, light perception, and development. Based on the roles of disorder in the protein–protein interactions, we propose various modes of action for plant IDPs that may provide insight for future experimental approaches aimed at understanding the molecular basis of protein function within important plant pathways. PMID:23362206

  17. Regression analysis for LED color detection of visual-MIMO system

    NASA Astrophysics Data System (ADS)

    Banik, Partha Pratim; Saha, Rappy; Kim, Ki-Doo

    2018-04-01

    Color detection from a light-emitting diode (LED) array using a smartphone camera is very difficult in a visual multiple-input multiple-output (visual-MIMO) system. In this paper, we propose a method to determine the LED color from a smartphone camera by applying regression analysis. We employ a multivariate regression model to identify the LED color. After taking a picture of an LED array, we select the LED array region and detect the individual LEDs using an image processing algorithm. We then apply the k-means clustering algorithm to determine the potential colors used for feature extraction of each LED. Finally, we apply the multivariate regression model to predict the color of the transmitted LEDs. We report results for three environmental light conditions: room light, low light (560 lux), and strong light (2450 lux). We evaluate the proposed algorithm in terms of training and test R-squared (%) values and the percentage closeness between transmitted and predicted colors, and we also report the number of distorted test data points based on a distortion bar graph in the CIE1931 color space.
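
    The last two stages of the pipeline, k-means feature extraction followed by multivariate regression, can be sketched with scikit-learn as below. The number of clusters, the synthetic LED data, and the train/test split are assumptions made for illustration.

```python
# Hedged sketch of the two stages described above (k-means feature extraction and
# multivariate regression for LED color prediction), using scikit-learn; the
# number of clusters and the synthetic data are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def led_feature(pixels_rgb, n_colors=3):
    """Cluster the pixels of one detected LED region and return the dominant centers."""
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels_rgb)
    order = np.argsort(-np.bincount(km.labels_, minlength=n_colors))
    return km.cluster_centers_[order].ravel()          # feature vector per LED

# Synthetic stand-in: each "LED region" is noisy pixels around its true RGB value.
true_rgb = rng.uniform(0, 255, size=(60, 3))
features = np.array([led_feature(c + rng.normal(0, 12, size=(200, 3))) for c in true_rgb])

# Multivariate regression from camera-side features to transmitted LED color.
reg = LinearRegression().fit(features[:40], true_rgb[:40])
print("test R^2:", reg.score(features[40:], true_rgb[40:]))
```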

  18. Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel.

    PubMed

    Selvaprabhu, Poongundran; Chinnadurai, Sunil; Li, Jun; Lee, Moon Ho

    2017-08-17

    In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K-user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves a better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes.

  19. Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel

    PubMed Central

    Li, Jun; Lee, Moon Ho

    2017-01-01

    In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K-user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves a better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes. PMID:28817071

  20. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multiple-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are treated as potential target locations. The region surrounding each target area is then segmented as the background region. Image fusion is then applied locally to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, a measure based on Linear Discriminant Analysis (LDA), is employed to rank the feature set, and the most discriminative feature is selected for fusing the whole image. Because feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
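
    The feature-selection step can be sketched as follows: candidate linear combinations of the visible and infrared components are scored by a variance ratio (between-class variance over within-class variance, in the LDA sense) computed over the segmented target and background regions, and the most discriminative combination is kept. The candidate weights and synthetic region statistics below are illustrative assumptions; the paper scores histogram features rather than raw intensities.

```python
# Hedged numpy sketch of the variance-ratio selection step described above; the
# candidate fusion weights and synthetic target/background pixels are illustrative
# assumptions, not the paper's exact configuration.
import numpy as np

rng = np.random.default_rng(1)

def variance_ratio(target_vals, background_vals):
    """LDA-style score: between-class variance over pooled within-class variance."""
    mt, mb = target_vals.mean(), background_vals.mean()
    between = (mt - mb) ** 2
    within = target_vals.var() + background_vals.var()
    return between / (within + 1e-12)

# Stand-in pixel intensities for the segmented target and background regions.
vis_t, ir_t = rng.normal(120, 20, 500), rng.normal(200, 15, 500)
vis_b, ir_b = rng.normal(110, 20, 500), rng.normal(90, 25, 500)

# Candidate fusions: different linear combinations of visible and infrared components.
candidates = [(1.0, 0.0), (0.7, 0.3), (0.5, 0.5), (0.3, 0.7), (0.0, 1.0)]
scores = [variance_ratio(a * vis_t + b * ir_t, a * vis_b + b * ir_b)
          for a, b in candidates]

best = candidates[int(np.argmax(scores))]
print("selected fusion weights (visible, infrared):", best)
```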

  1. Conditioning 3D object-based models to dense well data

    NASA Astrophysics Data System (ADS)

    Wang, Yimin C.; Pyrcz, Michael J.; Catuneanu, Octavian; Boisvert, Jeff B.

    2018-06-01

    Object-based stochastic simulation models are used to generate categorical variable models with a realistic representation of complicated reservoir heterogeneity. A limitation of object-based modeling is the difficulty of conditioning to dense data. One method to achieve data conditioning is to apply optimization techniques. Optimization algorithms can utilize an objective function measuring the conditioning level of each object while also considering the geological realism of the object. Here, an objective function is optimized with implicit filtering which considers constraints on object parameters. Thousands of objects conditioned to data are generated and stored in a database. A set of objects are selected with linear integer programming to generate the final realization and honor all well data, proportions and other desirable geological features. Although any parameterizable object can be considered, objects from fluvial reservoirs are used to illustrate the ability to simultaneously condition multiple types of geologic features. Channels, levees, crevasse splays and oxbow lakes are parameterized based on location, path, orientation and profile shapes. Functions mimicking natural river sinuosity are used for the centerline model. Channel stacking pattern constraints are also included to enhance the geological realism of object interactions. Spatial layout correlations between different types of objects are modeled. Three case studies demonstrate the flexibility of the proposed optimization-simulation method. These examples include multiple channels with high sinuosity, as well as fragmented channels affected by limited preservation. In all cases the proposed method reproduces input parameters for the object geometries and matches the dense well constraints. The proposed methodology expands the applicability of object-based simulation to complex and heterogeneous geological environments with dense sampling.

  2. Feature-based pairwise retinal image registration by radial distortion correction

    NASA Astrophysics Data System (ADS)

    Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.

    2007-03-01

    Fundus camera imaging is widely used to document disorders such as diabetic retinopathy and macular degeneration. Multiple retinal images can be combined through a procedure known as mosaicing to form an image with a larger field of view. Mosaicing typically requires multiple pairwise registrations of partially overlapping images. We describe a new method for pairwise retinal image registration. The proposed method is unique in that the radial distortion due to image acquisition is corrected prior to the geometric transformation. Vessel lines are detected using the Hessian operator and are used as input features for the registration. Since the overlapping region of a retinal image pair is typically small, only a few correspondences are available, limiting the applicable model to an affine transform at best. To recover the distortion due to the curved surface of the retina and the lens optics, a combined approach of an affine model with radial distortion correction is proposed. The parameters of the image acquisition and radial distortion models are estimated in an optimization step that uses Powell's method driven by the vessel line distance. Experimental results using 20 pairs of green-channel images acquired from three subjects with a fundus camera confirmed that the affine model with distortion correction registers retinal image pairs to within 1.88±0.35 pixels (mean ± standard deviation) as assessed by vessel line error, which is 17% better than the affine-only approach. Because the proposed method needs only two correspondences, it can achieve good registration accuracy even when the overlap between retinal image pairs is small.
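
    A compact sketch of the optimization step is given below: an affine transform and a single radial distortion coefficient are fitted jointly by minimizing the mean distance between corresponding vessel points with Powell's method (scipy assumed). The one-parameter distortion model and synthetic correspondences are simplifying assumptions, not the paper's exact formulation.

```python
# Hedged sketch of jointly fitting an affine transform and a radial distortion
# coefficient with Powell's method, as outlined above; the single-parameter
# distortion model and synthetic correspondences are simplifying assumptions.
import numpy as np
from scipy.optimize import minimize

def radial_undistort(pts, k1, center):
    d = pts - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2)

def transform(pts, params, center):
    a, b, c, d, tx, ty, k1 = params
    und = radial_undistort(pts, k1, center)
    A = np.array([[a, b], [c, d]])
    return und @ A.T + np.array([tx, ty])

def vessel_distance(params, src, dst, center):
    # Mean distance between transformed source vessel points and target points.
    return np.mean(np.linalg.norm(transform(src, params, center) - dst, axis=1))

rng = np.random.default_rng(2)
center = np.array([256.0, 256.0])
src = rng.uniform(0, 512, size=(80, 2))                         # vessel points, image 1
true = [1.02, 0.05, -0.04, 0.99, 6.0, -3.0, 1e-7]
dst = transform(src, true, center) + rng.normal(0, 0.3, src.shape)

x0 = [1, 0, 0, 1, 0, 0, 0.0]                                    # identity initialization
res = minimize(vessel_distance, x0, args=(src, dst, center), method="Powell")
print("residual (pixels):", res.fun)
```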

  3. Multilabel learning via random label selection for protein subcellular multilocations prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-01-01

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations, and they adopt a simple strategy of transforming multilocation proteins into multiple single-location proteins, which does not take correlations among different subcellular locations into account. In this paper, a novel multilabel learning method named random label selection (RALS), which extends the simple binary relevance (BR) method, is proposed to learn from multilocation proteins in an effective and efficient way. RALS does not explicitly find the correlations among labels, but rather implicitly attempts to learn label correlations from the data by augmenting the original feature space with randomly selected labels as additional input features. Through a fivefold cross-validation test on a benchmark data set, we demonstrate that the proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method that ignores them, indicating that correlations among different subcellular locations really exist and contribute to improved prediction performance. Experimental results on two benchmark data sets also show that the proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting subcellular multilocations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public use.
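
    The core RALS idea can be sketched as follows: each label's binary classifier receives the original features plus a random subset of the other labels as additional inputs, using the true labels during training and first-pass binary relevance predictions at test time. The dataset, base learner, and subset size below are illustrative assumptions.

```python
# Hedged sketch of the RALS idea described above: each label's binary classifier
# sees the original features plus a random subset of the other labels as extra
# inputs (true labels at training time, first-pass BR predictions at test time).
# Dataset, base learner, and subset size are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X, Y = make_multilabel_classification(n_samples=400, n_features=30, n_labels=3,
                                      n_classes=6, random_state=3)
X_tr, X_te, Y_tr, Y_te = X[:300], X[300:], Y[:300], Y[300:]
n_labels = Y.shape[1]

# First pass: plain binary relevance, used to supply label estimates at test time.
br = [LogisticRegression(max_iter=1000).fit(X_tr, Y_tr[:, j]) for j in range(n_labels)]
Y_te_est = np.column_stack([clf.predict(X_te) for clf in br])

# Second pass: RALS-style classifiers with randomly selected labels as extra features.
preds = np.zeros_like(Y_te)
for j in range(n_labels):
    others = np.delete(np.arange(n_labels), j)
    chosen = rng.choice(others, size=2, replace=False)          # random label selection
    clf = LogisticRegression(max_iter=1000).fit(
        np.hstack([X_tr, Y_tr[:, chosen]]), Y_tr[:, j])
    preds[:, j] = clf.predict(np.hstack([X_te, Y_te_est[:, chosen]]))

print("subset (exact match) accuracy:", np.mean(np.all(preds == Y_te, axis=1)))
```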

  4. Food Recognition: A New Dataset, Experiments, and Results.

    PubMed

    Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo

    2017-05-01

    We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts the corresponding food class for each region. We have experimented with three different classification strategies, also using several visual descriptors. We achieve about 79% food and tray recognition accuracy using convolutional-neural-network-based features. The dataset, as well as the benchmark framework, is available to the research community.

  5. Automated Cough Assessment on a Mobile Platform

    PubMed Central

    2014-01-01

    The development of an Automated System for Asthma Monitoring (ADAM) is described. This consists of a consumer electronics mobile platform running a custom application. The application acquires an audio signal from an external user-worn microphone connected to the device analog-to-digital converter (microphone input). This signal is processed to determine the presence or absence of cough sounds. Symptom tallies and raw audio waveforms are recorded and made easily accessible for later review by a healthcare provider. The symptom detection algorithm is based upon standard speech recognition and machine learning paradigms and consists of an audio feature extraction step followed by a Hidden Markov Model based Viterbi decoder that has been trained on a large database of audio examples from a variety of subjects. Multiple Hidden Markov Model topologies and orders are studied. Performance of the recognizer is presented in terms of the sensitivity and the rate of false alarm as determined in a cross-validation test. PMID:25506590

  6. Operator based integration of information in multimodal radiological search mission with applications to anomaly detection

    NASA Astrophysics Data System (ADS)

    Benedetto, J.; Cloninger, A.; Czaja, W.; Doster, T.; Kochersberger, K.; Manning, B.; McCullough, T.; McLane, M.

    2014-05-01

    Successful performance of a radiological search mission depends on effective utilization of a mixture of signals. Example modalities include EO imagery and gamma radiation data, or radiation data collected during multiple events. In addition, elevation data or spatial proximity can be used to enhance the performance of acquisition systems. State-of-the-art techniques for processing and exploiting complex information manifolds rely on diffusion operators. Our approach involves machine learning techniques based on the analysis of joint data-dependent graphs and their associated diffusion kernels. The significant eigenvectors of the derived fused graph Laplace and Schroedinger operators then form the new representation, which provides integrated features from the heterogeneous input data. The families of data-dependent Laplace and Schroedinger operators on joint data graphs are integrated by means of appropriately designed fusion metrics. These fused representations are used for target and anomaly detection.
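
    A hedged sketch of the graph-operator fusion idea: standardize and concatenate two modalities, build a joint Gaussian-affinity graph, form its Laplacian, and use the smallest nontrivial eigenvectors as fused features (a Laplacian-eigenmaps-style embedding; the Schroedinger-operator variant would add a diagonal potential term). The kernel choice and synthetic modalities below are assumptions for illustration.

```python
# Hedged sketch of the graph-operator fusion described above: a joint affinity
# graph over two standardized modalities, its Laplacian, and the smallest
# nontrivial eigenvectors as fused features. Gaussian-kernel affinities and the
# synthetic modalities are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

rng = np.random.default_rng(4)
eo = rng.normal(size=(200, 10))                 # stand-in EO image features
gamma = rng.normal(size=(200, 4))               # stand-in gamma-radiation features

def standardize(a):
    return (a - a.mean(0)) / (a.std(0) + 1e-12)

joint = np.hstack([standardize(eo), standardize(gamma)])   # joint data-dependent graph
d2 = cdist(joint, joint, "sqeuclidean")
sigma = np.median(d2)
W = np.exp(-d2 / sigma)                                     # Gaussian affinity
np.fill_diagonal(W, 0.0)

D = np.diag(W.sum(1))
L = D - W                                                   # (unnormalized) graph Laplacian
vals, vecs = eigh(L)
fused_features = vecs[:, 1:6]                               # skip the trivial constant eigenvector
print(fused_features.shape)
```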

  7. Mask patterning process using the negative tone chemically amplified resist TOK OEBR-CAN024

    NASA Astrophysics Data System (ADS)

    Irmscher, Mathias; Beyer, Dirk; Butschke, Joerg; Hudek, Peter; Koepernik, Corinna; Plumhoff, Jason; Rausa, Emmanuel; Sato, Mitsuru; Voehringer, Peter

    2004-08-01

    Optimized process parameters using the TOK OEBR-CAN024 resist for high-chrome-load patterning have been determined. A tight linearity tolerance for opaque and clear features, independent of the local pattern density, was the goal of our process integration work. For this purpose we evaluated a new correction method that takes into account electron scattering and process influences. The method is based on matching the measured pattern geometry by iterative back-simulation using multiple Gauss and/or exponential functions. The resulting control function serves as input for the proximity correction software PROXECCO. Approaches with different pattern oversizes and two Cr thicknesses were evaluated and the results are reported. Isolated opaque and clear lines could be realized within a very tight linearity range. The increase in line width of small dense lines induced by the etching process could be corrected only partially.

  8. Transient visual pathway critical for normal development of primate grasping behavior.

    PubMed

    Mundinano, Inaki-Carril; Fox, Dylan M; Kwan, William C; Vidaurre, Diego; Teo, Leon; Homman-Ludiye, Jihane; Goodale, Melvyn A; Leopold, David A; Bourne, James A

    2018-02-06

    An evolutionary hallmark of anthropoid primates, including humans, is the use of vision to guide precise manual movements. These behaviors are reliant on a specialized visual input to the posterior parietal cortex. Here, we show that normal primate reaching-and-grasping behavior depends critically on a visual pathway through the thalamic pulvinar, which is thought to relay information to the middle temporal (MT) area during early life and then swiftly withdraws. Small MRI-guided lesions to a subdivision of the inferior pulvinar subnucleus (PIm) in the infant marmoset monkey led to permanent deficits in reaching-and-grasping behavior in the adult. This functional loss coincided with the abnormal anatomical development of multiple cortical areas responsible for the guidance of actions. Our study reveals that the transient retino-pulvinar-MT pathway underpins the development of visually guided manual behaviors in primates that are crucial for interacting with complex features in the environment.

  9. Recurrent cerebellar architecture solves the motor-error problem.

    PubMed Central

    Porrill, John; Dean, Paul; Stone, James V.

    2004-01-01

    Current views of cerebellar function have been heavily influenced by the models of Marr and Albus, who suggested that the climbing fibre input to the cerebellum acts as a teaching signal for motor learning. It is commonly assumed that this teaching signal must be motor error (the difference between actual and correct motor command), but this approach requires complex neural structures to estimate unobservable motor error from its observed sensory consequences. We have proposed elsewhere a recurrent decorrelation control architecture in which Marr-Albus models learn without requiring motor error. Here, we prove convergence for this architecture and demonstrate important advantages for the modular control of systems with multiple degrees of freedom. These results are illustrated by modelling adaptive plant compensation for the three-dimensional vestibular ocular reflex. This provides a functional role for recurrent cerebellar connectivity, which may be a generic anatomical feature of projections between regions of cerebral and cerebellar cortex. PMID:15255096

  10. Locally excitatory, globally inhibitory oscillator networks: theory and application to scene segmentation

    NASA Astrophysics Data System (ADS)

    Wang, DeLiang; Terman, David

    1995-01-01

    A novel class of locally excitatory, globally inhibitory oscillator networks (LEGION) is proposed and investigated analytically and by computer simulation. The model of each oscillator corresponds to a standard relaxation oscillator with two time scales. The network exhibits a mechanism of selective gating, whereby an oscillator jumping up to its active phase rapidly recruits the oscillators stimulated by the same pattern, while preventing other oscillators from jumping up. We show analytically that with the selective gating mechanism the network rapidly achieves both synchronization within blocks of oscillators that are stimulated by connected regions and desynchronization between different blocks. Computer simulations demonstrate LEGION's promising ability for segmenting multiple input patterns in real time. This model lays a physical foundation for the oscillatory correlation theory of feature binding, and may provide an effective computational framework for scene segmentation and figure/ground segregation.
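
    The oscillator dynamics can be illustrated with a small numerical sketch: two relaxation oscillators of the two-time-scale kind used in LEGION-style models, coupled excitatorily so that oscillators driven by the same stimulus tend to jump to the active phase together. The parameter values and the simple threshold coupling below are illustrative assumptions rather than the paper's exact equations.

```python
# Hedged numerical sketch of two coupled relaxation oscillators of the kind used
# in LEGION-style models (each with a fast excitatory variable x and a slow
# recovery variable y); the parameter values and the simple threshold coupling
# are illustrative assumptions, not the paper's exact formulation.
import numpy as np

eps, gamma, beta = 0.02, 6.0, 0.1     # slow time scale and recovery parameters
I = np.array([0.8, 0.8])              # both oscillators receive stimulation
W = 0.2                               # local excitatory coupling strength
dt, steps = 0.01, 50000

x = np.array([-1.0, 1.5])             # deliberately desynchronized initial states
y = np.array([0.5, 4.0])
trace = np.zeros((steps, 2))

for t in range(steps):
    # Excitatory input from the other oscillator when it is in its active phase.
    S = W * (x[::-1] > 0.0)
    dx = 3.0 * x - x ** 3 + 2.0 - y + I + S             # fast (relaxation) variable
    dy = eps * (gamma * (1.0 + np.tanh(x / beta)) - y)  # slow recovery variable
    x, y = x + dt * dx, y + dt * dy
    trace[t] = x

# After a transient, stimulated oscillators tend to jump to the active phase together.
late = trace[steps // 2:]
print("late-phase mean absolute difference:", np.abs(late[:, 0] - late[:, 1]).mean())
```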

  11. A high-speed digital signal processor for atmospheric radar, part 7.3A

    NASA Technical Reports Server (NTRS)

    Brosnahan, J. W.; Woodard, D. M.

    1984-01-01

    The Model SP-320 device is a monolithic realization of a complex general purpose signal processor, incorporating such features as a 32-bit ALU, a 16-bit x 16-bit combinatorial multiplier, and a 16-bit barrel shifter. The SP-320 is designed to operate as a slave processor to a host general purpose computer in applications such as coherent integration of a radar return signal in multiple ranges, or dedicated FFT processing. Presently available is an I/O module conforming to the Intel Multichannel interface standard; other I/O modules will be designed to meet specific user requirements. The main processor board includes input and output FIFO (First In First Out) memories, both with depths of 4096 W, to permit asynchronous operation between the source of data and the host computer. This design permits burst data rates in excess of 5 MW/s.

  12. Distributed Adaptive Finite-Time Approach for Formation-Containment Control of Networked Nonlinear Systems Under Directed Topology.

    PubMed

    Wang, Yujuan; Song, Yongduan; Ren, Wei

    2017-07-06

    This paper presents a distributed adaptive finite-time control solution to the formation-containment problem for multiple networked systems with uncertain nonlinear dynamics and directed communication constraints. By exploiting the special topology feature of a newly constructed symmetric matrix, the technical difficulty in finite-time formation-containment control arising from the asymmetric Laplacian matrix under one-way directed communication is circumvented. Based upon fractional-power feedback of the local error, an adaptive distributed control scheme is established to drive the leaders into the prespecified formation configuration in finite time. Meanwhile, a distributed adaptive control scheme, independent of the unavailable inputs of the leaders, is designed to keep the followers within a bounded distance from the moving leaders and then to make the followers enter the convex hull shaped by the formation of the leaders in finite time. The effectiveness of the proposed control scheme is confirmed by simulation.

  13. Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing

    NASA Astrophysics Data System (ADS)

    Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.

    2009-05-01

    A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper presents example sensor performance from empirical data collected in laboratory experiments, as well as our approach to designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.

  14. Elbow joint angle and elbow movement velocity estimation using NARX-multiple layer perceptron neural network model with surface EMG time domain parameters.

    PubMed

    Raj, Retheep; Sivanandan, K S

    2017-01-01

    Estimation of elbow dynamics has been the object of numerous investigations. In this work a solution is proposed for estimating elbow movement velocity and elbow joint angle from Surface Electromyography (SEMG) signals. The Surface Electromyography signals are acquired from the biceps brachii muscle of the human arm. Two time-domain parameters, Integrated EMG (IEMG) and Zero Crossing (ZC), are extracted from the Surface Electromyography signal. The relationship between the time-domain parameters IEMG and ZC and the elbow angular displacement and angular velocity during extension and flexion of the elbow is studied. A multiple-input multiple-output model is derived for identifying the kinematics of the elbow. A Nonlinear Auto-Regressive with eXogenous inputs (NARX) structure based multiple layer perceptron neural network (MLPNN) model is proposed for the estimation of elbow joint angle and elbow angular velocity. The proposed NARX MLPNN model is trained using a Levenberg-Marquardt based algorithm. The proposed model estimates the elbow joint angle and elbow movement angular velocity with appreciable accuracy. The model is validated using the regression coefficient value (R). The average regression coefficient value (R) obtained for elbow angular displacement prediction is 0.9641 and for elbow angular velocity prediction is 0.9347. The NARX-structure-based multiple layer perceptron neural network (MLPNN) model can thus be used to estimate the angular displacement and angular velocity of the elbow with good accuracy.
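
    The sketch below illustrates the two time-domain features (IEMG and zero crossings) computed per analysis window and a NARX-style regressor in which lagged exogenous features and lagged outputs feed a multilayer perceptron. The synthetic EMG and angle data, window length, and lag orders are assumptions for illustration, not the study's protocol.

```python
# Hedged sketch of the IEMG / zero-crossing features and a NARX-style MLP
# regressor described above; the synthetic EMG/angle data, window length, and
# lag orders are illustrative assumptions, not the study's protocol.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

def iemg(window):
    return np.sum(np.abs(window))                                # Integrated EMG

def zero_crossings(window):
    return int(np.sum(np.diff(np.signbit(window).astype(int)) != 0))  # Zero Crossing count

# Synthetic stand-ins: an EMG-like signal and an elbow angle trajectory.
n_win, win_len = 400, 64
angle = 60 + 40 * np.sin(np.linspace(0, 8 * np.pi, n_win))
emg = [rng.normal(0, 0.2 + 0.01 * a, win_len) for a in angle]
feats = np.array([[iemg(w), zero_crossings(w)] for w in emg])

# NARX-style regression: lagged exogenous features plus lagged outputs as inputs.
lags = 3
X, y = [], []
for t in range(lags, n_win):
    X.append(np.concatenate([feats[t - lags:t].ravel(), angle[t - lags:t]]))
    y.append(angle[t])
X, y = np.array(X), np.array(y)

mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X[:300], y[:300])
print("test R:", np.corrcoef(mlp.predict(X[300:]), y[300:])[0, 1])
```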

  15. Iterative MMSE Detection for MIMO/BLAST DS-CDMA Systems in Frequency Selective Fading Channels - Achieving High Performance in Fully Loaded Systems

    NASA Astrophysics Data System (ADS)

    Silva, João Carlos; Souto, Nuno; Cercas, Francisco; Dinis, Rui

    A MMSE (Minimum Mean Square Error) DS-CDMA (Direct Sequence-Code Division Multiple Access) receiver coupled with a low-complexity iterative interference suppression algorithm was devised for a MIMO/BLAST (Multiple Input, Multiple Output / Bell Laboratories Layered Space Time) system in order to improve system performance, considering frequency selective fading channels. The scheme is compared against the simple MMSE receiver, for both QPSK and 16QAM modulations, under SISO (Single Input, Single Output) and MIMO systems, the latter with 2Tx by 2Rx and 4Tx by 4Rx (MIMO order 2 and 4 respectively) antennas. To assess its performance in an existing system, the uncoded UMTS HSDPA (High Speed Downlink Packet Access) standard was considered.
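
    The linear MMSE detection step at the heart of such a receiver can be sketched in a few lines for a 4x4 MIMO channel with QPSK symbols, as below. The channel model and noise level are illustrative, and the iterative interference-suppression stage and DS-CDMA despreading of the actual receiver are omitted.

```python
# Hedged sketch of the linear MMSE detection step underlying the receiver above,
# for a 4x4 MIMO channel with QPSK symbols; the channel model and noise level are
# illustrative, and the iterative interference-suppression stage and DS-CDMA
# despreading of the actual receiver are omitted.
import numpy as np

rng = np.random.default_rng(6)
n_tx, n_rx, n_sym, snr_db = 4, 4, 10000, 10
sigma2 = 10 ** (-snr_db / 10)

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
bits = rng.integers(0, 4, size=(n_tx, n_sym))
x = qpsk[bits]

H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=(n_rx, n_sym))
                               + 1j * rng.normal(size=(n_rx, n_sym)))
y = H @ x + noise

# MMSE filter: x_hat = (H^H H + sigma^2 I)^{-1} H^H y
G = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(n_tx)) @ H.conj().T
x_hat = G @ y

detected = np.argmin(np.abs(x_hat[..., None] - qpsk) ** 2, axis=-1)
print("symbol error rate:", np.mean(detected != bits))
```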

  16. A new formation control of multiple underactuated surface vessels

    NASA Astrophysics Data System (ADS)

    Xie, Wenjing; Ma, Baoli; Fernando, Tyrone; Iu, Herbert Ho-Ching

    2018-05-01

    This work investigates a new formation control problem of multiple underactuated surface vessels. The controller design is based on input-output linearisation technique, graph theory, consensus idea and some nonlinear tools. The proposed smooth time-varying distributed control law guarantees that the multiple underactuated surface vessels globally exponentially converge to some desired geometric shape, which is especially centred at the initial average position of vessels. Furthermore, the stability analysis of zero dynamics proves that the orientations of vessels tend to some constants that are dependent on the initial values of vessels, and the velocities and control inputs of the vessels decay to zero. All the results are obtained under the communication scenarios of static directed balanced graph with a spanning tree. Effectiveness of the proposed distributed control scheme is demonstrated using a simulation example.

  17. Coincidence detection in the medial superior olive: mechanistic implications of an analysis of input spiking patterns

    PubMed Central

    Franken, Tom P.; Bremen, Peter; Joris, Philip X.

    2014-01-01

    Coincidence detection by binaural neurons in the medial superior olive underlies sensitivity to interaural time difference (ITD) and interaural correlation (ρ). It is unclear whether this process is akin to a counting of individual coinciding spikes, or rather to a correlation of membrane potential waveforms resulting from converging inputs from each side. We analyzed spike trains of axons of the cat trapezoid body (TB) and auditory nerve (AN) in a binaural coincidence scheme. ITD was studied by delaying “ipsi-” vs. “contralateral” inputs; ρ was studied by using responses to different noises. We varied the number of inputs, the monaural and binaural thresholds, and the coincidence window duration. We examined the physiological plausibility of the output “spike trains” by comparing their rate and their tuning to ITD and ρ with those of binaural cells. We found that multiple inputs are required to obtain a plausible output spike rate. In contrast to previous suggestions, the monaural threshold almost invariably needed to exceed the binaural threshold. Elevation of the binaural threshold to values larger than 2 spikes caused a drastic decrease in rate for a short coincidence window. Longer coincidence windows allowed a lower number of inputs and higher binaural thresholds, but decreased the depth of modulation. Compared to AN fibers, TB fibers allowed higher output spike rates for a low number of inputs, but also generated more monaural coincidences. We conclude that, within the parameter space explored, the temporal patterns of monaural fibers require convergence of multiple inputs to achieve physiological binaural spike rates; that monaural coincidences have to be suppressed relative to binaural ones; and that the neuron has to be sensitive to single binaural coincidences of spikes, for a number of excitatory inputs per side of 10 or less. These findings suggest that the fundamental operation in the mammalian binaural circuit is coincidence counting of single binaural input spikes. PMID:24822037
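
    One plausible reading of the coincidence-counting scheme can be sketched numerically: spikes from several ipsi- and contralateral input fibers are counted within a short coincidence window, and an output spike is produced when the binaural threshold is met, or, undesirably, when one side alone reaches the monaural threshold. The Poisson input trains, window duration, and thresholds below are illustrative assumptions.

```python
# Hedged sketch of a binaural coincidence-counting scheme of the kind analyzed
# above: spikes from several ipsi- and contralateral input fibers are counted in
# a short coincidence window, and an output spike is produced when the binaural
# threshold is met (or, undesirably, when one side alone reaches the monaural
# threshold). The Poisson inputs, window, and thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(7)
dur, dt, rate = 1.0, 1e-4, 150.0            # 1 s at 0.1 ms resolution, 150 spikes/s per fiber
n_inputs_per_side = 6
window = 5e-4                                # 0.5 ms coincidence window
binaural_thr, monaural_thr = 2, 4

def poisson_trains(n):
    return rng.random((n, int(dur / dt))) < rate * dt    # binary spike matrix

ipsi, contra = poisson_trains(n_inputs_per_side), poisson_trains(n_inputs_per_side)
w = int(window / dt)

out_spikes = 0
for t in range(0, ipsi.shape[1] - w, w):
    ni = ipsi[:, t:t + w].sum()
    nc = contra[:, t:t + w].sum()
    binaural = ni >= 1 and nc >= 1 and (ni + nc) >= binaural_thr
    monaural = ni >= monaural_thr or nc >= monaural_thr
    if binaural or monaural:
        out_spikes += 1

print("output rate (spikes/s):", out_spikes / dur)
```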

  18. Unitary vs Multiple Semantics: PET Studies of Word and Picture Processing

    ERIC Educational Resources Information Center

    Bright, P.; Moss, H.; Tyler, L. K.

    2004-01-01

    In this paper we examine a central issue in cognitive neuroscience: are there separate conceptual representations associated with different input modalities (e.g., Paivio, 1971, 1986; Warrington & Shallice, 1984) or do inputs from different modalities converge on to the same set of representations (e.g., Caramazza, Hillis, Rapp, & Romani, 1990;…

  19. 40 CFR 60.155 - Reporting.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (a) The owner or operator of any multiple hearth, fluidized bed, or electric sludge incinerator... kg/Mg (0.75 lb/ton) dry sludge input or less during the most recent performance test, a scrubber... particulate matter emission rate of greater than 0.38 kg/Mg (0.75 lb/ton) dry sludge input during the most...

  20. 40 CFR 60.155 - Reporting.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (a) The owner or operator of any multiple hearth, fluidized bed, or electric sludge incinerator... kg/Mg (0.75 lb/ton) dry sludge input or less during the most recent performance test, a scrubber... particulate matter emission rate of greater than 0.38 kg/Mg (0.75 lb/ton) dry sludge input during the most...
