Sample records for input feature space

  1. Feature space trajectory for distorted-object classification and pose estimation in synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Shenoy, Rajesh

    1997-10-01

    Classification and pose estimation of distorted input objects are considered. The feature space trajectory representation of distorted views of an object is used with a new eigenfeature space. For a distorted input object, the closest trajectory denotes the class of the input and the closest line segment on it denotes its pose. If an input point is too far from a trajectory, it is rejected as clutter. New methods are presented for selecting Fukunaga-Koontz discriminant vectors, for choosing the number of dominant eigenvectors per class, and for determining training and test set compatibility.

  2. Single neuron computation: from dynamical system to feature detector.

    PubMed

    Hong, Sungho; Agüera y Arcas, Blaise; Fairhall, Adrienne L

    2007-12-01

    White noise methods are a powerful tool for characterizing the computation performed by neural systems. These methods allow one to identify the feature or features that a neural system extracts from a complex input and to determine how these features are combined to drive the system's spiking response. These methods have also been applied to characterize the input-output relations of single neurons driven by synaptic inputs, simulated by direct current injection. To interpret the results of white noise analysis of single neurons, we would like to understand how the obtained feature space of a single neuron maps onto the biophysical properties of the membrane, in particular, the dynamics of ion channels. Here, through analysis of a simple dynamical model neuron, we draw explicit connections between the output of a white noise analysis and the underlying dynamical system. We find that under certain assumptions, the form of the relevant features is well defined by the parameters of the dynamical system. Further, we show that under some conditions, the feature space is spanned by the spike-triggered average and its successive order time derivatives.
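
    A minimal, self-contained sketch of the white-noise/spike-triggered-average idea this abstract refers to; the leaky-integrator neuron and all parameter values below are hypothetical stand-ins for the demo, not the model analyzed in the paper.

    ```python
    # Toy illustration: recover a neuron's relevant feature as the
    # spike-triggered average (STA) of a white noise stimulus.
    import numpy as np

    rng = np.random.default_rng(0)

    dt, tau = 0.001, 0.02              # 1 ms steps, 20 ms membrane time constant
    n = 200_000
    stimulus = rng.standard_normal(n)  # white noise drive

    v, thresh = 0.0, 0.6               # hypothetical threshold unit
    spikes = np.zeros(n, dtype=bool)
    for t in range(1, n):
        v += (dt / tau) * (-v) + 0.1 * stimulus[t]  # leaky integration
        if v > thresh:
            spikes[t] = True
            v = 0.0                    # reset after each spike

    # STA: average the 100 ms of stimulus preceding each spike
    window = 100
    spike_times = np.flatnonzero(spikes)
    spike_times = spike_times[spike_times >= window]
    sta = np.mean([stimulus[t - window:t] for t in spike_times], axis=0)
    print(f"{spike_times.size} spikes, STA length {sta.size}")
    ```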

  3. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, because Shannon entropy gives an incomplete description, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of each feature variable and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
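
    As a rough illustration of the pipeline in this abstract, the sketch below computes per-band wavelet packet energy ratios (a stand-in for the paper's WTFER) on synthetic signals and feeds them to a random forest; the signal model and all parameters are invented for the demo.

    ```python
    # Wavelet packet energy-ratio features + random forest classification.
    import numpy as np
    import pywt
    from sklearn.ensemble import RandomForestClassifier

    def wp_energy_ratio(signal, wavelet="db4", level=3):
        """Energy of each level-`level` wavelet packet band / total energy."""
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        energies = np.array([np.sum(node.data ** 2)
                             for node in wp.get_level(level, order="freq")])
        return energies / energies.sum()

    rng = np.random.default_rng(1)
    X, y = [], []
    for fault in range(6):                        # six synthetic "fault classes"
        for _ in range(40):
            t = np.linspace(0, 1, 1024)
            sig = np.sin(2 * np.pi * (50 + 30 * fault) * t)
            sig += 0.5 * rng.standard_normal(t.size)
            X.append(wp_energy_ratio(sig))
            y.append(fault)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print("feature importances:", clf.feature_importances_.round(3))
    ```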

  4. Fast metabolite identification with Input Output Kernel Regression.

    PubMed

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-06-15

    An important problem in metabolomics is the identification of metabolites from tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the space of molecules. We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method decreases the running times of the training and test steps by several orders of magnitude over the preceding methods. Contact: celine.brouard@aalto.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
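
    The two-phase scheme (kernel ridge regression into the output feature space, then a preimage search over candidates) can be sketched in a few lines; the toy "spectra" and "molecules" below are random stand-ins, not real data.

    ```python
    # Input-output kernel regression sketch: regress into the output
    # feature space, then score candidates via the output kernel.
    import numpy as np

    def gauss_kernel(A, B, gamma=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 5))                 # toy "spectra"
    Y = np.tanh(X @ rng.standard_normal((5, 4)))      # toy "molecule" vectors

    Kx = gauss_kernel(X, X)
    lam = 0.1
    # Ridge operator in the output feature space: C = (Kx + lam*I)^-1
    C = np.linalg.solve(Kx + lam * np.eye(len(X)), np.eye(len(X)))

    # Preimage step: score each candidate y by the output-kernel similarity
    # to the prediction, k_y(candidates, Y_train) @ C @ k_x(X_train, x_new)
    x_new = X[:1] + 0.05 * rng.standard_normal((1, 5))  # noisy copy of spectrum 0
    candidates = Y + 0.05 * rng.standard_normal(Y.shape)
    scores = gauss_kernel(candidates, Y) @ C @ gauss_kernel(X, x_new)
    print("best candidate index:", int(np.argmax(scores)))  # likely 0
    ```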

  5. Fast metabolite identification with Input Output Kernel Regression

    PubMed Central

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problem in metabolomics is the identification of metabolites from tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the space of molecules. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method decreases the running times of the training and test steps by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628

  6. Growing a hypercubical output space in a self-organizing feature map.

    PubMed

    Bauer, H U; Villmann, T

    1997-01-01

    Neural maps project data from an input space onto a neuron position in an (often lower dimensional) output space grid in a neighborhood-preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. We here present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by adapting the output space grid during learning. The GSOM restricts the output space structure to a general hypercubical shape, with the overall dimensionality of the grid and its extensions along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply our GSOM algorithm to three examples, two of which involve real-world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.
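
    For reference, a minimal fixed-grid Kohonen SOFM update is sketched below; the GSOM described in the abstract additionally grows and reshapes the output grid during learning, which this sketch omits, and all parameter values are arbitrary.

    ```python
    # Fixed-grid Kohonen self-organizing feature map (SOFM) training loop.
    import numpy as np

    rng = np.random.default_rng(0)
    grid_h, grid_w, dim = 8, 8, 3
    weights = rng.random((grid_h, grid_w, dim))
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)

    data = rng.random((5000, dim))                        # toy input distribution
    for t, x in enumerate(data):
        lr = 0.5 * (0.01 / 0.5) ** (t / len(data))        # decaying learning rate
        sigma = 3.0 * (0.5 / 3.0) ** (t / len(data))      # shrinking neighborhood
        bmu = np.unravel_index(
            np.argmin(((weights - x) ** 2).sum(-1)), (grid_h, grid_w))
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)      # grid distance to BMU
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]     # neighborhood kernel
        weights += lr * h * (x - weights)
    print("trained weight grid:", weights.shape)
    ```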

  7. Multiclass Bayes error estimation by a feature space sampling technique

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with previously reported results obtained by conventional techniques on a 2-class, 4-feature discrimination problem, and with 4-class, 4-feature multispectral scanner Landsat data classified by training and testing on the available data.
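
    The abstract's quantity of interest, the minimum (Bayes) probability of error given only class statistics, can also be estimated by straightforward Monte Carlo integration, as in the hypothetical sketch below; the paper itself uses a combined analytical/numerical integration instead, and the class statistics here are invented.

    ```python
    # Monte Carlo estimate of the Bayes error for an M-class Gaussian
    # problem with equal priors, given only means and covariances.
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    M, N = 4, 4                                   # classes, features
    means = [rng.normal(scale=2.0, size=N) for _ in range(M)]
    covs = [np.eye(N) * (1.0 + 0.5 * k) for k in range(M)]
    priors = np.full(M, 1.0 / M)

    samples_per_class = 50_000
    errors = 0
    for k in range(M):
        x = rng.multivariate_normal(means[k], covs[k], samples_per_class)
        # Bayes rule: assign to the class with maximal prior * likelihood
        post = np.stack([priors[j] * multivariate_normal.pdf(x, means[j], covs[j])
                         for j in range(M)])
        errors += np.sum(post.argmax(axis=0) != k)
    bayes_err = errors / (M * samples_per_class)  # valid since priors are equal
    print(f"estimated Bayes error: {bayes_err:.4f}")
    ```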

  8. Surrogate modelling for the prediction of spatial fields based on simultaneous dimensionality reduction of high-dimensional input/output spaces.

    PubMed

    Crevillén-García, D

    2018-04-01

    Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
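
    A minimal sketch of the overall idea, reducing both input and output fields with PCA and emulating each retained output component with a Gaussian process; the "simulator" is a random toy function, and the component-wise GP is an assumption of this sketch, not necessarily the paper's exact construction.

    ```python
    # PCA-reduced Gaussian process emulation of a high-dimensional
    # field-to-field map.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)
    n_runs, d_in, d_out = 200, 400, 900           # high-dimensional in/out fields
    X = rng.standard_normal((n_runs, d_in))       # e.g. permeability fields
    W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
    Y = np.tanh(X @ W)                            # stand-in "simulator" output

    pca_in, pca_out = PCA(n_components=10), PCA(n_components=10)
    Z_in, Z_out = pca_in.fit_transform(X), pca_out.fit_transform(Y)

    # One GP per retained output principal component
    gps = [GaussianProcessRegressor().fit(Z_in, Z_out[:, j]) for j in range(10)]

    x_new = rng.standard_normal((1, d_in))
    z_pred = np.array([gp.predict(pca_in.transform(x_new))[0] for gp in gps])
    field_pred = pca_out.inverse_transform(z_pred[None, :])   # back to the field
    print("predicted field shape:", field_pred.shape)
    ```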

  9. Clustering of Multi-Temporal Fully Polarimetric L-Band SAR Data for Agricultural Land Cover Mapping

    NASA Astrophysics Data System (ADS)

    Tamiminia, H.; Homayouni, S.; Safari, A.

    2015-12-01

    The unique capabilities of polarimetric synthetic aperture radar (PolSAR) sensors make them an important and efficient tool for natural resources and environmental applications, such as land cover and crop classification. The aim of this paper is to classify multi-temporal full polarimetric SAR data using a kernel-based fuzzy C-means clustering method over an agricultural region. This method starts by transforming the input data into a higher dimensional space using kernel functions and then clusters them in the feature space. The feature space, due to its inherent properties, can take into account the nonlinear and complex nature of polarimetric data. Several SAR polarimetric features were extracted using target decomposition algorithms; features from the Cloude-Pottier, Freeman-Durden and Yamaguchi algorithms were used as inputs for the clustering. This method was applied to multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Canada, during June and July 2012. The results demonstrate the efficiency of this approach with respect to classical methods. In addition, using multi-temporal data in the clustering process helped to investigate the phenological cycle of plants and significantly improved the performance of agricultural land cover mapping.

  10. Oversampling the Minority Class in the Feature Space.

    PubMed

    Perez-Ortiz, Maria; Gutierrez, Pedro Antonio; Tino, Peter; Hervas-Martinez, Cesar

    2016-09-01

    The imbalanced nature of some real-world data is one of the current challenges for machine learning researchers. One common approach oversamples the minority class through convex combination of its patterns. We explore the general idea of synthetic oversampling in the feature space induced by a kernel function (as opposed to input space). If the kernel function matches the underlying problem, the classes will be linearly separable and synthetically generated patterns will lie on the minority class region. Since the feature space is not directly accessible, we use the empirical feature space (EFS) (a Euclidean space isomorphic to the feature space) for oversampling purposes. The proposed method is framed in the context of support vector machines, where the imbalanced data sets can pose a serious hindrance. The idea is investigated in three scenarios: 1) oversampling in the full and reduced-rank EFSs; 2) a kernel learning technique maximizing the data class separation to study the influence of the feature space structure (implicitly defined by the kernel function); and 3) a unified framework for preferential oversampling that spans some of the previous approaches in the literature. We support our investigation with extensive experiments over 50 imbalanced data sets.
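
    The core step, synthetic oversampling by convex combination carried out in the empirical feature space rather than the input space, can be sketched as follows; the RBF kernel and all sizes are arbitrary choices for the demo.

    ```python
    # Oversample a minority class by convex combinations in the empirical
    # feature space (EFS): kernel matrix -> eigen-embedding -> SMOTE-like mixing.
    import numpy as np

    def rbf(A, B, gamma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    rng = np.random.default_rng(0)
    X_min = rng.normal(loc=2.0, size=(20, 2))     # small minority class

    K = rbf(X_min, X_min)
    vals, vecs = np.linalg.eigh(K)
    keep = vals > 1e-10
    # EFS coordinates Phi satisfy Phi @ Phi.T == K (a Euclidean space
    # isomorphic to the kernel-induced feature space on the sample)
    Phi = vecs[:, keep] * np.sqrt(vals[keep])

    n_new = 40
    i = rng.integers(0, len(Phi), n_new)
    j = rng.integers(0, len(Phi), n_new)
    lam = rng.random((n_new, 1))
    Phi_new = lam * Phi[i] + (1 - lam) * Phi[j]   # synthetic minority patterns
    print("synthetic EFS samples:", Phi_new.shape)
    ```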

  11. Position, scale, and rotation invariant holographic associative memory

    NASA Astrophysics Data System (ADS)

    Fielding, Kenneth H.; Rogers, Steven K.; Kabrisky, Matthew; Mills, James P.

    1989-08-01

    This paper describes the development and characterization of a holographic associative memory (HAM) system that is able to recall stored objects whose inputs were changed in position, scale, and rotation. The HAM is based on the single-iteration model described by Owechko et al. (1987); however, the system described here uses a self-pumped BaTiO3 phase conjugate mirror rather than the degenerate four-wave mixing proposed by Owechko and his coworkers. The HAM system can store objects in a position, scale, and rotation invariant feature space. The angularly multiplexed diffuse Fourier transform holograms of the HAM feature space serve as the memory unit; distorted input objects are correlated with the hologram, and the nonlinear phase conjugate mirror reduces cross-correlation noise and provides object discrimination. Applications of the HAM system are presented.

  12. New nonlinear features for inspection, robotics, and face recognition

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit

    1999-10-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data. Other applications of these new feature spaces in robotics and face recognition are also noted.

  13. Characteristic features of determining the labor input and estimated cost of the development and manufacture of equipment

    NASA Technical Reports Server (NTRS)

    Kurmanaliyev, T. I.; Breslavets, A. V.

    1974-01-01

    The difficulties in obtaining exact calculation data for the labor input and estimated cost are noted. The method of calculating the labor cost of the design work using the provisional normative indexes with respect to individual forms of operations is proposed. Values of certain coefficients recommended for use in the practical calculations of the labor input for the development of new scientific equipment for space research are presented.

  14. Machinery running state identification based on discriminant semi-supervised local tangent space alignment for feature fusion and extraction

    NASA Astrophysics Data System (ADS)

    Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua

    2017-04-01

    Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. Aimed at solving this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running state. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract the intrinsic structure information of both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case for a gearbox, and the results confirm the improved accuracy of the running state identification.

  15. A state-based approach to trend recognition and failure prediction for the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Nelson, Kyle S.; Hadden, George D.

    1992-01-01

    A state-based reasoning approach to trend recognition and failure prediction for the Attitude Determination and Control System (ADCS) of the Space Station Freedom (SSF) is described. The problem domain is characterized by features (e.g., trends and impending failures) that develop over a variety of time spans, anywhere from several minutes to several years. Our state-based reasoning approach, coupled with intelligent data screening, allows features to be tracked as they develop in a time-dependent manner. That is, each state machine has the ability to encode a time frame for the feature it detects. As features are detected, they are recorded and can be used as input to other state machines, creating a hierarchical feature recognition scheme. Furthermore, each machine can operate independently of the others, allowing simultaneous tracking of features. State-based reasoning was implemented in the trend recognition and prognostic modules of a prototype Space Station Freedom Maintenance and Diagnostic System (SSFMDS) developed at Honeywell's Systems and Research Center.

  16. Inverse Proportional Relationship Between Switching-Time Length and Fractal-Like Structure for Continuous Tracking Movement

    NASA Astrophysics Data System (ADS)

    Hirakawa, Takehito; Suzuki, Hiroo; Gohara, Kazutoshi; Yamamoto, Yuji

    We investigate the relationship between the switching-time length T and the fractal-like feature that characterizes the behavior of dissipative dynamical systems excited by external temporal inputs for tracking movement. Seven healthy right-handed male participants were asked to continuously track light-emitting diodes that were located on the right and left sides in front of them. These movements were performed under two conditions: when the same input pattern was repeated (the periodic-input condition) and when two different input patterns were switched stochastically (the switching-input condition). The repeated time lengths of input patterns during these conditions were 2.00, 1.00, 0.75, 0.50, 0.35, and 0.25 s. The movements of a lever held between a participant's thumb and index finger were measured by a motion-capture system and were analyzed with respect to position and velocity. The condition in which the same input was repeated revealed that two different stable trajectories existed in a cylindrical state space, while the condition in which the inputs were switched induced transitions between these two trajectories. These two different trajectories were considered as excited attractors. The transitions between the two excited attractors produced eight trajectories; they were then characterized by a fractal-like feature as a third-order sequence effect. Moreover, correlation dimensions, which are typically used to evaluate fractal-like features, calculated from the set on the Poincaré section increased as the switching-time length T decreased. These results suggest that an inverse proportional relationship exists between the switching-time length T and the fractal-like feature of human movement.

  17. Multi-perspective analysis and spatiotemporal mapping of air pollution monitoring data.

    PubMed

    Kolovos, Alexander; Skupin, André; Jerrett, Michael; Christakos, George

    2010-09-01

    Space-time data analysis and assimilation techniques in atmospheric sciences typically consider input from monitoring measurements. The input is often processed in a manner that acknowledges characteristics of the measurements (e.g., underlying patterns, fluctuation features) under conditions of uncertainty; it also leads to the derivation of secondary information that serves study-oriented goals, and provides input to space-time prediction techniques. We present a novel approach that blends a rigorous space-time prediction model (Bayesian maximum entropy, BME) with a cognitively informed visualization of high-dimensional data (spatialization). The combined BME and spatialization approach (BME-S) is used to study monthly averaged NO2 and mean annual SO4 measurements in California over the 15-year period 1988-2002. Using the original scattered measurements of these two pollutants BME generates spatiotemporal predictions on a regular grid across the state. Subsequently, the prediction network undergoes the spatialization transformation into a lower-dimensional geometric representation, aimed at revealing patterns and relationships that exist within the input data. The proposed BME-S provides a powerful spatiotemporal framework to study a variety of air pollution data sources.

  18. Modelling effects on grid cells of sensory input during self‐motion

    PubMed Central

    Raudies, Florian; Hinman, James R.

    2016-01-01

    The neural coding of spatial location for memory function may involve grid cells in the medial entorhinal cortex, but the mechanism of generating the spatial responses of grid cells remains unclear. This review describes some current theories and experimental data concerning the role of sensory input in generating the regular spatial firing patterns of grid cells, and changes in grid cell firing fields with movement of environmental barriers. As described here, the influence of visual features on spatial firing could involve either computations of self-motion based on optic flow, or computations of absolute position based on the angle and distance of static visual cues. Due to anatomical selectivity of retinotopic processing, the sensory features on the walls of an environment may have a stronger effect on ventral grid cells that have wider spaced firing fields, whereas the sensory features on the ground plane may influence the firing of dorsal grid cells with narrower spacing between firing fields. These sensory influences could contribute to the potential functional role of grid cells in guiding goal-directed navigation. PMID:27094096

  19. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    To meet the forecasting targets for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by the shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, color features in the HSI color space, visual features based on the gray-level co-occurrence matrix, and shape characteristics based on geometric theory are extracted from flotation froth images as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210

  20. GrouseFlocks: steerable exploration of graph hierarchy space.

    PubMed

    Archambault, Daniel; Munzner, Tamara; Auber, David

    2008-01-01

    Several previous systems allow users to interactively explore a large input graph through cuts of a superimposed hierarchy. This hierarchy is often created using clustering algorithms or topological features present in the graph. However, many graphs have domain-specific attributes associated with the nodes and edges, which could be used to create many possible hierarchies providing unique views of the input graph. GrouseFlocks is a system for the exploration of this graph hierarchy space. By allowing users to see several different possible hierarchies on the same graph, the system helps users investigate graph hierarchy space instead of a single fixed hierarchy. GrouseFlocks provides a simple set of operations so that users can create and modify their graph hierarchies based on selections. These selections can be made manually or based on patterns in the attribute data provided with the graph. It provides feedback to the user within seconds, allowing interactive exploration of this space.

  1. A soft computing based approach using modified selection strategy for feature reduction of medical systems.

    PubMed

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large amounts of memory. Most attribute selection algorithms suffer from input dimension limits and information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking onto local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
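
    A toy version of one ingredient named here, roulette wheel selection over candidate feature subsets encoded as bit masks, is sketched below; the fitness function is a made-up proxy, not the paper's criterion.

    ```python
    # Roulette-wheel (fitness-proportional) selection over feature masks.
    import numpy as np

    rng = np.random.default_rng(0)
    n_features, pop_size = 12, 8              # e.g. 12 urological input attributes
    population = rng.integers(0, 2, size=(pop_size, n_features))
    target = rng.integers(0, 2, n_features)   # pretend-optimal reduct

    def fitness(mask):
        # Hypothetical score: agreement with the target mask, plus a
        # bonus for smaller subsets (shorter reducts are preferred).
        return (mask == target).mean() + 0.3 * (1.0 - mask.mean())

    scores = np.array([fitness(ind) for ind in population])
    probs = scores / scores.sum()             # selection probability proportional to fitness
    parents = population[rng.choice(pop_size, size=pop_size, p=probs)]
    print("selected parents:\n", parents)
    ```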

  2. A Soft Computing Based Approach Using Modified Selection Strategy for Feature Reduction of Medical Systems

    PubMed Central

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large amounts of memory. Most attribute selection algorithms suffer from input dimension limits and information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking onto local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data. PMID:23573172

  3. Application of the wavelet transform for speech processing

    NASA Technical Reports Server (NTRS)

    Maes, Stephane

    1994-01-01

    Speaker identification and word spotting will shortly play a key role in space applications. An approach based on the wavelet transform is presented that, in the context of the 'modulation model,' enables extraction of speech features which are used as input for the classification process.

  4. Using learning automata to determine proper subset size in high-dimensional spaces

    NASA Astrophysics Data System (ADS)

    Seyyedi, Seyyed Hossein; Minaei-Bidgoli, Behrouz

    2017-03-01

    In this paper, we offer a new method called FSLA (Finding the best candidate Subset using Learning Automata), which combines the filter and wrapper approaches for feature selection in high-dimensional spaces. Considering the difficulties of dimension reduction in high-dimensional spaces, FSLA's multi-objective functionality is to determine, in an efficient manner, a feature subset that leads to an appropriate tradeoff between the learning algorithm's accuracy and efficiency. First, using an existing weighting function, the feature list is sorted and selected subsets of the list of different sizes are considered. Then, a learning automaton verifies the performance of each subset when it is used as the input space of the learning algorithm and estimates its fitness based on the algorithm's accuracy and the subset size, which determines the algorithm's efficiency. Finally, FSLA introduces the fittest subset as the best choice. We tested FSLA in the framework of text classification. The results confirm its promising performance in attaining the identified goal.

  5. On-Line Pattern Analysis and Recognition System. OLPARS VI. Software Reference Manual,

    DTIC Science & Technology

    1982-06-18

    Keywords: discriminant analysis, data transformation, feature extraction, feature evaluation, cluster analysis, classification, computer software. Abstract (fragment): ... (1) change the cluster/scatter cut-off value, (2) change the one-space bin factor, (3) change from long prompts to short prompts or vice versa, (4) change the ... value, a cluster plot is displayed, otherwise a scatter plot is shown. If option 1 is selected, the program requests that a new value be input.

  6. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

    We propose a texture image segmentation algorithm for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture image features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network consist of two parts: first, to reduce the effects of image noise on the features, a set of sigmoid functions is chosen according to feature type; second, to enhance the contrast of the features, fuzzy mapping functions are adopted. The number of clusters in the output layer can be increased by an auto-growing mechanism whenever a new pattern appears. Experimental results and original or segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.

  7. Deep learning of support vector machines with class probability output networks.

    PubMed

    Kim, Sangwook; Yu, Zhibin; Kil, Rhee Man; Lee, Minho

    2015-04-01

    Deep learning methods endeavor to learn features automatically at multiple levels and allow systems to learn complex functions mapping from the input space to the output space for the given data. The ability to learn powerful features automatically is increasingly important as the volume of data and range of applications of machine learning methods continues to grow. This paper proposes a new deep architecture that uses support vector machines (SVMs) with class probability output networks (CPONs) to provide better generalization power for pattern classification problems. As a result, deep features are extracted without additional feature engineering steps, using multiple layers of the SVM classifiers with CPONs. The proposed structure closely approaches the ideal Bayes classifier as the number of layers increases. Using a simulation of classification problems, the effectiveness of the proposed method is demonstrated. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. QUANTIZING TUBE

    DOEpatents

    Jensen, A.S.; Gray, G.W.

    1958-07-01

    Beam deflection tubes are described for use in switching or pulse amplitude analysis. The salient features of the invention reside in the target arrangement, whereby outputs are obtained from a plurality of collector electrodes, each corresponding with a non-overlapping range of amplitudes of the input signal. The tube is provided with means for deflecting the electron beam along a line in accordance with the amplitude of an input signal. The target structure consists of a first dynode positioned in the path of the beam with slots spaced along the deflection line, and a second dynode positioned behind the first dynode. When the beam strikes the solid portions along the length of the first dynode, the excited electrons are multiplied and collected in separate collector electrodes spaced along the beam line. Similarly, the electrons excited when the beam strikes the second dynode are multiplied and collected in separate electrodes spaced along the length of the second dynode.

  9. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    NASA Astrophysics Data System (ADS)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model; thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.

  10. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    PubMed Central

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model; thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods. PMID:28120883

  11. Visual classification of medical data using MLP mapping.

    PubMed

    Cağatay Güler, E; Sankur, B; Kahya, Y P; Raudys, S

    1998-05-01

    In this work we discuss the design of a novel non-linear mapping method for visual classification based on multilayer perceptrons (MLP) and assigned class target values. In training the perceptron, one or more target output values for each class in a 2-dimensional space are used. In other words, class membership information is interpreted visually as closeness to target values in a 2D feature space. This mapping is obtained by training the multilayer perceptron (MLP) using class membership information, input data and judiciously chosen target values. Weights are estimated in such a way that each training feature of the corresponding class is forced to be mapped onto the corresponding 2-dimensional target value.

  12. Program document for Energy Systems Optimization Program 2 (ESOP2). Volume 1: Engineering manual

    NASA Technical Reports Server (NTRS)

    Hamil, R. G.; Ferden, S. L.

    1977-01-01

    The Energy Systems Optimization Program, which is used to provide analyses of Modular Integrated Utility Systems (MIUS), is discussed. Modifications to the input format to allow modular inputs in specified blocks of data are described. An optimization feature which enables the program to search automatically for the minimum value of one parameter while varying the value of other parameters is reported. New program option flags for prime mover analyses and solar energy for space heating and domestic hot water are also covered.

  13. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Predicting the amount of water that will enter reservoirs in the following month is of vital importance, especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which has then been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory component and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, with the SVR-WT combination resulting in the highest coefficient of determination and the lowest mean absolute error.
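
    Of the three routes for building the SVR input matrix, the chaotic (phase-space) one is easiest to sketch: embed the flow series with time-delayed copies and regress the next value, as below. The synthetic seasonal series and all parameters are invented for the demo.

    ```python
    # SVR on a time-delay (phase-space) embedding of a monthly flow series.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    t = np.arange(600)
    flow = 50 + 20 * np.sin(2 * np.pi * t / 12) + 5 * rng.standard_normal(t.size)

    m, tau = 4, 1                                 # embedding dimension and delay
    X = np.column_stack([flow[i:len(flow) - (m - i) * tau]
                         for i in range(m)])      # lagged copies as predictors
    y = flow[m * tau:]                            # next-month flow

    model = SVR(kernel="rbf", C=10.0).fit(X[:-36], y[:-36])   # hold out 3 years
    pred = model.predict(X[-36:])
    print("test MAE:", np.abs(pred - y[-36:]).mean().round(2))
    ```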

  14. Unconstrained handwritten numeral recognition based on radial basis competitive and cooperative networks with spatio-temporal feature representation.

    PubMed

    Lee, S; Pan, J J

    1996-01-01

    This paper presents a new approach to representation and recognition of handwritten numerals. The approach first transforms a two-dimensional (2-D) spatial representation of a numeral into a three-dimensional (3-D) spatio-temporal representation by identifying the tracing sequence based on a set of heuristic rules acting as transformation operators. A multiresolution critical-point segmentation method is then proposed to extract local feature points, at varying degrees of scale and coarseness. A new neural network architecture, referred to as radial-basis competitive and cooperative network (RCCN), is presented especially for handwritten numeral recognition. RCCN is a globally competitive and locally cooperative network with the capability of self-organizing hidden units to progressively achieve desired network performance, and functions as a universal approximator of arbitrary input-output mappings. Three types of RCCNs are explored: input-space RCCN (IRCCN), output-space RCCN (ORCCN), and bidirectional RCCN (BRCCN). Experiments against handwritten zip code numerals acquired by the U.S. Postal Service indicated that the proposed method is robust in terms of variations, deformations, transformations, and corruption, achieving about 97% recognition rate.

  15. Odor Impression Prediction from Mass Spectra.

    PubMed

    Nozaki, Yuji; Nakamoto, Takamichi

    2016-01-01

    The sense of smell arises from the perception of odors from chemicals. However, the relationship between the impression of odor and the numerous physicochemical parameters has yet to be understood owing to its complexity. As such, there is no established general method for predicting the impression of odor of a chemical only from its physicochemical properties. In this study, we designed a novel predictive model based on an artificial neural network with a deep structure for predicting odor impression utilizing the mass spectra of chemicals, and we conducted a series of computational analyses to evaluate its performance. Feature vectors extracted from the original high-dimensional space using two autoencoders equipped with both input and output layers in the model are used to build a mapping function from the feature space of mass spectra to the feature space of sensory data. The results of predictions obtained by the proposed new method have notable accuracy (R≅0.76) in comparison with a conventional method (R≅0.61).

  16. Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2014-07-01

    Feature selection is a very important aspect of machine learning. It entails the search for an optimal subset from a very large data set with a high dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as part of the filter method. As for the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNNs model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.

  17. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces. PMID:20552013

  18. Cross-entropy embedding of high-dimensional data using the neural gas model.

    PubMed

    Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi

    2005-01-01

    A cross-entropy approach to mapping high-dimensional data into a low-dimensional space embedding is presented. The method allows one to simultaneously project the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).

  19. Tasks and premises in quantum state determination

    NASA Astrophysics Data System (ADS)

    Carmeli, Claudio; Heinosaari, Teiko; Schultz, Jussi; Toigo, Alessandro

    2014-02-01

    The purpose of quantum tomography is to determine an unknown quantum state from measurement outcome statistics. There are two obvious ways to generalize this setting. First, our task need not be the determination of every possible input state but only some input states, for instance pure states. Second, we may have some prior information, or premise, which guarantees that the input state belongs to some subset of states, for instance the set of states with rank less than half the dimension of the Hilbert space. We investigate state determination under these two supplemental features, concentrating on the cases where the task and the premise are statements about the rank of the unknown state. We characterize the structure of quantum observables (positive operator valued measures) that are capable of fulfilling these types of determination tasks. After the general treatment we focus on the class of covariant phase space observables, thus providing physically relevant examples of observables both capable and incapable of performing these tasks. In this context, the effect of noise is discussed.

  20. AUTOGEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2003-05-29

    AUTOGEN computes collision-free sequences of robot motion instructions to permit traversal of three-dimensional space curves. Order and direction of curve traversal and orientation of the end effector are constrained by a set of manufacturing rules. Input can be provided as a collection of solid models or in terms of wireframe objects and structural cross-section definitions. Entity juxtaposition can be inferred, with appropriate structural features automatically provided. Process control is asserted as a function of position and orientation along each space curve, and is currently implemented for welding processes.

  1. The design of free structure granular mappings: the use of the principle of justifiable granularity.

    PubMed

    Pedrycz, Witold; Al-Hmouz, Rami; Morfeq, Ali; Balamash, Abdullah

    2013-12-01

    The study introduces a concept of mappings realized in the presence of information granules and offers a design framework supporting the formation of such mappings. Information granules are conceptually meaningful entities formed on the basis of a large number of experimental input–output numeric data available for the construction of the model. We develop a conceptually and algorithmically sound way of forming information granules. Considering the directional nature of the mapping to be formed, this directionality aspect needs to be taken into account when developing information granules. The property of directionality implies that while the information granules in the input space could be constructed with a great deal of flexibility, the information granules formed in the output space have to inherently relate to those built in the input space. The input space is granulated by running a clustering algorithm; for illustrative purposes, the focus here is on fuzzy clustering realized with the aid of the fuzzy C-means algorithm. The information granules in the output space are constructed with the aid of the principle of justifiable granularity (one of the underlying fundamental conceptual pursuits of Granular Computing). The construct exhibits two important features. First, the constructed information granules are formed in the presence of information granules already constructed in the input space (this realization is reflective of the direction of the mapping from the input to the output space). Second, the principle of justifiable granularity does not confine the realization of information granules to a single formalism such as fuzzy sets, but helps form granules expressed in any required formalism of information granulation. The quality of the granular mapping (viz. the mapping realized for the information granules formed in the input and output spaces) is expressed in terms of the coverage criterion, articulating how well the experimental data are “covered” by the information granules produced by the granular mapping for any input experimental data. Some parametric studies are reported, quantifying the performance of the granular mapping (expressed in terms of the coverage and specificity criteria) versus the values of certain parameters utilized in the construction of output information granules through the principle of justifiable granularity. The plots of the coverage–specificity dependency help determine a knee point and reach a sound compromise between these two conflicting requirements imposed on the quality of the granular mapping. Furthermore, the quality of the mapping is quantified with regard to the number of information granules (implying a certain granularity of the mapping). A series of experiments is reported as well.

  2. Classification and pose estimation of objects using nonlinear features

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.

  3. Space shuttle main engine fault detection using neural networks

    NASA Technical Reports Server (NTRS)

    Bishop, Thomas; Greenwood, Dan; Shew, Kenneth; Stevenson, Fareed

    1991-01-01

    A method for on-line Space Shuttle Main Engine (SSME) anomaly detection and fault typing using a feedback neural network is described. The method involves the computation of features representing time-variance of SSME sensor parameters, using historical test case data. The network is trained, using backpropagation, to recognize a set of fault cases. The network is then able to diagnose new fault cases correctly. An essential element of the training technique is the inclusion of randomly generated data along with the real data, in order to span the entire input space of potential non-nominal data.

  4. Deep neural mapping support vector machines.

    PubMed

    Li, Yujian; Zhang, Ting

    2017-09-01

    The choice of kernel has an important effect on the performance of a support vector machine (SVM). The effect can be reduced by NEUROSVM, an architecture using a multilayer perceptron for feature extraction and an SVM for classification. In binary classification, a general linear-kernel NEUROSVM can be theoretically simplified as an input layer, many hidden layers, and an SVM output layer. As a feature extractor, the sub-network composed of the input and hidden layers is first trained, together with a virtual ordinary output layer, by backpropagation; the output of its last hidden layer is then taken as input to the SVM classifier for further separate training. By taking the sub-network as a kernel mapping from the original input space into a feature space, we present a novel model, called the deep neural mapping support vector machine (DNMSVM), from the viewpoint of deep learning. This model is also a new and general kernel learning method, where the kernel mapping is an explicit function expressed as a sub-network, in contrast to the implicit function induced by a traditional kernel function. Moreover, we exploit a two-stage procedure of contrastive divergence learning and gradient descent for DNMSVM to jointly train an adaptive kernel mapping instead of a kernel function, without requiring kernel tricks. Taking the sub-network and the SVM classifier as a whole, the joint training of DNMSVM is done by using gradient descent to optimize the objective function, with the sub-network pre-trained layer-wise via contrastive divergence learning of restricted Boltzmann machines. Compared with the separate training of NEUROSVM, this joint training gives DNMSVM advantages over NEUROSVM. Experimental results show that DNMSVM can outperform NEUROSVM and RBFSVM (i.e., SVM with a radial basis function kernel), demonstrating its effectiveness. Copyright © 2017 Elsevier Ltd. All rights reserved.
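
    The two-stage NEUROSVM-style baseline that the paper builds on, training an MLP, reusing its hidden layers as an explicit feature mapping, and fitting an SVM on top, can be sketched with scikit-learn (toy data; the joint training and RBM pre-training of DNMSVM are not shown).

    ```python
    # MLP hidden layers as an explicit feature mapping, SVM on top.
    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

    mlp = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                        random_state=0).fit(X, y)

    def hidden_features(X):
        """Forward pass through the trained hidden layers (ReLU by default)."""
        H = X
        for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
            H = np.maximum(H @ W + b, 0.0)
        return H

    # Fit a linear SVM on the explicit feature-space representation
    svm = SVC(kernel="linear").fit(hidden_features(X), y)
    print("train accuracy:", svm.score(hidden_features(X), y))
    ```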

  5. Machine Learning Classification of Heterogeneous Fields to Estimate Physical Responses

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Akhriev, A.; Alzate, C.; Zhuk, S.

    2017-12-01

    The promise of machine learning to enhance physics-based simulation is examined here using the transient pressure response to a pumping well in a heterogeneous aquifer. 10,000 random fields of log10 hydraulic conductivity (K) are created and conditioned on a single K measurement at the pumping well. Each K-field is used as input to a forward simulation of drawdown (pressure decline). The differential equations governing groundwater flow to the well serve as a non-linear transform of the input K-field to an output drawdown field. The results are stored and the data set is split into training and testing sets for classification. A Euclidean distance measure between any two fields is calculated and the resulting distances between all pairs of fields define a similarity matrix. Similarity matrices are calculated for both input K-fields and the resulting drawdown fields at the end of the simulation. The similarity matrices are then used as input to spectral clustering to determine groupings of similar input and output fields. Additionally, the similarity matrix is used as input to multi-dimensional scaling to visualize the clustering of fields in lower dimensional spaces. We examine the ability to cluster both input K-fields and output drawdown fields separately with the goal of identifying K-fields that create similar drawdowns and, conversely, given a set of simulated drawdown fields, identify meaningful clusters of input K-fields. Feature extraction based on statistical parametric mapping provides insight into what features of the fields drive the classification results. The final goal is to successfully classify input K-fields into the correct output class, and also, given an output drawdown field, be able to infer the correct class of input field that created it.
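    A compact sketch of the clustering pipeline described above, using scikit-learn: pairwise Euclidean distances are converted to a similarity matrix for spectral clustering, and the same distances feed multidimensional scaling for visualization. The random fields below merely stand in for the K and drawdown fields.

```python
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.cluster import SpectralClustering
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
fields = rng.normal(size=(100, 40 * 40))       # stand-ins for flattened K or drawdown fields

D = pairwise_distances(fields)                 # Euclidean distance between every pair of fields
S = np.exp(-D**2 / (2 * np.median(D)**2))      # convert distances to a similarity matrix

labels = SpectralClustering(n_clusters=4, affinity='precomputed',
                            random_state=0).fit_predict(S)

coords = MDS(n_components=2, dissimilarity='precomputed',
             random_state=0).fit_transform(D)  # 2D embedding for visualizing the clusters
```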

  6. Super-resolution method for face recognition using nonlinear mappings on coherent features.

    PubMed

    Huang, Hua; He, Huiting

    2011-01-01

    The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates with nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish the coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can efficiently and accurately compute super-resolved coherent features corresponding to an input LR image according to the trained RBF model. Face identity is then obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for a single LR image in terms of both recognition rate and robustness to facial variations of pose and expression.
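    A rough sketch of the coherent-feature pipeline, assuming stand-in PCA features; scikit-learn's CCA establishes the coherent subspaces, and kernel ridge regression with an RBF kernel stands in for the paper's RBF-network mapping.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
pca_hr = rng.normal(size=(300, 50))                          # stand-in PCA features of HR faces
pca_lr = pca_hr[:, :20] + 0.1 * rng.normal(size=(300, 20))   # correlated LR features

# Coherent subspaces: project HR and LR features where they are maximally correlated.
cca = CCA(n_components=10).fit(pca_lr, pca_hr)
coh_lr, coh_hr = cca.transform(pca_lr, pca_hr)

# Nonlinear RBF mapping from LR coherent features to HR coherent features.
rbf = KernelRidge(kernel='rbf', gamma=0.1).fit(coh_lr, coh_hr)

probe_lr = cca.transform(pca_lr[:1])           # a new LR input, CCA-projected
super_resolved = rbf.predict(probe_lr)         # super-resolved coherent features for NN matching
```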

  7. Stargate GTM: Bridging Descriptor and Activity Spaces.

    PubMed

    Gaspar, Héléna A; Baskin, Igor I; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre

    2015-11-23

    Predicting the activity profile of a molecule or discovering structures possessing a specific activity profile are two important goals in chemoinformatics, which could be achieved by bridging activity and molecular descriptor spaces. In this paper, we introduce the "Stargate" version of the Generative Topographic Mapping approach (S-GTM) in which two different multidimensional spaces (e.g., structural descriptor space and activity space) are linked through a common 2D latent space. In the S-GTM algorithm, the manifolds are trained simultaneously in two initial spaces using the probabilities in the 2D latent space calculated as a weighted geometric mean of probability distributions in both spaces. S-GTM has the following interesting features: (1) activities are involved during the training procedure; therefore, the method is supervised, unlike conventional GTM; (2) using molecular descriptors of a given compound as input, the model predicts a whole activity profile, and (3) using an activity profile as input, areas populated by relevant chemical structures can be detected. To assess the performance of S-GTM prediction models, a descriptor space (ISIDA descriptors) of a set of 1325 GPCR ligands was related to a B-dimensional (B = 1 or 8) activity space corresponding to pKi values for eight different targets. S-GTM outperforms conventional GTM for individual activities and performs similarly to the Lasso multitask learning algorithm, although it is still slightly less accurate than the Random Forest method.
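    The weighted geometric mean used to couple the two spaces can be sketched in a few lines (a simplified illustration of the combination rule only; the full S-GTM training loop is not shown).

```python
import numpy as np

def joint_responsibilities(p_desc, p_act, alpha=0.5):
    """Combine per-node probabilities from descriptor and activity spaces
    as a weighted geometric mean, then renormalize over latent nodes."""
    p = p_desc**alpha * p_act**(1.0 - alpha)
    return p / p.sum(axis=1, keepdims=True)

p_desc = np.random.dirichlet(np.ones(25), size=4)   # 4 molecules, 25 latent-grid nodes
p_act = np.random.dirichlet(np.ones(25), size=4)
print(joint_responsibilities(p_desc, p_act).sum(axis=1))  # each row sums to 1
```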

  8. Design of double fuzzy clustering-driven context neural networks.

    PubMed

    Kim, Eun-Hu; Oh, Sung-Kwun; Pedrycz, Witold

    2018-08-01

    In this study, we introduce a novel category of double fuzzy clustering-driven context neural networks (DFCCNNs). The study focuses on the development of advanced design methodologies for redesigning the structure of conventional fuzzy clustering-based neural networks. Conventional fuzzy clustering-based neural networks typically focus on dividing the input space into several local spaces (implied by clusters). In contrast, the proposed DFCCNNs take into account two distinct kinds of local spaces, called cluster and context spaces. Cluster space refers to a local space positioned in the input space, whereas context space concerns a local space formed in the output space. By partitioning the output space into several local spaces, each context space is used as the desired (target) local output to construct local models. To accomplish this, the proposed network includes a new context layer for reasoning about context space in the output space. In this sense, Fuzzy C-Means (FCM) clustering is used to form local spaces in both the input and output spaces: the first instance forms clusters and trains the weights positioned between the input and hidden layers, whereas the second is applied to the output space to form context spaces. The key features of the proposed DFCCNNs can be enumerated as follows: (i) the parameters between the input layer and hidden layer are built through FCM clustering. The connections (weights) are specified as constant terms, being in fact the centers of the clusters. The membership functions (represented through the partition matrix) produced by the FCM are used as activation functions located at the hidden layer of "conventional" neural networks. (ii) Following the hidden layer, a context layer is formed to approximate the context space of the output variable, and each node in the context layer represents an individual local model. The outputs of the context layer are specified as a combination of linear-function weights and the outputs of the hidden layer. The weights are updated using the least square estimation (LSE)-based method. (iii) At the output layer, the outputs of the context layer are decoded to produce the corresponding numeric output. Here the weighted average is used, and the weights are also adjusted with the use of the LSE scheme. From the viewpoint of performance improvement, the proposed design methodologies are discussed and experimented with using benchmark machine learning datasets. The experiments show that the generalization abilities of the proposed DFCCNNs are better than those of the conventional FCNNs reported in the literature. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Co-Clustering by Bipartite Spectral Graph Partitioning for Out-of-Tutor Prediction

    ERIC Educational Resources Information Center

    Trivedi, Shubhendu; Pardos, Zachary A.; Sarkozy, Gabor N.; Heffernan, Neil T.

    2012-01-01

    Learning a more distributed representation of the input feature space is a powerful method to boost the performance of a given predictor. Often this is accomplished by partitioning the data into homogeneous groups by clustering, so that separate models can be trained on each cluster. Intuitively, each such predictor is a better representative of…
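    A minimal cluster-then-predict sketch along the lines described above, with synthetic data and scikit-learn models standing in for the tutor-data predictors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=600) > 0).astype(int)

# Partition the data into homogeneous groups, then train one model per cluster.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {c: LogisticRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(3)}

# Route each new sample to the model of its nearest cluster.
X_new = rng.normal(size=(5, 10))
preds = [models[c].predict(x[None, :])[0] for c, x in zip(km.predict(X_new), X_new)]
```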

  10. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so loses little information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, meaning that the dot product of two vectors in the original feature space equals that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and needs less storage space, especially during testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; and (3) this paper adopts Gauss elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.
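    A hedged sketch of the reduction idea: an empirical kernel matrix is computed, a set of (nearly) independent columns is extracted, and the data are projected onto the resulting orthonormal subspace. Pivoted QR is used here in place of the paper's Gauss elimination step; the data and parameters are illustrative.

```python
import numpy as np
from scipy.linalg import qr
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))

K = rbf_kernel(X, X, gamma=0.05)         # empirical kernel matrix

# Select a small set of (nearly) independent columns; pivoted QR stands in
# for the paper's Gauss-elimination step.
Q, R, piv = qr(K, pivoting=True)
r = np.sum(np.abs(np.diag(R)) > 1e-6 * abs(R[0, 0]))
basis = Q[:, :r]                         # orthonormal basis of the reduced subspace

Z = basis.T @ K                          # reduced empirical kernel features (r x n)
print(K.shape, '->', Z.shape)
```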

  11. High-resolution Self-Organizing Maps for advanced visualization and dimension reduction.

    PubMed

    Saraswati, Ayu; Nguyen, Van Tuc; Hagenbuchner, Markus; Tsoi, Ah Chung

    2018-05-04

    Kohonen's Self-Organizing feature Map (SOM) provides an effective way to project high-dimensional input features onto a low-dimensional display space while preserving the topological relationships among the input features. Recent advances in algorithms that take advantage of modern computing hardware have introduced the concept of high-resolution SOMs (HRSOMs). This paper investigates the capabilities and applicability of the HRSOM as a visualization tool for cluster analysis and its suitability to serve as a pre-processor in ensemble learning models. The evaluation is conducted on a number of established benchmarks and real-world learning problems, namely, the policeman benchmark, two web spam detection problems, a network intrusion detection problem, and a malware detection problem. It is found that the visualizations resulting from an HRSOM provide new insights into these learning problems. It is furthermore shown empirically that broad benefits can be expected from the use of HRSOMs in both clustering and classification problems. Copyright © 2018 Elsevier Ltd. All rights reserved.
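    For reference, a minimal (low-resolution) SOM can be written in a few lines of NumPy; an HRSOM differs mainly in grid size and in hardware-conscious implementation, which this sketch does not attempt.

```python
import numpy as np

def train_som(X, grid=(20, 20), iters=5000, lr0=0.5, sigma0=5.0, seed=0):
    """Minimal rectangular SOM: the weight grid is pulled toward inputs with a
    Gaussian neighborhood whose width and learning rate shrink over time."""
    rng = np.random.default_rng(seed)
    W = rng.random((grid[0], grid[1], X.shape[1]))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(iters):
        x = X[rng.integers(len(X))]
        d = np.linalg.norm(W - x, axis=2)
        by, bx = np.unravel_index(np.argmin(d), grid)        # best-matching unit
        frac = t / iters
        sigma = sigma0 * (1 - frac) + 1e-3
        h = np.exp(-((gy - by)**2 + (gx - bx)**2) / (2 * sigma**2))
        W += (lr0 * (1 - frac)) * h[..., None] * (x - W)
    return W

X = np.random.rand(500, 3)     # e.g., RGB colors as 3D inputs
W = train_som(X)               # W[i, j] is the prototype at grid cell (i, j)
```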

  12. Particle systems for adaptive, isotropic meshing of CAD models

    PubMed Central

    Levine, Joshua A.; Whitaker, Ross T.

    2012-01-01

    We present a particle-based approach for generating adaptive triangular surface and tetrahedral volume meshes from computer-aided design models. Input shapes are treated as a collection of smooth, parametric surface patches that can meet non-smoothly on boundaries. Our approach uses a hierarchical sampling scheme that places particles on features in order of increasing dimensionality. These particles reach a good distribution by minimizing an energy computed in 3D world space, with movements occurring in the parametric space of each surface patch. Rather than using a pre-computed measure of feature size, our system automatically adapts to both curvature as well as a notion of topological separation. It also enforces a measure of smoothness on these constraints to construct a sizing field that acts as a proxy to piecewise-smooth feature size. We evaluate our technique with comparisons against other popular triangular meshing techniques for this domain. PMID:23162181

  13. Radiometric responsivity determination for Feature Identification and Location Experiment (FILE) flown on space shuttle mission

    NASA Technical Reports Server (NTRS)

    Wilson, R. G.; Davis, R. E.; Wright, R. E., Jr.; Sivertson, W. E., Jr.; Bullock, G. F.

    1986-01-01

    A procedure was developed to obtain the radiometric (radiance) responsivity of the Feature Identification and Location Experiment (FILE) instrument in preparation for its flight on Space Shuttle Mission 41-G (November 1984). This instrument was designed to obtain Earth feature radiance data in spectral bands centered at 0.65 and 0.85 microns, along with corroborative color and color-infrared photographs, and to collect data to evaluate a technique for in-orbit autonomous classification of the Earth's primary features. The calibration process incorporated both solar radiance measurements and radiative transfer model predictions in estimating expected radiance inputs to the FILE on the Shuttle. The measured data are compared with the model predictions, and the differences observed are discussed. Application of the calibration procedure to the FILE over an 18-month period indicated a constant responsivity characteristic. This report documents the calibration procedure and the associated radiometric measurements and predictions that were part of the instrument preparation for flight.

  14. Fuzzy logic algorithm for quantitative tissue characterization of diffuse liver diseases from ultrasound images.

    PubMed

    Badawi, A M; Derbala, A S; Youssef, A M

    1999-08-01

    Computerized ultrasound tissue characterization has become an objective means for the diagnosis of liver diseases. It is difficult to differentiate diffuse liver diseases, namely cirrhotic and fatty livers, by visual inspection of ultrasound images. The visual criteria for differentiating diffuse diseases are rather confusing and highly dependent upon the sonographer's experience. This often introduces bias into the diagnostic procedure and limits its objectivity and reproducibility. Computerized tissue characterization, which quantitatively assists the sonographer in accurate differentiation and minimizes the degree of risk, is thus justified. Fuzzy logic has emerged as one of the most active areas in classification. In this paper, we present an approach that employs fuzzy reasoning techniques to automatically differentiate diffuse liver diseases using numerical quantitative features measured from ultrasound images. Fuzzy rules were generated from over 140 cases consisting of normal, fatty, and cirrhotic livers. The input to the fuzzy system is an eight-dimensional vector of feature values: the mean gray level (MGL), the percentile 10%, the contrast (CON), the angular second moment (ASM), the entropy (ENT), the correlation (COR), the attenuation (ATTEN), and the speckle separation. The output of the fuzzy system is one of three categories: cirrhosis, fatty, or normal. The steps for differentiating the pathologies are data acquisition and feature extraction, followed by dividing the input spaces of the measured quantitative data into fuzzy sets. Based on expert knowledge, the fuzzy rules are generated and applied using fuzzy inference procedures to determine the pathology. Different membership functions are developed for the input spaces. This approach has resulted in very good sensitivity and specificity for classifying diffuse liver pathologies. The classification technique can be used in the diagnostic process, together with history information and laboratory, clinical, and pathological examinations.
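    A toy sketch of the fuzzy-inference scheme, using triangular membership functions and min-AND rules; the two features, thresholds, and rules below are invented for illustration and are not the paper's clinically derived rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical membership functions for two of the eight features.
def mgl_high(v):   return tri(v, 100, 140, 180)
def atten_high(v): return tri(v, 0.6, 1.0, 1.4)
def mgl_low(v):    return tri(v, 20, 60, 100)
def atten_low(v):  return tri(v, 0.0, 0.3, 0.6)

def classify(mgl, atten):
    """Min for rule AND, argmax over rule strengths (illustrative rules only)."""
    strength = {
        'fatty':     min(mgl_high(mgl), atten_high(atten)),
        'cirrhosis': min(mgl_high(mgl), atten_low(atten)),
        'normal':    min(mgl_low(mgl), atten_low(atten)),
    }
    return max(strength, key=strength.get), strength

print(classify(mgl=150, atten=1.1))
```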

  15. Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features

    NASA Astrophysics Data System (ADS)

    Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios

    2018-04-01

    We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm operating in a reproducing kernel Hilbert space (RKHS) has been the need to update a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way for standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS has been presented. Conditions for asymptotic convergence and boundedness of the networkwise regret are also provided. The simulated tests illustrate the performance of the proposed scheme.
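    The fixed-size approximation at the heart of the scheme can be sketched with standard random Fourier features for the Gaussian kernel (a generic construction, not the paper's full diffusion protocol).

```python
import numpy as np

def rff_map(X, D=200, gamma=1.0, seed=0):
    """Fixed-size random Fourier feature map approximating the Gaussian
    kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.randn(5, 3)
Z = rff_map(X)
K_approx = Z @ Z.T                                   # approximates the Gaussian kernel matrix
K_exact = np.exp(-1.0 * ((X[:, None] - X[None]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())
```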

  16. Design of an occulter testbed at flight Fresnel numbers

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Kasdin, N. Jeremy; Kim, Yunjong; Vanderbei, Robert J.

    2015-01-01

    An external occulter is a spacecraft flown along the line of sight of a space telescope to suppress starlight and enable high-contrast direct imaging of exoplanets. Laboratory verification of occulter designs is necessary to validate the optical models used to design and predict occulter performance. At Princeton, we are designing and building a testbed that allows verification of scaled occulter designs whose suppressed shadow is mathematically identical to that of space occulters. Here, we present a sample design that operates at a flight Fresnel number and is thus representative of a realistic space mission. We present calculations of experimental limits arising from the finite size and propagation distance available in the testbed, limitations due to manufacturing feature size, and a non-ideal input beam. We demonstrate how the testbed is designed to be feature-size limited and provide an estimate of the expected performance.

  17. Workshop on Structural Dynamics and Control Interaction of Flexible Structures

    NASA Technical Reports Server (NTRS)

    Davis, L. P.; Wilson, J. F.; Jewell, R. E.

    1987-01-01

    The Hubble Space Telescope features the most exacting line-of-sight jitter requirement thus far imposed on a spacecraft pointing system. Consideration of the fine pointing requirements prompted an attempt to isolate the telescope from the low-level vibration disturbances generated by the attitude control system reaction wheels. The primary goal was to provide isolation from the axial component of wheel disturbance without compromising the control system bandwidth. A passive isolation system employing metal springs in parallel with viscous fluid dampers was designed, fabricated, and space qualified. Stiffness and damping characteristics are deterministic, controlled independently, and were demonstrated to remain constant over at least five orders of magnitude of input disturbance. The damping remained purely viscous even at the data collection threshold of 0.16 x 10^-6 in of input displacement, a level much lower than the anticipated Hubble Space Telescope disturbance amplitude. Vibration attenuation goals were met, and ground tests of the vehicle demonstrated that the isolators are transparent to the attitude control system.

  18. Optical implementation of a feature-based neural network with application to automatic target recognition

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1993-01-01

    An optical neural network based on the neocognitron paradigm is introduced. A novel aspect of the architecture design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and by updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intraclass fault tolerance and interclass discrimination is achieved. A detailed system description is provided. An experimental demonstration of a two-layer neural network for space-object discrimination is also presented.

  19. Automatic target recognition using a feature-based optical neural network

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1992-01-01

    An optical neural network based upon the Neocognitron paradigm (K. Fukushima et al. 1983) is introduced. A novel aspect of the architectural design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intra-class fault tolerance and inter-class discrimination is achieved. A detailed system description is provided. Experimental demonstration of a two-layer neural network for space-object discrimination is also presented.

  20. Arrows as anchors: An analysis of the material features of electric field vector arrows

    NASA Astrophysics Data System (ADS)

    Gire, Elizabeth; Price, Edward

    2014-12-01

    Representations in physics possess both physical and conceptual aspects that are fundamentally intertwined and can interact to support or hinder sense making and computation. We use distributed cognition and the theory of conceptual blending with material anchors to interpret the roles of conceptual and material features of representations in students' use of representations for computation. We focus on the vector-arrows representation of electric fields and describe this representation as a conceptual blend of electric field concepts, physical space, and the material features of the representation (i.e., the physical writing and the surface upon which it is drawn). In this representation, spatial extent (e.g., distance on paper) is used to represent both distances in coordinate space and magnitudes of electric field vectors. In conceptual blending theory, this conflation is described as a clash between the input spaces in the blend. We explore the benefits and drawbacks of this clash, as well as other features of this representation. This analysis is illustrated with examples from clinical problem-solving interviews with upper-division physics majors. We see that while these intermediate physics students make a variety of errors using this representation, they also use the geometric features of the representation to add electric field contributions and to organize the problem situation productively.

  1. Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation

    NASA Astrophysics Data System (ADS)

    Yu, Jialin; Sun, Jifeng; Luo, Shasha; Duan, Bichao

    2017-09-01

    Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching pose candidates with image descriptors. Existing methods usually assume that the mapping from feature space to pose space is linear, but in fact it is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image, based on multiview feature embedding (MFE) and locality-sensitive autoencoders (LSAEs). First, we describe a manifold-regularized sparse low-rank approximation for MFE, and the input image is characterized by a fused feature descriptor. Then, both the fused feature and its corresponding 3D pose are separately encoded by LSAEs. A two-layer back-propagation neural network is trained by parameter fine-tuning and then used to map the encoded 2D features to encoded 3D poses. Our LSAE ensures good preservation of the local topology of data points. Experimental results demonstrate the effectiveness of the proposed method.

  2. DISCOS- Current Status and Future Developments

    NASA Astrophysics Data System (ADS)

    Flohrer, T.; Lemmens, S.; Bastida Virgili, B.; Krag, H.; Klinkrad, H.; Parrilla, E.; Sanchez, N.; Oliveira, J.; Pina, F.

    2013-08-01

    We present ESA's Database and Information System Characterizing Objects in Space (DISCOS). DISCOS not only plays an essential role in the collision avoidance and re-entry prediction services provided by ESA's Space Debris Office; it also provides input to numerous and very differently scoped engineering activities within ESA and throughout industry. We introduce the central functionalities of DISCOS, present the available reporting capabilities, and describe selected data modelling features. Finally, we revisit the developments of recent years and preview the ongoing replacement of the DISCOS web front-end.

  3. A Feedback Model of Attention Explains the Diverse Effects of Attention on Neural Firing Rates and Receptive Field Structure.

    PubMed

    Miconi, Thomas; VanRullen, Rufin

    2016-02-01

    Visual attention has many effects on neural responses, producing complex changes in firing rates, as well as modifying the structure and size of receptive fields, in both topological and feature space. Several existing models of attention suggest that these effects arise from selective modulation of neural inputs. However, anatomical and physiological observations suggest that attentional modulation targets higher levels of the visual system (such as V4 or MT) rather than input areas (such as V1). Here we propose a simple mechanism that explains how a top-down attentional modulation, falling on higher visual areas, can produce the observed effects of attention on neural responses. Our model requires only the existence of modulatory feedback connections between areas and short-range lateral inhibition within each area. Feedback connections redistribute the top-down modulation to lower areas, which in turn alters the inputs of other higher-area cells, including those that did not receive the initial modulation. This produces firing rate modulations and receptive field shifts. Simultaneously, short-range lateral inhibition between neighboring cells produces competitive effects that are automatically scaled to receptive field size in any given area. Our model reproduces the observed attentional effects on response rates (response gain, input gain, biased competition automatically scaled to receptive field size) and receptive field structure (shifts and resizing of receptive fields both spatially and in complex feature space), without modifying model parameters. Our model also makes the novel prediction that attentional effects on response curves should shift from response gain to contrast gain as the spatial focus of attention drifts away from the studied cell.

  4. The Cutplane - A tool for interactive solid modeling

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Kessler, William; Leifer, Larry

    1988-01-01

    A geometric modeling system which incorporates a new concept for intuitively and unambiguously specifying and manipulating points or features in three dimensional space is presented. The central concept, the Cutplane, consists of a plane that moves through space under control of a mouse or similar input device. The intersection of the plane and any object is highlighted, and only this highlighted section can be selected for manipulation. Selection is accomplished with a crosshair that is constrained to remain within the plane, so that the relationship between the crosshair and the feature of interest is immediately evident. Although the idea of a section view is not new, previously it has been used as a way to reveal hidden structure, not as a means of manipulating objects or indicating spatial position, as is proposed here.

  5. Kernel-Based Relevance Analysis with Enhanced Interpretability for Detection of Brain Activity Patterns

    PubMed Central

    Alvarez-Meza, Andres M.; Orozco-Gutierrez, Alvaro; Castellanos-Dominguez, German

    2017-01-01

    We introduce Enhanced Kernel-based Relevance Analysis (EKRA), which aims to support the automatic identification of brain activity patterns using electroencephalographic recordings. EKRA is a data-driven strategy that incorporates two kernel functions to take advantage of the available joint information associating neural responses with a given stimulus condition. In this regard, a Centered Kernel Alignment functional is adjusted to learn the linear projection that best discriminates the input feature set, optimizing the required free parameters automatically. Our approach is carried out in two scenarios: (i) feature selection, by computing a relevance vector from extracted neural features to facilitate the physiological interpretation of a given brain activity task, and (ii) enhanced feature selection, which performs an additional transformation of relevant features to improve the overall identification accuracy. Accordingly, we provide an alternative feature relevance analysis strategy that improves system performance while favoring data interpretability. For validation purposes, EKRA is tested on two well-known brain activity tasks: motor imagery discrimination and epileptic seizure detection. The obtained results show that the EKRA approach estimates a relevant representation space from the provided supervised information, emphasizing the salient input features. As a result, our proposal outperforms state-of-the-art methods in brain activity discrimination accuracy, with the benefit of enhanced physiological interpretation of the task at hand. PMID:29056897
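    The Centered Kernel Alignment functional at the core of EKRA can be sketched as the cosine similarity of centered Gram matrices (a generic CKA computation; the learned linear projection and automatic parameter optimization are not shown).

```python
import numpy as np

def centered_kernel_alignment(K, L):
    """CKA between two kernel matrices: cosine similarity of the
    centered Gram matrices under the Frobenius inner product."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))                 # e.g., EEG-derived features
y = rng.integers(0, 2, size=50)              # stimulus labels
K = X @ X.T                                  # feature kernel
L = np.equal.outer(y, y).astype(float)       # label (target) kernel
print(centered_kernel_alignment(K, L))
```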

  6. Three-dimensional object recognition using similar triangles and decision trees

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
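    A sketch of a similar-triangles feature vector in the spirit of TRIDEC: sorted side-length ratios are invariant to translation, scale, and rotation in the plane, so a histogram over all point triples yields an invariant descriptor. Binning and sizes are illustrative, and the decision-tree stage is omitted.

```python
import numpy as np
from itertools import combinations

def triangle_features(points, bins=10):
    """Histogram of shape descriptors over all triangles in a point set.
    Sorted side-length ratios are invariant to translation, scale, rotation."""
    hist = np.zeros((bins, bins))
    for p, q, r in combinations(points, 3):
        sides = sorted([np.linalg.norm(p - q), np.linalg.norm(q - r),
                        np.linalg.norm(r - p)])
        if sides[2] == 0:
            continue
        a, b = sides[0] / sides[2], sides[1] / sides[2]   # both in (0, 1]
        hist[min(int(a * bins), bins - 1), min(int(b * bins), bins - 1)] += 1
    return hist / hist.sum()

pts = np.random.rand(12, 2)                   # coarse-coded "on" pixels
f1 = triangle_features(pts)
f2 = triangle_features(2.5 * pts + 7.0)       # scaled + translated copy
print(np.allclose(f1, f2))                    # same triangle-shape histogram
```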

  7. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323

  8. A programmable power processor for high power space applications

    NASA Technical Reports Server (NTRS)

    Lanier, J. R., Jr.; Graves, J. R.; Kapustka, R. E.; Bush, J. R., Jr.

    1982-01-01

    A Programmable Power Processor (P3) has been developed for application in future large space power systems. The P3 is capable of operation over a wide range of input voltage (26 to 375 Vdc) and output voltage (24 to 180 Vdc). The peak output power capability is 18 kW (180 V at 100 A). The output characteristics of the P3 can be programmed to any voltage and/or current level within the limits of the processor and may be controlled as a function of internal or external parameters. Seven breadboard P3s and one 'flight-type' engineering model P3 have been built and tested both individually and in electrical power systems. The programmable feature allows the P3 to be used in a variety of applications by changing the output characteristics. Test results, including efficiency at various input/output combinations, transient response, and output impedance, are presented.

  9. Direct adaptive control of manipulators in Cartesian space

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    A new adaptive-control scheme for direct control of manipulator end effector to achieve trajectory tracking in Cartesian space is developed in this article. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of adaptive feedforward control and the inclusion of auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for on-line implementation with high sampling rates. The control scheme is applied to a two-link manipulator for illustration.
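    A classical first-order model-reference adaptive control loop illustrates the flavor of adaptation laws that require no plant parameter estimation (a textbook Lyapunov-rule example, not the article's Cartesian-space manipulator controller; plant and gains are hypothetical).

```python
import numpy as np

dt, gamma = 0.001, 5.0
a, b = 1.0, 3.0              # unknown plant: dy/dt = a*y + b*u
am, bm = -4.0, 4.0           # stable reference model: dym/dt = am*ym + bm*r

y = ym = 0.0
th1 = th2 = 0.0              # adaptive feedforward and feedback gains
for k in range(20000):
    r = 1.0 if (k * dt) % 4 < 2 else -1.0    # square-wave reference command
    u = th1 * r + th2 * y                    # feedforward + feedback control
    e = y - ym                               # tracking error
    th1 += dt * (-gamma * e * r)             # adaptation laws: driven by the
    th2 += dt * (-gamma * e * y)             # error, no plant identification
    y += dt * (a * y + b * u)                # Euler integration of the plant
    ym += dt * (am * ym + bm * r)            # and of the reference model
print(th1, th2, e)           # th1 -> bm/b, th2 -> (am - a)/b
```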

  10. The space station: Human factors and productivity

    NASA Technical Reports Server (NTRS)

    Gillan, D. J.; Burns, M. J.; Nicodemus, C. L.; Smith, R. L.

    1986-01-01

    Human factors researchers and engineers are making inputs into the early stages of the design of the Space Station to improve both the quality of life and work on-orbit. Effective integration of the human factors information related to various Intravehicular Activity (IVA), Extravehicular Activity (EVA), and telerobotics systems during the Space Station design will result in increased productivity, increased flexibility of the Space Station's systems, lower cost of operations, improved reliability, and increased safety for the crew onboard the Space Station. The major features of productivity examined include the cognitive and physical effort involved in work, the accuracy of worker output and the ability to maintain performance at a high level of accuracy, the speed and temporal efficiency with which a worker performs, crewmember satisfaction with the work environment, and the relation between performance and cost.

  11. Rock images classification by using deep convolution neural network

    NASA Astrophysics Data System (ADS)

    Cheng, Guojian; Guo, Wenhui

    2017-08-01

    Granularity analysis is one of the most essential issues in rock thin-section identification under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network-based method is proposed for granularity analysis from thin-section images, which extracts features from image samples and builds a classifier to recognize the granularity of input image samples. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr, and RGB colour spaces, respectively. On the test dataset, the classification accuracy in the RGB colour space is 98.5%, and the results in the HSV and YCbCr colour spaces are also credible. The results show that the convolutional neural network can classify the rock images with high reliability.
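    A minimal PyTorch sketch of a small CNN of the general kind described, for three hypothetical granularity classes on 64 x 64 patches; the architecture and sizes are illustrative, not the paper's network.

```python
import torch
import torch.nn as nn

# Hypothetical 3-class granularity classifier for 64x64 patches
# (3 channels, whether interpreted as RGB, HSV, or YCbCr).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 3),
)

x = torch.randn(8, 3, 64, 64)            # a batch of thin-section image patches
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 3, (8,)))
loss.backward()                          # one backpropagation step
```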

  12. Biotelemetry and computer analysis of sleep processes on earth and in space.

    NASA Technical Reports Server (NTRS)

    Adey, W. R.

    1972-01-01

    Developments in biomedical engineering now permit study of states of sleep, wakefulness, and focused attention in man exposed to rigorous environments, including aerospace flight. These new sensing devices, data acquisition systems, and computational methods have also been extensively applied to clinical problems of disordered sleep. A 'library' of EEG data has been prepared for sleep in normal man, and characterized for its group features by computational analysis. Sleep in an astronaut in space flight has been examined for the first and second 'nights' of space flight. Normal 90-min cycles were detected during the second night. Sleep patterns in quadriplegic patients deprived of all sensory inputs below the neck have indicated major deviations.

  13. Dynamics of feature categorization.

    PubMed

    Martí, Daniel; Rinzel, John

    2013-01-01

    In visual and auditory scenes, we are able to identify shared features among sensory objects and group them according to their similarity. This grouping is preattentive and fast and is thought of as an elementary form of categorization by which objects sharing similar features are clustered in some abstract perceptual space. It is unclear what neuronal mechanisms underlie this fast categorization. Here we propose a neuromechanistic model of fast feature categorization based on the framework of continuous attractor networks. The mechanism for category formation does not rely on learning and is based on biologically plausible assumptions, for example, the existence of populations of neurons tuned to feature values, feature-specific interactions, and subthreshold-evoked responses upon the presentation of single objects. When the network is presented with a sequence of stimuli characterized by some feature, the network sums the evoked responses and provides a running estimate of the distribution of features in the input stream. If the distribution of features is structured into different components or peaks (i.e., is multimodal), recurrent excitation amplifies the response of activated neurons, and categories are singled out as emerging localized patterns of elevated neuronal activity (bumps), centered at the centroid of each cluster. The emergence of bump states through sequential, subthreshold activation and the dependence on input statistics is a novel application of attractor networks. We show that the extraction and representation of multiple categories are facilitated by the rich attractor structure of the network, which can sustain multiple stable activity patterns for a robust range of connectivity parameters compatible with cortical physiology.

  14. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    PubMed

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. The results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
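    The LMS adaptive linear combiner analyzed in the paper follows the classic update w <- w + mu * e * x, sketched here in ideal (mismatch-free) floating point.

```python
import numpy as np

def lms_combiner(X, d, mu=0.01, epochs=20):
    """Least-mean-square adaptive linear combiner: w <- w + mu * e * x."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x          # instantaneous error
            w += mu * e * x             # stochastic-gradient weight update
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
w_true = np.array([0.5, -1.0, 2.0, 0.3])
d = X @ w_true + 0.01 * rng.normal(size=500)
print(lms_combiner(X, d))               # converges near w_true
```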

  15. Inference of segmented color and texture description by tensor voting.

    PubMed

    Jia, Jiaya; Tang, Chi-Keung

    2004-06-01

    A robust synthesis method is proposed to automatically infer missing color and texture information from a damaged 2D image by (N)D tensor voting (N > 3). The same approach is generalized to range and 3D data in the presence of occlusion, missing data and noise. Our method translates texture information into an adaptive (N)D tensor, followed by a voting process that infers noniteratively the optimal color values in the (N)D texture space. A two-step method is proposed. First, we perform segmentation based on insufficient geometry, color, and texture information in the input, and extrapolate partitioning boundaries by either 2D or 3D tensor voting to generate a complete segmentation for the input. Missing colors are synthesized using (N)D tensor voting in each segment. Different feature scales in the input are automatically adapted by our tensor scale analysis. Results on a variety of difficult inputs demonstrate the effectiveness of our tensor voting approach.

  16. Advanced torque converters for robotics and space applications

    NASA Technical Reports Server (NTRS)

    1985-01-01

    This report describes the results of the evaluation of a novel torque converter concept. Features of the concept include: (1) automatic and rapid adjustment of the effective gear ratio in response to changes in external torque; (2) maintenance of output torque at zero output velocity without loading the input power source; and (3) isolation of the input power source from the load. Two working models of the concept were fabricated and tested, and a theoretical analysis was performed to determine the limits of performance. It was found that the devices are apparently suited to certain types of tool driver applications, such as screwdrivers, nut drivers, and valve actuators. However, quantitative information was insufficient to draw a final conclusion as to robotic applications.

  17. Improvements in sparse matrix operations of NASTRAN

    NASA Technical Reports Server (NTRS)

    Harano, S.

    1980-01-01

    A "nontransmit" packing routine was added to NASTRAN to allow matrix data to be refered to directly from the input/output buffer. Use of the packing routine permits various routines for matrix handling to perform a direct reference to the input/output buffer if data addresses have once been received. The packing routine offers a buffer by buffer backspace feature for efficient backspacing in sequential access. Unlike a conventional backspacing that needs twice back record for a single read of one record (one column), this feature omits overlapping of READ operation and back record. It eliminates the necessity of writing, in decomposition of a symmetric matrix, of a portion of the matrix to its upper triangular matrix from the last to the first columns of the symmetric matrix, thus saving time for generating the upper triangular matrix. Only a lower triangular matrix must be written onto the secondary storage device, bringing 10 to 30% reduction in use of the disk space of the storage device.

  18. Modified DCTNet for audio signals classification

    NASA Astrophysics Data System (ADS)

    Xian, Yin; Pu, Yunchen; Gan, Zhe; Lu, Liang; Thompson, Andrew

    2016-10-01

    In this paper, we investigate DCTNet for audio signal classification. Its output feature is related to Cohen's class of time-frequency distributions. We introduce the use of an adaptive DCTNet (A-DCTNet) for audio signal feature extraction. The A-DCTNet applies the idea of the constant-Q transform, with the center frequencies of its filter banks geometrically spaced. The A-DCTNet adapts to different acoustic scales and can better capture low-frequency acoustic information, to which human audio perception is sensitive, than features such as Mel-frequency spectral coefficients (MFSC). We use features extracted by the A-DCTNet as input for classifiers. Experimental results show that the A-DCTNet and Recurrent Neural Networks (RNN) achieve state-of-the-art performance in bird-song classification and improve artist identification accuracy on music data. They demonstrate the A-DCTNet's applicability to signal processing problems.
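    The geometric spacing of constant-Q center frequencies reduces to f_k = f_min * 2^(k / bins_per_octave), as the following snippet shows (parameter values are illustrative).

```python
import numpy as np

def constant_q_centers(f_min=32.7, f_max=8000.0, bins_per_octave=12):
    """Geometrically spaced center frequencies, constant-Q style:
    f_k = f_min * 2**(k / bins_per_octave)."""
    n = int(np.floor(bins_per_octave * np.log2(f_max / f_min))) + 1
    return f_min * 2.0 ** (np.arange(n) / bins_per_octave)

freqs = constant_q_centers()
print(freqs[:5], '...', freqs[-1])
print(np.allclose(freqs[1:] / freqs[:-1], 2 ** (1 / 12)))  # constant frequency ratio
```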

  19. Dialect Distance Assessment Based on 2-Dimensional Pitch Slope Features and Kullback Leibler Divergence

    DTIC Science & Technology

    2009-04-08

    The sensitivity of the measure to changes in the input data is quantified. A perceptual evaluation also shows that the presented objective approach to dialect distance … of Arabic dialects are discussed. The repeatability of the presented measure and its correlation with human perception are also shown. Conclusions are … in the strict sense of metric spaces. Human perception tests indicate that prosodic cues, including pitch movements …

  20. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
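    A minimal Monte Carlo propagation sketch with a hypothetical stand-in for the weight analysis, alongside a first-order method-of-moments estimate for comparison; the function, design variables, and uncertainty levels are invented for illustration.

```python
import numpy as np

def aircraft_weight(x1, x2):
    """Hypothetical stand-in for the conceptual-design weight analysis."""
    return 5000 + 120 * x1 + 80 * x2 + 15 * x1 * x2

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.normal(10.0, 0.2, n)        # design variable 1 with input uncertainty
x2 = rng.normal(25.0, 0.5, n)        # design variable 2 with input uncertainty

w = aircraft_weight(x1, x2)          # propagate samples through the analysis
print(w.mean(), w.std())             # Monte Carlo moments of the output weight

# First-order method of moments for comparison (gradient at the mean point).
g = np.array([120 + 15 * 25.0, 80 + 15 * 10.0])
var_mom = (g * np.array([0.2, 0.5]))**2
print(aircraft_weight(10.0, 25.0), np.sqrt(var_mom.sum()))
```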

  1. Sharpening of Hierarchical Visual Feature Representations of Blurred Images.

    PubMed

    Abdelhack, Mohamed; Kamitani, Yukiyasu

    2018-01-01

    The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.

  2. Symbolic Computation Using Cellular Automata-Based Hyperdimensional Computing.

    PubMed

    Yilmaz, Ozgur

    2015-12-01

    This letter introduces a novel framework of reservoir computing that is capable of both connectionist machine intelligence and symbolic computation. A cellular automaton is used as the reservoir of dynamical systems. Input is randomly projected onto the initial conditions of automaton cells, and nonlinear computation is performed on the input via application of a rule in the automaton for a period of time. The evolution of the automaton creates a space-time volume of the automaton state space, and it is used as the reservoir. The proposed framework is shown to be capable of long-term memory, and it requires orders of magnitude less computation compared to echo state networks. As the focus of the letter, we suggest that binary reservoir feature vectors can be combined using Boolean operations as in hyperdimensional computing, paving a direct way for concept building and symbolic processing. To demonstrate the capability of the proposed system, we make analogies directly on image data by asking, What is the automobile of air?
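    A sketch of the reservoir construction and Boolean binding described above, using an elementary cellular automaton; the rule number, width, and projection are illustrative choices, not the letter's exact configuration.

```python
import numpy as np

def ca_reservoir(bits, rule=110, steps=32, seed=0):
    """Project input bits onto random cell positions, evolve an elementary CA,
    and use the flattened space-time volume as the reservoir feature vector."""
    rng = np.random.default_rng(seed)
    width = 256
    state = np.zeros(width, dtype=np.uint8)
    pos = rng.choice(width, size=len(bits), replace=False)
    state[pos] = bits                                  # random projection of input
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    volume = [state]
    for _ in range(steps):
        idx = (np.roll(state, 1) << 2) | (state << 1) | np.roll(state, -1)
        state = table[idx]                             # apply the CA rule
        volume.append(state)
    return np.concatenate(volume)                      # binary feature vector

a = ca_reservoir(np.array([1, 0, 1, 1], dtype=np.uint8))
b = ca_reservoir(np.array([0, 1, 1, 0], dtype=np.uint8))
bound = np.bitwise_xor(a, b)     # hyperdimensional-style binding via XOR
print(bound.shape, bound.dtype)
```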

  3. Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raj, Sunny; Jha, Sumit Kumar; Pullum, Laura L.

    Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA, which uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.

  4. Segmentation by fusion of histogram-based k-means clusters in different color spaces.

    PubMed

    Mignotte, Max

    2008-05-01

    This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure that aims at combining several segmentation maps, associated with simpler partition models, to obtain a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy combines these segmentation maps with a final clustering procedure using as input features the local histograms of the class labels previously estimated and associated with each site, for all the initial partitions. This fusion framework remains simple to implement, fast, and general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and it has been successfully applied to the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared with the state-of-the-art segmentation methods recently proposed in the literature.
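    A simplified sketch of the fusion procedure, assuming a random stand-in image and only two color spaces; local label histograms are computed with a uniform filter over one-hot label maps. The window size and cluster count are illustrative.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

K, win = 6, 7
img = np.random.rand(64, 64, 3)                # stand-in RGB image in [0, 1]

label_maps = []
for space in (img, rgb_to_hsv(img)):           # the same K-means run in two color spaces
    km = KMeans(n_clusters=K, n_init=4, random_state=0)
    label_maps.append(km.fit_predict(space.reshape(-1, 3)).reshape(64, 64))

# Local label histograms: per-pixel fraction of each label in a win x win window.
feats = [uniform_filter((lm == k).astype(float), size=win)
         for lm in label_maps for k in range(K)]
feats = np.stack(feats, axis=-1).reshape(-1, 2 * K)

fused = KMeans(n_clusters=K, n_init=4, random_state=0)
final = fused.fit_predict(feats).reshape(64, 64)   # fused segmentation map
```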

  5. Deep neural networks for texture classification-A theoretical analysis.

    PubMed

    Basu, Saikat; Mukhopadhyay, Supratik; Karki, Manohar; DiBiano, Robert; Ganguly, Sangram; Nemani, Ramakrishna; Gayaka, Shreekant

    2018-01-01

    We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Network as well as Dropout and Dropconnect networks and the relation between excess error rate of Dropout and Dropconnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional as compared to handwritten digits or other object recognition datasets and hence more difficult to be shattered by neural networks. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as dimensionality of the underlying vector space tends to infinity. Copyright © 2017 Elsevier Ltd. All rights reserved.
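    The vanishing relative contrast can be demonstrated numerically in a few lines (a generic illustration of distance concentration, not the paper's derivation).

```python
import numpy as np

rng = np.random.default_rng(0)
for d in [2, 10, 100, 1000, 10000]:
    X = rng.normal(size=(500, d))          # sample points in a d-dimensional space
    dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
    contrast = (dist.max() - dist.min()) / dist.min()
    print(f'd={d:6d}  relative contrast={contrast:.3f}')
# The relative contrast shrinks toward 0 as the dimensionality grows.
```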

  6. Radiation environment study of near space in China area

    NASA Astrophysics Data System (ADS)

    Fan, Dongdong; Chen, Xingfeng; Li, Zhengqiang; Mei, Xiaodong

    2015-10-01

    Near-space activity has become a research hotspot for the world's major aerospace nations. The study of solar radiation is a prerequisite for near-space activity, but the lack of observations in the near-space layer remains a barrier. Based on reanalysis data, key input parameters are determined, and simulation experiments are carried out separately to model the downward solar radiation and ultraviolet radiation transfer processes of near space over China. Results show that the atmospheric influence on the solar and ultraviolet radiation transfer processes has regional characteristics. As key factors such as ozone are affected by atmospheric action in their density as well as their horizontal and vertical distributions, meteorological data for the stratosphere must be considered, and near space over China is divided according to its activity features. The simulated results show that solar and ultraviolet radiation vary with time, latitude, and ozone density and exhibit complicated variation characteristics.

  7. A Deep Convolutional Coupling Network for Change Detection Based on Heterogeneous Optical and Radar Images.

    PubMed

    Liu, Jia; Gong, Maoguo; Qin, Kai; Zhang, Puzhao

    2018-03-01

    We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images. Due to the complementary properties of optical and radar sensors, there is an increasing interest in change detection based on heterogeneous images. The proposed network is symmetric, with each side consisting of one convolutional layer and several coupling layers. The two input images, connected to the two sides of the network, respectively, are transformed into a feature space where their feature representations become more consistent. In this feature space, the difference map is calculated, which then leads to the ultimate detection map by applying a thresholding algorithm. The network parameters are learned by optimizing a coupling function. The learning process is unsupervised, which is different from most existing change detection methods based on heterogeneous images. Experimental results on both homogeneous and heterogeneous images demonstrate the promising performance of the proposed network compared with several existing approaches.
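
    A schematic PyTorch sketch of such a symmetric two-branch network (layer counts, channel sizes, and the thresholding rule are assumptions, not the authors' exact architecture; training via the coupling function is omitted):

      # Two side networks map heterogeneous inputs into a shared feature
      # space; their absolute difference gives a difference map.
      import torch
      import torch.nn as nn

      class Side(nn.Module):
          def __init__(self, in_ch, feat=16, n_coupling=3):
              super().__init__()
              layers = [nn.Conv2d(in_ch, feat, 3, padding=1), nn.Sigmoid()]
              for _ in range(n_coupling):       # coupling layers as 1x1 convs
                  layers += [nn.Conv2d(feat, feat, 1), nn.Sigmoid()]
              self.net = nn.Sequential(*layers)

          def forward(self, x):
              return self.net(x)

      opt_side, sar_side = Side(3), Side(1)
      optical = torch.rand(1, 3, 64, 64)        # optical image, date 1
      sar = torch.rand(1, 1, 64, 64)            # SAR image, date 2
      diff = (opt_side(optical) - sar_side(sar)).abs().mean(dim=1)
      change_map = diff > diff.mean()           # crude threshold stand-in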

  8. Implementation of input command shaping to reduce vibration in flexible space structures

    NASA Technical Reports Server (NTRS)

    Chang, Kenneth W.; Seering, Warren P.; Rappole, B. Whitney

    1992-01-01

    Viewgraphs on the implementation of input command shaping to reduce vibration in flexible space structures are presented. The goals of the research are to explore the theory of input command shaping to find an efficient algorithm for flexible space structures; to characterize the Middeck Active Control Experiment (MACE) test article; and to implement the input shaper on the MACE structure and interpret the results. Background on input shaping, simulation results, experimental results, and future work are included.
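
    Input shaping convolves the reference command with a short sequence of impulses tuned to a structural mode's frequency and damping, so that the impulses' residual vibrations cancel. A minimal sketch of the standard two-impulse zero-vibration (ZV) shaper follows (the mode parameters are illustrative, not MACE values):

      # Two-impulse ZV shaper: amplitudes 1/(1+K) and K/(1+K), spaced by
      # half the damped vibration period.
      import numpy as np

      def zv_shaper(wn, zeta, dt):
          K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
          td = np.pi / (wn * np.sqrt(1 - zeta**2))   # half damped period
          seq = np.zeros(int(round(td / dt)) + 1)
          seq[0], seq[-1] = 1 / (1 + K), K / (1 + K)
          return seq

      dt = 0.001
      shaper = zv_shaper(wn=2 * np.pi * 1.5, zeta=0.02, dt=dt)  # 1.5 Hz mode
      command = np.ones(2000)                        # raw step command
      shaped = np.convolve(command, shaper)[:2000]   # vibration-reducing input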

  9. Classification of posture maintenance data with fuzzy clustering algorithms

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.

    1992-01-01

    Sensory inputs from the visual, vestibular, and proprioceptive systems are integrated by the central nervous system to maintain postural equilibrium. Sustained exposure to microgravity causes neurosensory adaptation during spaceflight, which results in decreased postural stability until readaptation occurs upon return to the terrestrial environment. Data which simulate sensory inputs under various sensory organization test (SOT) conditions were collected in conjunction with Johnson Space Center postural control studies using a tilt-translation device (TTD). The University of West Florida applied the fuzzy c-means (FCM) clustering algorithms to these data with a view towards identifying various states and stages of subjects experiencing such changes. Feature analysis, time step analysis, pooling data, response of the subjects, and the algorithms used are discussed.

  10. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The probability of correct classification of various populations in the data was defined as the primary performance index. Since the multispectral data were multiclass in nature as well, a Bayes error estimation procedure that depended on a set of class statistics alone was required. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear, shift-invariant, multiple-port system in which the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial, and hence spectral, correlation matrices through the system, was developed.
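
    The N-dimensional error integral rarely has a closed form; a common workaround, consistent with using class statistics alone, is a Monte Carlo estimate. A sketch for Gaussian class statistics with equal priors (means, covariances, and sample counts are illustrative):

      # Monte Carlo estimate of the multiclass Bayes error: sample from
      # each class and count samples whose own-class density is not maximal.
      import numpy as np
      from scipy.stats import multivariate_normal as mvn

      rng = np.random.default_rng(1)
      means = [np.zeros(4), np.full(4, 1.0), np.full(4, -1.0)]
      covs = [np.eye(4)] * 3
      n, errors = 20000, 0
      for c, (m, S) in enumerate(zip(means, covs)):
          x = rng.multivariate_normal(m, S, n)
          dens = np.column_stack([mvn.pdf(x, mi, Si)
                                  for mi, Si in zip(means, covs)])
          errors += np.sum(dens.argmax(axis=1) != c)
      print("estimated Bayes error:", errors / (3 * n))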

  11. Phytoplankton global mapping from space with a support vector machine algorithm

    NASA Astrophysics Data System (ADS)

    de Boissieu, Florian; Menkes, Christophe; Dupouy, Cécile; Rodier, Martin; Bonnet, Sophie; Mangeas, Morgan; Frouin, Robert J.

    2014-11-01

    In recent years great progress has been made in global mapping of phytoplankton from space. Two main trends have emerged, the recognition of phytoplankton functional types (PFT) based on reflectance normalized to chlorophyll-a concentration, and the recognition of phytoplankton size class (PSC) based on the relationship between cell size and chlorophyll-a concentration. However, PFTs and PSCs are not decorrelated, and one approach can complement the other in a recognition task. In this paper, we explore the recognition of several dominant PFTs by combining reflectance anomalies, chlorophyll-a concentration and other environmental parameters, such as sea surface temperature and wind speed. Remote sensing pixels are labeled thanks to coincident in-situ pigment data from the GeP&CO, NOMAD and MAREDAT datasets, covering various oceanographic environments. The recognition is made with a supervised Support Vector Machine classifier trained on the labeled pixels. This algorithm enables a non-linear separation of the classes in the input space and is especially adapted to small training datasets such as those available here. Moreover, it provides a class probability estimate, allowing one to enhance the robustness of the classification results through the choice of a minimum probability threshold. A greedy feature selection associated with a 10-fold cross-validation procedure is applied to select the most discriminative input features and evaluate the classification performance. The best classifiers are finally applied to daily remote sensing datasets (SeaWiFS, MODISA) and the resulting dominant PFT maps are compared with other studies. Several conclusions are drawn: (1) the feature selection highlights the weight of the temperature, chlorophyll-a and wind speed variables in phytoplankton recognition; (2) the classifiers show good results and dominant PFT maps in agreement with knowledge of phytoplankton distribution; (3) classification on MODISA data seems to perform better than on SeaWiFS data; (4) the probability threshold correctly screens the areas of smallest confidence, such as the interclass regions.
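
    A sketch of the classification stage as described: an RBF-kernel SVM with class probability estimates and a minimum-probability screen (the data, feature names, and the 0.5 threshold are placeholders):

      # Train an SVM with probability outputs, then leave pixels below a
      # minimum class-probability threshold unclassified.
      import numpy as np
      from sklearn.svm import SVC

      X = np.random.rand(200, 3)        # e.g., reflectance anomaly, chl-a, SST
      y = np.random.randint(0, 4, 200)  # dominant PFT labels from pigments
      clf = SVC(kernel='rbf', probability=True).fit(X, y)

      proba = clf.predict_proba(X)
      labels = clf.classes_[proba.argmax(axis=1)]
      labels = np.where(proba.max(axis=1) >= 0.5, labels, -1)  # -1: screened out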

  12. Stream ecosystems change with urban development

    USGS Publications Warehouse

    Bell, Amanda H.; James, F. Coles; McMahon, Gerard

    2012-01-01

    The healthy condition of the physical living space in a natural stream—defined by unaltered hydrology (streamflow), high diversity of habitat features, and natural water chemistry—supports diverse biological communities with aquatic species that are sensitive to disturbances. In a highly degraded urban stream, the poor condition of the physical living space—streambank and tree root damage from altered hydrology, low diversity of habitat, and inputs of chemical contaminants—contributes to biological communities with low diversity and high tolerance to disturbance.

  13. Exploring nonlinear feature space dimension reduction and data representation in breast Cadx with Laplacian eigenmaps and t-SNE.

    PubMed

    Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high-performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW selected features, compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
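
    A sketch of the DR-then-classify idea: embed a high-dimensional feature space with t-SNE and feed the low-dimensional mapping to a classifier (synthetic stand-in data; the study mapped an 81D space to up to 4D, here 2D for brevity):

      # t-SNE embedding of lesion-like feature vectors, then a simple
      # classifier fitted on the mapped features.
      import numpy as np
      from sklearn.manifold import TSNE
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      X = np.random.rand(300, 81)               # stand-in 81D feature space
      y = np.random.randint(0, 2, 300)          # benign/malignant labels
      emb = TSNE(n_components=2, init='pca', random_state=0).fit_transform(X)
      print(LinearDiscriminantAnalysis().fit(emb, y).score(emb, y))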

  14. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
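
    A sketch of the Nyström step described above, with plain random subsampling standing in for the DQS quantizer and a ridge classifier standing in for the LS-SVM primal solution (all sizes illustrative):

      # Approximate RBF kernel features from a small subset, then fit a
      # linear model in the primal space.
      import numpy as np
      from sklearn.kernel_approximation import Nystroem
      from sklearn.linear_model import RidgeClassifier

      X = np.random.rand(10000, 20)
      y = (X.sum(axis=1) > 10).astype(int)
      sub = X[np.random.choice(len(X), 200, replace=False)]  # DQS stand-in

      ny = Nystroem(kernel='rbf', n_components=200).fit(sub)
      clf = RidgeClassifier().fit(ny.transform(X), y)
      print(clf.score(ny.transform(X), y))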

  15. Low-Rank Discriminant Embedding for Multiview Learning.

    PubMed

    Li, Jingjing; Wu, Yue; Zhao, Jidong; Lu, Ke

    2017-11-01

    This paper focuses on the specific problem of multiview learning where samples have the same feature set but different probability distributions, e.g., different viewpoints or different modalities. Since samples lying in different distributions cannot be compared directly, this paper aims to learn a latent subspace shared by multiple views, assuming that the input views are generated from this latent subspace. Previous approaches usually learn the common subspace by either maximizing the empirical likelihood or preserving the geometric structure. However, considering the complementarity between the two objectives, this paper proposes a novel approach, named low-rank discriminant embedding (LRDE), for multiview learning that takes full advantage of both sides. By further considering the duality between data points and features of a multiview scene, i.e., data points can be grouped based on their distribution on features, while features can be grouped based on their distribution on the data points, LRDE not only deploys low-rank constraints on both the sample level and the feature level to dig out the shared factors across different views, but also preserves geometric information in both the ambient sample space and the embedding feature space by designing a novel graph structure under the framework of graph embedding. Finally, LRDE jointly optimizes low-rank representation and graph embedding in a unified framework. Comprehensive experiments in both multiview manner and pairwise manner demonstrate that LRDE performs much better than previous approaches proposed in the recent literature.

  16. Diagnostic support for glaucoma using retinal images: a hybrid image analysis and data mining approach.

    PubMed

    Yu, Jin; Abidi, Syed Sibte Raza; Artes, Paul; McIntyre, Andy; Heywood, Malcolm

    2005-01-01

    The availability of modern imaging techniques such as Confocal Scanning Laser Tomography (CSLT) for capturing high-quality optic nerve images offers the potential for developing automatic and objective methods for diagnosing glaucoma. We present a hybrid approach that features the analysis of CSLT images using moment methods to derive abstract image-defining features. The features are then used to train classifiers for automatically distinguishing CSLT images of normal and glaucoma patients. As a first step, in this paper we present investigations of feature subset selection methods for reducing the relatively large input space produced by the moment methods. We use neural networks and support vector machines to determine a subset of moments that offer high classification accuracy. We demonstrate the efficacy of our methods to discriminate between healthy and glaucomatous optic disks based on shape information automatically derived from optic disk topography and reflectance images.

  17. Using input feature information to improve ultraviolet retrieval in neural networks

    NASA Astrophysics Data System (ADS)

    Sun, Zhibin; Chang, Ni-Bin; Gao, Wei; Chen, Maosi; Zempila, Melina

    2017-09-01

    In neural networks, the training/prediction accuracy and algorithmic efficiency can be improved significantly via accurate input feature extraction. In this study, spatial features of several important factors in retrieving surface ultraviolet (UV) radiation are extracted. An extreme learning machine (ELM) is used to retrieve the surface UV of 2014 in the continental United States, using the extracted features. The results show that more input weights can improve the learning capacity of neural networks.
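
    A minimal extreme learning machine for context: the input-to-hidden weights are random and fixed, and only the output weights are solved by least squares (all sizes and data here are illustrative):

      # ELM: random hidden layer + least-squares output weights.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.random((500, 6))                  # e.g., six UV-relevant factors
      y = X @ rng.random(6) + 0.1 * rng.standard_normal(500)

      W = rng.standard_normal((6, 100))         # fixed random input weights
      b = rng.standard_normal(100)
      H = np.tanh(X @ W + b)                    # hidden-layer activations
      beta = np.linalg.pinv(H) @ y              # output weights, least squares
      y_hat = H @ beta                          # retrieved UV estimate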

  18. New method for rekindling the nonlinear solitary waves in Maxwellian complex space plasma

    NASA Astrophysics Data System (ADS)

    Das, G. C.; Sarma, Ridip

    2018-04-01

    Our interest is to study nonlinear wave phenomena in complex plasma constituents with Maxwellian electrons and ions. The main reason for this consideration is to exhibit the effects of dust charge fluctuations on acoustic modes evaluated by the use of a new method. A special (G'/G) method has been developed to yield the coherent features of nonlinear waves through the derivation of a Korteweg-de Vries equation, and it successfully reveals the different natures of the solitons recognized in space plasmas. Evolutions are shown with the input of appropriate typical plasma parameters to support our theoretical observations in space plasmas. All conclusions are in good accordance with actual occurrences and could be of interest for further investigations in experiments and satellite observations in space. In this paper, we present not only a model that exhibits nonlinear solitary wave propagation but also a new mathematical method for its execution.
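
    For reference, the generic form of the Korteweg-de Vries equation the abstract refers to, with its one-soliton solution (the nonlinear and dispersive coefficients A and B are placeholders; in the paper they depend on the dust-charging parameters):

      % Generic KdV equation and its one-soliton solution
      \[
        \frac{\partial u}{\partial t}
          + A\, u \frac{\partial u}{\partial x}
          + B\, \frac{\partial^3 u}{\partial x^3} = 0,
        \qquad
        u(x,t) = \frac{3V}{A}\,
          \operatorname{sech}^2\!\Big(\sqrt{\tfrac{V}{4B}}\,(x - Vt)\Big),
      \]
      % where V is the constant soliton speed.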

  19. High performance flight computer developed for deep space applications

    NASA Technical Reports Server (NTRS)

    Bunker, Robert L.

    1993-01-01

    The development of an advanced space flight computer for real-time embedded deep space applications, which embodies the lessons learned on Galileo and modern computer technology, is described. The requirements are listed and the design implementation that meets those requirements is described. The development of SPACE-16 (Spaceborne Advanced Computing Engine, where 16 designates the databus width) was initiated to support the MM2 (Mariner Mark II) project. The computer is based on a radiation-hardened emulation of a modern 32-bit microprocessor and its family of support devices, including a high-performance floating point accelerator. Additional custom devices, which include a coprocessor to improve input/output capabilities, a memory interface chip, and a support chip that manages all fault-tolerance features, are described. Detailed supporting analyses and rationale which justify specific design and architectural decisions are provided. The six chip types were designed and fabricated. Testing and evaluation of a brassboard was initiated.

  20. Electrometer Amplifier With Overload Protection

    NASA Technical Reports Server (NTRS)

    Woeller, F. H.; Alexander, R.

    1986-01-01

    The circuit features low noise, low input offset, and high linearity. The input preamplifier includes input-overload protection and a nulling circuit to subtract the dc offset from the output. The prototype dc amplifier, designed for use with an ion detector, has features desirable in general laboratory and field instrumentation.

  1. Feature determination from powered wheelchair user joystick input characteristics for adapting driving assistance.

    PubMed

    Gillham, Michael; Pepper, Matthew; Kelly, Steve; Howells, Gareth

    2017-01-01

    Background: Many powered wheelchair users find their medical condition and their ability to drive the wheelchair will change over time. In order to maintain their independent mobility, the powered chair will require adjustment over time to suit the user's needs, thus regular input from healthcare professionals is required. These limited resources can result in the user having to wait weeks for appointments, resulting in the user losing independent mobility, consequently affecting their quality of life and that of their family and carers. In order to provide an adaptive assistive driving system, a range of features need to be identified which are suitable for initial system setup and can automatically provide data for re-calibration over the long term. Methods: A questionnaire was designed to collect information from powered wheelchair users with regard to their symptoms and how they changed over time. Another group of volunteer participants were asked to drive a test platform and complete a course which represented manoeuvring in a very confined space as quickly as possible. Two of those participants were also monitored over a longer period in their normal home daily environment. Features, thought to be suitable, were examined using pattern recognition classifiers to determine their suitability for identifying the changing user input over time. Results: The results are not designed to provide absolute insight into the individual user behaviour, as no ground truth of their ability has been determined; they do nevertheless demonstrate the utility of the measured features to provide evidence of the users' changing ability over time whilst driving a powered wheelchair. Conclusions: Determining the driving features and adjustable elements provides the initial step towards developing an adaptable assistive technology for the user when the ground truths of the individual and their machine have been learned by a smart pattern recognition system.

  2. Feature determination from powered wheelchair user joystick input characteristics for adapting driving assistance

    PubMed Central

    Gillham, Michael; Pepper, Matthew; Kelly, Steve; Howells, Gareth

    2018-01-01

    Background: Many powered wheelchair users find their medical condition and their ability to drive the wheelchair will change over time. In order to maintain their independent mobility, the powered chair will require adjustment over time to suit the user's needs, thus regular input from healthcare professionals is required. These limited resources can result in the user having to wait weeks for appointments, resulting in the user losing independent mobility, consequently affecting their quality of life and that of their family and carers. In order to provide an adaptive assistive driving system, a range of features need to be identified which are suitable for initial system setup and can automatically provide data for re-calibration over the long term. Methods: A questionnaire was designed to collect information from powered wheelchair users with regard to their symptoms and how they changed over time. Another group of volunteer participants were asked to drive a test platform and complete a course which represented manoeuvring in a very confined space as quickly as possible. Two of those participants were also monitored over a longer period in their normal home daily environment. Features, thought to be suitable, were examined using pattern recognition classifiers to determine their suitability for identifying the changing user input over time. Results: The results are not designed to provide absolute insight into the individual user behaviour, as no ground truth of their ability has been determined, they do nevertheless demonstrate the utility of the measured features to provide evidence of the users’ changing ability over time whilst driving a powered wheelchair. Conclusions: Determining the driving features and adjustable elements provides the initial step towards developing an adaptable assistive technology for the user when the ground truths of the individual and their machine have been learned by a smart pattern recognition system. PMID:29552641

  3. Digital Beamforming Synthetic Aperture Radar Developments at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Rincon, Rafael; Fatoyinbo, Temilola; Osmanoglu, Batuhan; Lee, Seung Kuk; Du Toit, Cornelis F.; Perrine, Martin; Ranson, K. Jon; Sun, Guoqing; Deshpande, Manohar; Beck, Jaclyn

    2016-01-01

    Advanced Digital Beamforming (DBF) Synthetic Aperture Radar (SAR) technology is an area of research and development pursued at the NASA Goddard Space Flight Center (GSFC). Advanced SAR architectures enhance radar performance and open a new set of capabilities in radar remote sensing. DBSAR-2 and EcoSAR are two state-of-the-art radar systems recently developed and tested. These new instruments employ multiple-input multiple-output (MIMO) architectures characterized by multi-mode operation, software-defined waveform generation, digital beamforming, and configurable radar parameters. The instruments have been developed to support several disciplines in Earth and Planetary sciences. This paper describes the radars' advanced features and reports on the latest SAR processing and calibration efforts.

  4. Feature extraction with deep neural networks by a generalized discriminant analysis.

    PubMed

    Stuhlsatz, André; Lippel, Jens; Zielke, Thomas

    2012-04-01

    We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.

  5. An adaptive Cartesian control scheme for manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of an auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.

  6. Online dimensionality reduction using competitive learning and Radial Basis Function network.

    PubMed

    Tomenko, Vladimir

    2011-06-01

    The general purpose dimensionality reduction method should preserve data interrelations at all scales. Additional desired features include online projection of new data, processing nonlinearly embedded manifolds and large amounts of data. The proposed method, called RBF-NDR, combines these features. RBF-NDR is comprised of two modules. The first module learns manifolds by utilizing modified topology representing networks and geodesic distance in data space and approximates sampled or streaming data with a finite set of reference patterns, thus achieving scalability. Using input from the first module, the dimensionality reduction module constructs mappings between observation and target spaces. Introduction of specific loss function and synthesis of the training algorithm for Radial Basis Function network results in global preservation of data structures and online processing of new patterns. The RBF-NDR was applied for feature extraction and visualization and compared with Principal Component Analysis (PCA), neural network for Sammon's projection (SAMANN) and Isomap. With respect to feature extraction, the method outperformed PCA and yielded increased performance of the model describing wastewater treatment process. As for visualization, RBF-NDR produced superior results compared to PCA and SAMANN and matched Isomap. For the Topic Detection and Tracking corpus, the method successfully separated semantically different topics. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. A Full Dynamic Compound Inverse Method for output-only element-level system identification and input estimation from earthquake response signals

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2016-08-01

    This paper proposes a new output-only element-level system identification and input estimation technique, towards the simultaneous identification of modal parameters, input excitation time history and structural features at the element-level by adopting earthquake-induced structural response signals. The method, named Full Dynamic Compound Inverse Method (FDCIM), releases strong assumptions of earlier element-level techniques, by working with a two-stage iterative algorithm. Jointly, a Statistical Average technique, a modification process and a parameter projection strategy are adopted at each stage to achieve stronger convergence for the identified estimates. The proposed method works in a deterministic way and is completely developed in State-Space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, also with noise-corrupted cases. The achieved results provide a necessary condition to demonstrate the effectiveness of the proposed identification method.

  8. Learning receptive fields using predictive feedback.

    PubMed

    Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H

    2006-01-01

    Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
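
    The matching pursuit step the model builds on can be sketched in a few lines: greedily pick the dictionary atom best correlated with the residual and subtract its contribution (random dictionary here; the model's learned receptive fields would play that role):

      # Minimal matching pursuit over a unit-norm random dictionary.
      import numpy as np

      rng = np.random.default_rng(0)
      D = rng.standard_normal((64, 256))
      D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
      x = rng.standard_normal(64)               # input to encode

      residual, code = x.copy(), np.zeros(256)
      for _ in range(10):                       # 10 greedy steps
          scores = D.T @ residual
          k = np.abs(scores).argmax()           # best-matching atom
          code[k] += scores[k]
          residual -= scores[k] * D[:, k]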

  9. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most results reported previously perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates to variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.

  10. Automated placement of interfaces in conformational kinetics calculations using machine learning

    NASA Astrophysics Data System (ADS)

    Grazioli, Gianmarc; Butts, Carter T.; Andricioaei, Ioan

    2017-10-01

    Several recent implementations of algorithms for sampling reaction pathways employ a strategy for placing interfaces or milestones across the reaction coordinate manifold. Interfaces can be introduced such that the full feature space describing the dynamics of a macromolecule is divided into Voronoi (or other) cells, and the global kinetics of the molecular motions can be calculated from the set of fluxes through the interfaces between the cells. Although some methods of this type are exact for an arbitrary set of cells, in practice, the calculations will converge fastest when the interfaces are placed in regions where they can best capture transitions between configurations corresponding to local minima. The aim of this paper is to introduce a fully automated machine-learning algorithm for defining a set of cells for use in kinetic sampling methodologies based on subdividing the dynamical feature space; the algorithm requires no intuition about the system or input from the user and scales to high-dimensional systems.

  11. Automated placement of interfaces in conformational kinetics calculations using machine learning.

    PubMed

    Grazioli, Gianmarc; Butts, Carter T; Andricioaei, Ioan

    2017-10-21

    Several recent implementations of algorithms for sampling reaction pathways employ a strategy for placing interfaces or milestones across the reaction coordinate manifold. Interfaces can be introduced such that the full feature space describing the dynamics of a macromolecule is divided into Voronoi (or other) cells, and the global kinetics of the molecular motions can be calculated from the set of fluxes through the interfaces between the cells. Although some methods of this type are exact for an arbitrary set of cells, in practice, the calculations will converge fastest when the interfaces are placed in regions where they can best capture transitions between configurations corresponding to local minima. The aim of this paper is to introduce a fully automated machine-learning algorithm for defining a set of cells for use in kinetic sampling methodologies based on subdividing the dynamical feature space; the algorithm requires no intuition about the system or input from the user and scales to high-dimensional systems.
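
    One simple way to realize the cell-definition step (not necessarily the authors' algorithm): cluster sampled configurations in the feature space, let the cluster centers induce Voronoi cells, and count interface crossings along trajectories:

      # k-means centers define Voronoi cells over the dynamical feature
      # space; label changes along a trajectory are interface crossings.
      import numpy as np
      from sklearn.cluster import KMeans

      traj = np.random.rand(5000, 12)           # trajectory in 12D feature space
      km = KMeans(n_clusters=20, n_init=10).fit(traj)
      labels = km.predict(traj)                 # cell index at each frame
      crossings = np.count_nonzero(np.diff(labels))
      print("interface crossing events:", crossings)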

  12. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    NASA Astrophysics Data System (ADS)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial, and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
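
    A sketch of such a kernel comparison with cross-validation (placeholder data; feature and class counts are illustrative):

      # Cross-validated accuracy of SVMs with linear, cubic polynomial,
      # and RBF kernels.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      X = np.random.rand(300, 9)        # e.g., alpha features from 3 dates
      y = np.random.randint(0, 5, 300)  # crop classes
      for kernel, kw in [('linear', {}), ('poly', {'degree': 3}), ('rbf', {})]:
          acc = cross_val_score(SVC(kernel=kernel, **kw), X, y, cv=5).mean()
          print(kernel, round(acc, 3))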

  13. Image processing tool for automatic feature recognition and quantification

    DOEpatents

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
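
    A sketch of that two-stage detection with OpenCV (the thresholds and the probabilistic Hough variant are assumptions; the patent does not specify them):

      # Edge detection followed by a Hough transform for line features.
      import cv2
      import numpy as np

      img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
      edges = cv2.Canny(img, 50, 150)                  # edge detector
      lines = cv2.HoughLinesP(edges, 1, np.pi / 180,   # Hough transform
                              threshold=80, minLineLength=30, maxLineGap=5)
      if lines is not None:
          for x1, y1, x2, y2 in lines.reshape(-1, 4):
              print('line:', (x1, y1), '->', (x2, y2))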

  14. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE

    PubMed Central

    Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+bootstrap validation, 95% empirical confidence intervals were computed for each classifier’s AUC performance. Results: In the large U.S. data set, sample high-performance results include AUC0.632+=0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+=0.87 with interval [0.817;0.906] for four LSW selected features, compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+=0.90 with interval [0.847;0.919], all using the MCMC-BANN. Conclusions: Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space. PMID:20175497

  15. Joining sheet aluminum AA6061-T4 to cast magnesium AM60B by vaporizing foil actuator welding: Input energy, interface, and strength

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Bert; Vivek, Anupam; Daehn, Glenn S.

    Dissimilar joining of sheet aluminum AA6061-T4 to cast magnesium AM60B was achieved by vaporizing foil actuator welding (VFAW). Three input energy levels were used (6, 8, and 10 kJ), and as a trend, higher input energies resulted in progressively higher flyer velocities, more pronounced interfacial wavy features, larger weld zones, higher peel strengths, and higher peel energies. In all cases, the weld cross section revealed a soundly bonded interface characterized by well-developed wavy features and a lack of voids and continuous layers of intermetallic compounds (IMCs). At 10 kJ input energy, a flyer speed of 820 m/s, a peel strength of 22.4 N/mm, and a peel energy of 5.2 J were obtained. In lap-shear, failure occurred in the AA6061-T4 flyer at 97% of the base material's peak tensile load. Peel samples failed along the weld interface, and the AM60B side of the fracture surface showed thin, evenly-spaced lines of Al residuals which had been torn out of the base AA6061-T4 in a ductile fashion and transferred over to the AM60B side, indicating a very strong AA6061-T4/AM60B bond in these areas. Furthermore, this work demonstrates VFAW's capability in joining dissimilar lightweight metals such as Al/Mg, which is expected to be a great enabler in the ongoing push for vehicle weight reduction.

  16. Joining sheet aluminum AA6061-T4 to cast magnesium AM60B by vaporizing foil actuator welding: Input energy, interface, and strength

    DOE PAGES

    Liu, Bert; Vivek, Anupam; Daehn, Glenn S.

    2017-09-19

    Dissimilar joining of sheet aluminum AA6061-T4 to cast magnesium AM60B was achieved by vaporizing foil actuator welding (VFAW). Three input energy levels were used (6, 8, and 10 kJ), and as a trend, higher input energies resulted in progressively higher flyer velocities, more pronounced interfacial wavy features, larger weld zones, higher peel strengths, and higher peel energies. In all cases, the weld cross section revealed a soundly bonded interface characterized by well-developed wavy features and a lack of voids and continuous layers of intermetallic compounds (IMCs). At 10 kJ input energy, a flyer speed of 820 m/s, a peel strength of 22.4 N/mm, and a peel energy of 5.2 J were obtained. In lap-shear, failure occurred in the AA6061-T4 flyer at 97% of the base material's peak tensile load. Peel samples failed along the weld interface, and the AM60B side of the fracture surface showed thin, evenly-spaced lines of Al residuals which had been torn out of the base AA6061-T4 in a ductile fashion and transferred over to the AM60B side, indicating a very strong AA6061-T4/AM60B bond in these areas. Furthermore, this work demonstrates VFAW's capability in joining dissimilar lightweight metals such as Al/Mg, which is expected to be a great enabler in the ongoing push for vehicle weight reduction.

  17. Summary of the key features of seven biomathematical models of human fatigue and performance.

    PubMed

    Mallis, Melissa M; Mejdal, Sig; Nguyen, Tammy T; Dinges, David F

    2004-03-01

    Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers provided published papers describing their models, with three of the models being proprietary. Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbély, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.

  18. Summary of the key features of seven biomathematical models of human fatigue and performance

    NASA Technical Reports Server (NTRS)

    Mallis, Melissa M.; Mejdal, Sig; Nguyen, Tammy T.; Dinges, David F.

    2004-01-01

    BACKGROUND: Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. METHODS: An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. RESULTS: Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers provided published papers describing their models, with three of the models being proprietary. CONCLUSIONS: Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbely, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.

  19. Numerical approach on dynamic self-assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Ibrahimi, Muhamet; Ilday, Serim; Makey, Ghaith; Pavlov, Ihor; Yavuz, Özgàn; Gulseren, Oguz; Ilday, Fatih Omer

    Far-from-equilibrium systems of artificial ensembles are crucial for understanding many intelligent features in self-organized natural systems. However, the lack of an established theory creates a need for numerical implementations. Inspired by a recent experimental work, we simulate a solution-suspended colloidal system that dynamically self-assembles due to convective forces generated in the solvent when heated by a laser. In order to incorporate the random fluctuations of the particles and the continuously changing flow, we exploit a random-walk-based Brownian motion model and a fluid dynamics solver developed for games, respectively. Simulation results fit the experiments and show many quantitative features of a non-equilibrium dynamic self-assembly, including phase space compression and an ensemble-energy input feedback loop.

  20. Self-organizing neural networks--an alternative way of cluster analysis in clinical chemistry.

    PubMed

    Reibnegger, G; Wachter, H

    1996-04-15

    Supervised learning schemes have been employed by several workers for training neural networks designed to solve clinical problems. We demonstrate that unsupervised techniques can also produce interesting and meaningful results. Using a data set on the chemical composition of milk from 22 different mammals, we demonstrate that self-organizing feature maps (Kohonen networks) as well as a modified version of error backpropagation technique yield results mimicking conventional cluster analysis. Both techniques are able to project a potentially multi-dimensional input vector onto a two-dimensional space whereby neighborhood relationships remain conserved. Thus, these techniques can be used for reducing dimensionality of complicated data sets and for enhancing comprehensibility of features hidden in the data matrix.
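
    A minimal self-organizing feature map in numpy, for context (grid size, schedules, and the data are illustrative; packaged implementations such as MiniSom offer the same functionality):

      # Kohonen SOM: move the best-matching unit and its grid neighbors
      # toward each input, with decaying learning rate and neighborhood.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.random((200, 8))                  # e.g., milk-composition vectors
      gw, gh = 10, 10
      W = rng.random((gw * gh, 8))              # one weight vector per node
      gy, gx = np.divmod(np.arange(gw * gh), gw)

      for t in range(2000):
          frac = 1 - t / 2000
          lr, sigma = 0.5 * frac, 3.0 * frac + 0.5
          x = X[rng.integers(len(X))]
          bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
          d2 = (gx - gx[bmu]) ** 2 + (gy - gy[bmu]) ** 2
          h = np.exp(-d2 / (2 * sigma ** 2))            # neighborhood weights
          W += lr * h[:, None] * (x - W)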

  1. Classification and Verification of Handwritten Signatures with Time Causal Information Theory Quantifiers.

    PubMed

    Rosso, Osvaldo A; Ospina, Raydonal; Frery, Alejandro C

    2016-01-01

    We present a new approach for handwritten signature classification and verification based on descriptors stemming from time causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results are better than state-of-the-art online techniques that employ higher-dimensional feature spaces, which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups.
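
    A sketch of one of the six descriptors: normalized permutation entropy over the Bandt-Pompe symbolization of a coordinate time series (order 3 here; real signature coordinates replaced by noise):

      # Bandt-Pompe symbolization: count ordinal patterns of length m,
      # then compute the normalized Shannon entropy of their distribution.
      import numpy as np
      from math import factorial
      from itertools import permutations

      def permutation_entropy(x, m=3):
          counts = {p: 0 for p in permutations(range(m))}
          for i in range(len(x) - m + 1):
              counts[tuple(np.argsort(x[i:i + m]))] += 1
          p = np.array(list(counts.values()), dtype=float)
          p = p[p > 0] / p.sum()
          return -(p * np.log(p)).sum() / np.log(factorial(m))

      x = np.random.default_rng(0).standard_normal(500)  # e.g., pen x-coordinate
      print(permutation_entropy(x))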

  2. Monitoring the Depth of Anesthesia Using a New Adaptive Neurofuzzy System.

    PubMed

    Shalbaf, Ahmad; Saffar, Mohsen; Sleigh, Jamie W; Shalbaf, Reza

    2018-05-01

    Accurate and noninvasive monitoring of the depth of anesthesia (DoA) is highly desirable. Since anesthetic drugs act mainly on the central nervous system, the analysis of brain activity using the electroencephalogram (EEG) is very useful. This paper proposes a novel automated method for assessing the DoA using EEG. First, 11 features including spectral, fractal, and entropy measures are extracted from the EEG signal and then, by applying an exhaustive search over all subsets of features, a combination of the best features (Beta-index, sample entropy, Shannon permutation entropy, and detrended fluctuation analysis) is selected. Accordingly, we feed these extracted features to a new neurofuzzy classification algorithm, the adaptive neurofuzzy inference system with linguistic hedges (ANFIS-LH). This structure can successfully model systems with nonlinear relationships between input and output, and can also classify overlapping classes accurately. ANFIS-LH, which is based on modified classical fuzzy rules, reduces the effects of insignificant features in the input space, which cause overlapping, and modifies the output layer structure. The presented method classifies EEG data into awake, light, general, and deep states during anesthesia with sevoflurane in 17 patients. Its accuracy is 92% compared with a commercial monitoring system (the response entropy index). Moreover, this method reaches a classification accuracy of 93% in categorizing EEG signals into awake and general anesthesia states on another database of propofol and volatile anesthesia in 50 patients. To sum up, this method is potentially applicable to a new real-time monitoring system to help the anesthesiologist with continuous assessment of DoA quickly and accurately.

  3. Focused conformational sampling in proteins

    NASA Astrophysics Data System (ADS)

    Bacci, Marco; Langini, Cassiano; Vymětal, Jiří; Caflisch, Amedeo; Vitalis, Andreas

    2017-11-01

    A detailed understanding of the conformational dynamics of biological molecules is difficult to obtain by experimental techniques due to resolution limitations in both time and space. Computer simulations avoid these in theory but are often too short to sample rare events reliably. Here we show that the progress index-guided sampling (PIGS) protocol can be used to enhance the sampling of rare events in selected parts of biomolecules without perturbing the remainder of the system. The method is very easy to use as it only requires as essential input a set of several features representing the parts of interest sufficiently. In this feature space, new states are discovered by spontaneous fluctuations alone and in unsupervised fashion. Because there are no energetic biases acting on phase space variables or projections thereof, the trajectories PIGS generates can be analyzed directly in the framework of transition networks. We demonstrate the possibility and usefulness of such focused explorations of biomolecules with two loops that are part of the binding sites of bromodomains, a family of epigenetic "reader" modules. This real-life application uncovers states that are structurally and kinetically far away from the initial crystallographic structures and are also metastable. Representative conformations are intended to be used in future high-throughput virtual screening campaigns.

  4. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. Input data sets for the algorithm are learning data sets of facial images rated by one person. The proposed approach allows one to extract features of the individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimation values equals 0.89. This means that the new approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.

  5. Access to Space Interactive Design Web Site

    NASA Technical Reports Server (NTRS)

    Leon, John; Cutlip, William; Hametz, Mark

    2000-01-01

    The Access To Space (ATS) Group at NASA's Goddard Space Flight Center (GSFC) supports the science and technology community at GSFC by facilitating frequent and affordable opportunities for access to space. Through partnerships established with access mode suppliers, the ATS Group has developed an interactive Mission Design web site. The ATS web site provides both the information and the tools necessary to assist mission planners in selecting and planning their ride to space. This includes the evaluation of single payloads vs. ride-sharing opportunities to reduce the cost of access to space. Features of this site include the following: (1) Mission Database. Our mission database contains a listing of missions ranging from proposed to manifested. Missions can be entered by our user community through data input tools. Data are then accessed by users through various search engines: orbit parameters, ride-share opportunities, spacecraft parameters, other mission notes, launch vehicle, and contact information. (2) Launch Vehicle Toolboxes. The launch vehicle toolboxes provide the user a full range of information on vehicle classes and individual configurations. Topics include: general information, environments, performance, payload interface, available volume, and launch sites.

  6. Uncovering Spatial Variation in Acoustic Environments Using Sound Mapping.

    PubMed

    Job, Jacob R; Myers, Kyle; Naghshineh, Koorosh; Gill, Sharon A

    2016-01-01

    Animals select and use habitats based on environmental features relevant to their ecology and behavior. For animals that use acoustic communication, the sound environment itself may be a critical feature, yet acoustic characteristics are not commonly measured when describing habitats and, as a result, how habitats vary acoustically over space and time is poorly known. Such considerations are timely, given worldwide increases in anthropogenic noise combined with rapidly accumulating evidence that noise hampers the ability of animals to detect and interpret natural sounds. Here, we used microphone arrays to record the sound environment in three terrestrial habitats (forest, prairie, and urban) under ambient conditions and during experimental noise introductions. We mapped sound pressure levels (SPLs) over spatial scales relevant to diverse taxa to explore spatial variation in acoustic habitats and to evaluate the number of microphones needed within arrays to capture this variation under both ambient and noisy conditions. Even at small spatial scales and over relatively short time spans, SPLs varied considerably, especially in forest and urban habitats, suggesting that quantifying and mapping acoustic features could improve habitat descriptions. Subset maps based on input from 4, 8, 12 and 16 microphones differed slightly (< 2 dBA/pixel) from those based on full arrays of 24 microphones under ambient conditions across habitats. Map differences were more pronounced with noise introductions, particularly in forests; maps made from only 4 microphones differed more (> 4 dBA/pixel) from full maps than the remaining subset maps, but maps with input from eight microphones resulted in smaller differences. Thus, acoustic environments varied over small spatial scales and variation could be mapped with input from 4-8 microphones. Mapping sound in different environments will improve understanding of acoustic environments and allow us to explore the influence of spatial variation in sound on animal ecology and behavior.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laros, James H.; Grant, Ryan; Levenhagen, Michael J.

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  8. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    PubMed

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal transmission. This is the first application of a deterministic state-space model to represent the discharge characteristics of motor units during voluntary contractions. Copyright © 2017 the American Physiological Society.
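
    A minimal generative sketch of the setting described above: one latent common input drives a pool of motor units, each discharging as an inhomogeneous Poisson process. The rates, gains, and latent trajectory below are invented for illustration, and the paper's actual inference step (recovering the latent from spikes) is not reproduced.

```python
# Toy forward model only: a common latent input plus synaptic noise sets
# time-varying rates, and each motor unit spikes as an inhomogeneous
# Poisson process (Bernoulli thinning per small time bin).
import numpy as np

rng = np.random.default_rng(1)
dt, T, n_neurons = 0.001, 5.0, 10          # 1 ms bins, 5 s, 10 motor units
t = np.arange(0, T, dt)

# Latent common input: a slow drive plus synaptic noise (arbitrary units).
latent = 5.0 + 3.0 * np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 0.3, t.size)

gains = rng.uniform(0.5, 1.5, n_neurons)            # per-neuron sensitivity
rates = np.outer(gains, np.clip(latent, 0, None))   # firing rates in Hz

# One Bernoulli draw per bin approximates the Poisson process.
spikes = rng.random((n_neurons, t.size)) < rates * dt
print("spike counts per neuron:", spikes.sum(axis=1))
```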

  9. Quantum walks and wavepacket dynamics on a lattice with twisted photons.

    PubMed

    Cardano, Filippo; Massa, Francesco; Qassim, Hammam; Karimi, Ebrahim; Slussarenko, Sergei; Paparo, Domenico; de Lisio, Corrado; Sciarrino, Fabio; Santamato, Enrico; Boyd, Robert W; Marrucci, Lorenzo

    2015-03-01

    The "quantum walk" has emerged recently as a paradigmatic process for the dynamic simulation of complex quantum systems, entanglement production and quantum computation. Hitherto, photonic implementations of quantum walks have mainly been based on multipath interferometric schemes in real space. We report the experimental realization of a discrete quantum walk taking place in the orbital angular momentum space of light, both for a single photon and for two simultaneous photons. In contrast to previous implementations, the whole process develops in a single light beam, with no need of interferometers; it requires optical resources scaling linearly with the number of steps; and it allows flexible control of input and output superposition states. Exploiting the latter property, we explored the system band structure in momentum space and the associated spin-orbit topological features by simulating the quantum dynamics of Gaussian wavepackets. Our demonstration introduces a novel versatile photonic platform for quantum simulations.

  10. Quantum walks and wavepacket dynamics on a lattice with twisted photons

    PubMed Central

    Cardano, Filippo; Massa, Francesco; Qassim, Hammam; Karimi, Ebrahim; Slussarenko, Sergei; Paparo, Domenico; de Lisio, Corrado; Sciarrino, Fabio; Santamato, Enrico; Boyd, Robert W.; Marrucci, Lorenzo

    2015-01-01

    The “quantum walk” has emerged recently as a paradigmatic process for the dynamic simulation of complex quantum systems, entanglement production and quantum computation. Hitherto, photonic implementations of quantum walks have mainly been based on multipath interferometric schemes in real space. We report the experimental realization of a discrete quantum walk taking place in the orbital angular momentum space of light, both for a single photon and for two simultaneous photons. In contrast to previous implementations, the whole process develops in a single light beam, with no need of interferometers; it requires optical resources scaling linearly with the number of steps; and it allows flexible control of input and output superposition states. Exploiting the latter property, we explored the system band structure in momentum space and the associated spin-orbit topological features by simulating the quantum dynamics of Gaussian wavepackets. Our demonstration introduces a novel versatile photonic platform for quantum simulations. PMID:26601157

  11. Modelling, analyses and design of switching converters

    NASA Technical Reports Server (NTRS)

    Cuk, S. M.; Middlebrook, R. D.

    1978-01-01

    A state-space averaging method for modelling switching dc-to-dc converters for both continuous and discontinuous conduction mode is developed. In each case the starting point is the unified state-space representation, and the end result is a complete linear circuit model, for each conduction mode, which correctly represents all essential features, namely, the input, output, and transfer properties (static dc as well as dynamic ac small-signal). While the method is generally applicable to any switching converter, it is extensively illustrated for the three common power stages (buck, boost, and buck-boost). The results for these converters are then easily tabulated owing to the fixed equivalent circuit topology of their canonical circuit model. The insights that emerge from the general state-space modelling approach lead to the design of new converter topologies through the study of generic properties of the cascade connection of basic buck and boost converters.
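
    The core averaging step can be made concrete with a toy example. Under the usual assumptions (ideal switches, continuous conduction), the model averaged over one switching period with duty ratio d is A = d*A1 + (1-d)*A2 and B = d*B1 + (1-d)*B2; the sketch below applies this to an ideal buck converter with arbitrary component values, not to the paper's general derivation.

```python
# State-space averaging for an ideal buck converter in continuous
# conduction. State x = [inductor current, capacitor voltage]; the dc
# operating point follows from 0 = A x + B Vg. Component values are
# arbitrary illustrations.
import numpy as np

L_, C_, R_, Vg, d = 100e-6, 100e-6, 10.0, 12.0, 0.4

A1 = A2 = np.array([[0.0, -1.0 / L_],          # same dynamics matrix in
                    [1.0 / C_, -1.0 / (R_ * C_)]])  # both switch states
B1 = np.array([[1.0 / L_], [0.0]])             # switch on: input connected
B2 = np.array([[0.0], [0.0]])                  # switch off: freewheeling

A = d * A1 + (1 - d) * A2                      # duty-ratio-weighted average
B = d * B1 + (1 - d) * B2

x_dc = np.linalg.solve(-A, B * Vg)             # steady state: 0 = A x + B Vg
print(f"averaged dc output voltage: {x_dc[1, 0]:.2f} V (expected {d * Vg})")
```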

  12. Differences in Visual-Spatial Input May Underlie Different Compression Properties of Firing Fields for Grid Cell Modules in Medial Entorhinal Cortex

    PubMed Central

    Raudies, Florian; Hasselmo, Michael E.

    2015-01-01

    Firing fields of grid cells in medial entorhinal cortex show compression or expansion after manipulations of the location of environmental barriers. This compression or expansion could be selective for individual grid cell modules with particular properties of spatial scaling. We present a model for differences in the response of modules to barrier location that arise from different mechanisms for the influence of visual features on the computation of location that drives grid cell firing patterns. These differences could arise from differences in the position of visual features within the visual field. When location was computed from the movement of visual features on the ground plane (optic flow) in the ventral visual field, this resulted in grid cell spatial firing that was not sensitive to barrier location in modules modeled with small spacing between grid cell firing fields. In contrast, when location was computed from static visual features on walls of barriers, i.e. in the more dorsal visual field, this resulted in grid cell spatial firing that compressed or expanded based on the barrier locations in modules modeled with large spacing between grid cell firing fields. This indicates that different grid cell modules might have differential properties for computing location based on visual cues, or the spatial radius of sensitivity to visual cues might differ between modules. PMID:26584432

  13. Response monitoring using quantitative ultrasound methods and supervised dictionary learning in locally advanced breast cancer

    NASA Astrophysics Data System (ADS)

    Gangeh, Mehrdad J.; Fung, Brandon; Tadayyon, Hadi; Tran, William T.; Czarnota, Gregory J.

    2016-03-01

    A non-invasive computer-aided-theragnosis (CAT) system was developed for the early assessment of responses to neoadjuvant chemotherapy in patients with locally advanced breast cancer. The CAT system was based on quantitative ultrasound spectroscopy methods comprising several modules, including feature extraction, a metric to measure the dissimilarity between "pre-" and "mid-treatment" scans, and a supervised learning algorithm for the classification of patients into responders/non-responders. One major requirement for the successful design of a high-performance CAT system is to accurately measure the changes in parametric maps before treatment onset and during the course of treatment. To this end, a unified framework based on the Hilbert-Schmidt independence criterion (HSIC) was used for the design of feature extraction from parametric maps and the dissimilarity measure between the "pre-" and "mid-treatment" scans. For the feature extraction, HSIC was used to design a supervised dictionary learning (SDL) method by maximizing the dependency between the scans taken from "pre-" and "mid-treatment" with "dummy labels" given to the scans. For the dissimilarity measure, an HSIC-based metric was employed to effectively measure the changes in parametric maps as an indication of treatment effectiveness. The HSIC-based feature extraction and dissimilarity measure used a kernel function to nonlinearly transform input vectors into a higher-dimensional feature space and computed the population means in the new space, where enhanced group separability was ideally obtained. The results of the classification using the developed CAT system indicated an improvement in performance compared to a CAT system with basic features using a histogram of intensity.
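
    Since HSIC is the workhorse of both the feature extraction and the dissimilarity measure above, here is a minimal numpy sketch of the empirical (biased) HSIC estimator, HSIC = tr(KHLH)/(n-1)^2, with Gaussian kernels; the bandwidth and the toy "scan" vectors are assumptions, not the paper's parametric-map features.

```python
# Empirical HSIC with Gaussian kernels: larger values indicate stronger
# statistical dependence between the two sets of vectors.
import numpy as np

def gaussian_gram(X, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = gaussian_gram(X, sigma), gaussian_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(2)
pre = rng.normal(size=(50, 8))                    # "pre-treatment" features (toy)
mid_dep = pre + 0.1 * rng.normal(size=pre.shape)  # strongly dependent scans
mid_ind = rng.normal(size=pre.shape)              # independent scans

print(hsic(pre, mid_dep), ">", hsic(pre, mid_ind))
```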

  14. Assessing efficiency of spatial sampling using combined coverage analysis in geographical and feature spaces

    NASA Astrophysics Data System (ADS)

    Hengl, Tomislav

    2015-04-01

    Efficiency of spatial sampling largely determines the success of model building. This is especially important for geostatistical mapping, where an initial sampling plan should provide a good representation or coverage of both geographical space (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets which are produced using unknown or inconsistent sampling algorithms. Many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how can these 'representation' problems be quantified and incorporated into model building? The author has developed a generic function called 'spsample.prob' (Global Soil Information Facilities package for R) which simultaneously determines (effective) inclusion probabilities as an average between a kernel density estimate (geographical spreading of points; analysed using the spatstat package in R) and a MaxEnt analysis (feature space spreading of points; analysed using the MaxEnt software used primarily for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling, e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with accessibility analysis (costs of field surveys are usually a function of distance from the road network, slope, and land cover) to allow for simultaneous maximization of average inclusion probabilities and minimization of total survey costs. The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature space analysis, and by comparing survey costs to representation efficiency, an optimal initial sampling plan can be produced which satisfies both criteria: (a) good representation (i.e. within a tolerance threshold), and (b) minimized survey costs. This sampling analysis framework could become especially interesting for generating sampling plans in new areas, e.g. those for which no previous spatial prediction model exists. The presentation includes data processing demos with the standard soil sampling data sets Ebergotzen (Germany) and Edgeroi (Australia), also available via the GSIF package.
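
    A rough approximation of that combined-coverage logic is sketched below, with scipy's gaussian_kde standing in for both the spatstat kernel density and the MaxEnt feature-space analysis, so this is an analogue of 'spsample.prob', not a port of it; the point locations and covariates are synthetic.

```python
# Average a geographic density of the sample points with a feature-space
# density, then read off relative inclusion probabilities per point.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, size=(2, 100))        # sampling locations (x, y)
covariates = rng.normal(size=(3, 100))        # covariate values at the points

geo = gaussian_kde(xy)(xy)                    # geographic spreading of points
feat = gaussian_kde(covariates)(covariates)   # feature-space spreading

# Normalize each component to [0, 1] and average, mimicking the 'iprob' idea.
iprob = 0.5 * (geo / geo.max() + feat / feat.max())
print("most redundant point:", iprob.argmax(),
      "worst-covered point:", iprob.argmin())
```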

  15. Classification and Verification of Handwritten Signatures with Time Causal Information Theory Quantifiers

    PubMed Central

    Ospina, Raydonal; Frery, Alejandro C.

    2016-01-01

    We present a new approach for handwritten signature classification and verification based on descriptors stemming from time causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results are better than state-of-the-art online techniques that employ higher-dimensional feature spaces, which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups. PMID:27907014
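
    As an illustration of one of the six descriptors, the sketch below computes a normalized permutation entropy over the Bandt-Pompe symbolization of a toy coordinate series; the embedding dimension and the random-walk "pen trajectory" are arbitrary choices, not the paper's settings.

```python
# Bandt-Pompe symbolization: each window of d samples is mapped to the
# permutation that sorts it; the Shannon entropy of the resulting pattern
# distribution (normalized by log d!) is the descriptor.
import numpy as np
from itertools import permutations
from math import factorial

def permutation_entropy(x, d=4):
    patterns = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        patterns[tuple(np.argsort(x[i:i + d]))] += 1   # ordinal pattern
    p = np.array([c for c in patterns.values() if c > 0], dtype=float)
    p /= p.sum()
    return -(p * np.log(p)).sum() / np.log(factorial(d))  # in [0, 1]

rng = np.random.default_rng(4)
x_coord = np.cumsum(rng.normal(size=500))     # toy pen x-trajectory
print(f"normalized permutation entropy: {permutation_entropy(x_coord):.3f}")
```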

  16. Three-dimensional perspective software for representation of digital imagery data. [Olympic National Park, Washington

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1980-01-01

    A generalized three dimensional perspective software capability was developed within the framework of a low cost computer oriented geographically based information system using the Earth Resources Laboratory Applications Software (ELAS) operating subsystem. This perspective software capability, developed primarily to support data display requirements at the NASA/NSTL Earth Resources Laboratory, provides a means of displaying three dimensional feature space object data in two dimensional picture plane coordinates and makes it possible to overlay different types of information on perspective drawings to better understand the relationship of physical features. An example topographic data base is constructed and is used as the basic input to the plotting module. Examples are shown which illustrate oblique viewing angles that convey spatial concepts and relationships represented by the topographic data planes.

  17. FormTracer. A Mathematica tracing package using FORM

    NASA Astrophysics Data System (ADS)

    Cyrol, Anton K.; Mitter, Mario; Strodthoff, Nils

    2017-10-01

    We present FormTracer, a high-performance, general-purpose, easy-to-use Mathematica tracing package which uses FORM. It supports arbitrary space and spinor dimensions as well as an arbitrary number of simple compact Lie groups. While keeping the usability of the Mathematica interface, it relies on the efficiency of FORM. An additional performance gain is achieved by a decomposition algorithm that avoids redundant traces in the product tensor spaces. FormTracer supports a wide range of syntaxes, which endows it with high flexibility. Mathematica notebooks that automatically install the package and guide the user through performing standard traces in space-time, spinor and gauge-group spaces are provided. Program Files doi:http://dx.doi.org/10.17632/7rd29h4p3m.1 Licensing provisions: GPLv3 Programming language: Mathematica and FORM Nature of problem: Efficiently compute traces of large expressions Solution method: The expression to be traced is decomposed into its subspaces by a recursive Mathematica expansion algorithm. The result is subsequently translated to a FORM script that takes the traces. After FORM is executed, the final result is either imported into Mathematica or exported as optimized C/C++/Fortran code. Unusual features: The outstanding features of FormTracer are the simple interface, the capability to efficiently handle an arbitrary number of Lie groups in addition to Dirac and Lorentz tensors, and a customizable input syntax.

  18. A trainable decisions-in decision-out (DEI-DEO) fusion system

    NASA Astrophysics Data System (ADS)

    Dasarathy, Belur V.

    1998-03-01

    Most of the decision fusion systems proposed hitherto in the literature for multiple data source (sensor) environments operate on the basis of pre-defined fusion logic, be they crisp (deterministic), probabilistic, or fuzzy in nature, with no specific learning phase. The fusion systems that are trainable, i.e., ones that have a learning phase, mostly operate in the features-in-decision-out mode, which essentially reduces the fusion process functionally to a pattern classification task in the joint feature space. In this study, a trainable decisions-in-decision-out fusion system is described which estimates a fuzzy membership distribution spread across the different decision choices based on the performance of the different decision processors (sensors) corresponding to each training sample (object) which is associated with a specific ground truth (true decision). Based on a multi-decision space histogram analysis of the performance of the different processors over the entire training data set, a look-up table associating each cell of the histogram with a specific true decision is generated which forms the basis for the operational phase. In the operational phase, for each set of decision inputs, a pointer to the look-up table learnt previously is generated from which a fused decision is derived. This methodology, although primarily designed for fusing crisp decisions from the multiple decision sources, can be adapted for fusion of fuzzy decisions as well if such are the inputs from these sources. Examples, which illustrate the benefits and limitations of the crisp and fuzzy versions of the trainable fusion systems, are also included.
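
    A toy rendering of the training and operational phases follows, reduced to crisp decisions and a majority-vote look-up table; the processor count, decision alphabet, and accuracies are invented, and the fuzzy-membership spread of the actual system is omitted.

```python
# Training phase: histogram the joint decision cells of several processors
# over labeled data and map each cell to its majority ground truth.
# Operational phase: look up the fused decision for each new decision tuple.
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(7)
truth = rng.integers(0, 3, 1000)                 # ground-truth decisions
# Three imperfect processors, each agreeing with the truth ~70% of the time.
decisions = np.where(rng.random((3, 1000)) < 0.7,
                     truth, rng.integers(0, 3, (3, 1000)))

table = defaultdict(Counter)
for cell, t in zip(map(tuple, decisions.T), truth):
    table[cell][t] += 1                          # multi-decision histogram

lookup = {cell: c.most_common(1)[0][0] for cell, c in table.items()}

fused = np.array([lookup[tuple(d)] for d in decisions.T])  # operational phase
print(f"fused accuracy: {(fused == truth).mean():.2f}")
```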

  19. Multiclassifier information fusion methods for microarray pattern recognition

    NASA Astrophysics Data System (ADS)

    Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel

    2004-04-01

    This paper addresses automatic recognition of microarray patterns, a capability that could have major significance for medical diagnostics, enabling development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. The input space partitioning approach based on fitness measures that constitute an a-priori gauging of classification efficacy for each subspace is investigated. Methods for generation of fitness measures, generation of input subspaces, and their use in the multiclassifier fusion architecture are presented. In particular, two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. Individual-subspace classifiers are Support Vector Machine based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion stage techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.

  20. Fusion of infrared and visible images based on saliency scale-space in frequency domain

    NASA Astrophysics Data System (ADS)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2015-12-01

    A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. The focus of human attention is directed towards salient targets, which carry the most important information in the image. For the given registered infrared and visible images, firstly, visual features are extracted to obtain the input hypercomplex matrix. Secondly, the Hypercomplex Fourier Transform (HFT) is used to obtain the salient regions of the infrared and visible images respectively: the amplitude spectrum of the input hypercomplex matrix is convolved with a low-pass Gaussian kernel of an appropriate scale, which is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2D signal using the original phase and the amplitude spectrum, filtered at a scale selected by minimizing saliency map entropy. Thirdly, the salient regions are fused with adaptive weighting fusion rules, and the nonsalient regions are fused with a rule based on region energy (RE) and region sharpness (RS); the fused image is then obtained. Experimental results show that the presented algorithm preserves the rich spectral information of the visible image and effectively captures the thermal target information of the infrared image at different scales.
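
    A heavily simplified single-channel sketch of the frequency-domain saliency step is given below: smooth the amplitude spectrum with a Gaussian kernel and reconstruct with the original phase. The hypercomplex (multi-feature) matrix and the entropy-based scale selection of the actual algorithm are omitted, and the scale is fixed by hand.

```python
# Spectrum-smoothing saliency on a single grayscale channel: low-pass the
# amplitude spectrum, keep the original phase, and reconstruct.
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_saliency(img, scale=3.0):
    F = np.fft.fft2(img)
    amp, phase = np.abs(F), np.angle(F)
    amp_s = gaussian_filter(amp, scale)           # low-pass the spectrum
    sal = np.abs(np.fft.ifft2(amp_s * np.exp(1j * phase))) ** 2
    return gaussian_filter(sal, 2.0)              # light spatial smoothing

rng = np.random.default_rng(5)
frame = rng.random((128, 128))
frame[40:60, 40:60] += 3.0                        # a bright "thermal target"
peak = np.unravel_index(spectral_saliency(frame).argmax(), (128, 128))
print("peak saliency at:", peak)
```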

  1. Optical Associative Processors For Visual Perception

    NASA Astrophysics Data System (ADS)

    Casasent, David; Telfer, Brian

    1988-05-01

    We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required and thus heteroassociative memories are necessary (rather than the autoassociative memories that have been given most attention). We analyze the performance of both associative processors and note that there is considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as: distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor and with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (which is achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space and symbolic data, as well as adaptive associative processors.

  2. Use of an engineering data management system in the analysis of space shuttle orbiter tiles

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Vallas, M.

    1981-01-01

    The use of an engineering data management system to facilitate the extensive stress analyses of the space shuttle orbiter thermal protection system is demonstrated. The methods used to gather, organize, and store the data; to query data interactively; to generate graphic displays of the data; and to access, transform, and prepare the data for input to a stress analysis program are described. Information related to many separate tiles can be accessed individually from the data base which has a natural organization from an engineering viewpoint. The flexible user features of the system facilitate changes in data content and organization which occur during the development and refinement of the tile analysis procedure. Additionally, the query language supports retrieval of data to satisfy a variety of user-specified conditions.

  3. Topographic mapping--the olfactory system.

    PubMed

    Imai, Takeshi; Sakano, Hitoshi; Vosshall, Leslie B

    2010-08-01

    Sensory systems must map accurate representations of the external world in the brain. Although the physical senses of touch and vision build topographic representations of the spatial coordinates of the body and the field of view, the chemical sense of olfaction maps discontinuous features of chemical space, comprising an extremely large number of possible odor stimuli. In both mammals and insects, olfactory circuits are wired according to the convergence of axons from sensory neurons expressing the same odorant receptor. Synapses are organized into distinctive spherical neuropils--the olfactory glomeruli--that connect sensory input with output neurons and local modulatory interneurons. Although there is a strong conservation of form in the olfactory maps of mammals and insects, they arise using divergent mechanisms. Olfactory glomeruli provide a unique solution to the problem of mapping discontinuous chemical space onto the brain.

  4. Water recovery and management test support modeling for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Mohamadinejad, Habib; Bacskay, Allen S.

    1990-01-01

    The water-recovery and management (WRM) subsystem proposed for the Space Station Freedom program is outlined, and its computerized modeling and simulation based on a Computer Aided System Engineering and Analysis (CASE/A) program are discussed. A WRM test model consisting of a pretreated urine processing (TIMES), hygiene water processing (RO), RO brine processing using TIMES, and hygiene water storage is presented. Attention is drawn to such end-user equipment characteristics as the shower, dishwasher, clotheswasher, urine-collection facility, and handwash. The transient behavior of pretreated-urine, RO waste-hygiene, and RO brine tanks is assessed, as well as the total input/output to or from the system. The model is considered to be beneficial for pretest analytical predictions as a program cost-saving feature.

  5. Space Tug Avionics Definition Study. Volume 5: Cost and Programmatics

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The baseline avionics system features a central digital computer that integrates the functions of all the space tug subsystems by means of a redundant digital data bus. The central computer consists of dual central processor units, dual input/output processors, and a fault tolerant memory, utilizing internal redundancy and error checking. Three electronically steerable phased arrays provide downlink transmission from any tug attitude directly to ground or via TDRS. Six laser gyros and six accelerometers in a dodecahedron configuration make up the inertial measurement unit. Both a scanning laser radar and a TV system, employing strobe lamps, are required as acquisition and docking sensors. Primary dc power at a nominal 28 volts is supplied from dual lightweight, thermally integrated fuel cells which operate from propellant grade reactants out of the main tanks.

  6. Pattern classification using an olfactory model with PCA feature selection in electronic noses: study and application.

    PubMed

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically-inspired models and algorithms are considered as promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with the increase of the dimensions of the input feature vector (outer factor) as well as its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China were used for experiments. In the former case the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6-8 channels of the model with principal component feature vector values of at least 90% cumulative variance is adequate for a classification task of 3-5 pattern classes, considering the trade-off between time consumption and classification rate.
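
    The feature-selection rule described above reduces to a simple recipe: keep the smallest number of leading principal components whose cumulative explained variance reaches 90%. A sketch with placeholder e-nose data (array sizes are assumptions):

```python
# Select the smallest k such that the first k principal components explain
# at least 90% of the variance, then use those k scores as the feature vector.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
responses = rng.normal(size=(150, 16))      # 150 sniffs x 16 sensors (toy)

pca = PCA().fit(responses)
cumvar = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumvar, 0.90)) + 1  # smallest k reaching 90%
features = PCA(n_components=k).fit_transform(responses)
print(f"kept {k} components, cumulative variance {cumvar[k - 1]:.2%}")
```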

  7. Feature Interactions Enable Decoding of Sensorimotor Transformations for Goal-Directed Movement

    PubMed Central

    Barany, Deborah A.; Della-Maggiore, Valeria; Viswanathan, Shivakumar; Cieslak, Matthew

    2014-01-01

    Neurophysiology and neuroimaging evidence shows that the brain represents multiple environmental and body-related features to compute transformations from sensory input to motor output. However, it is unclear how these features interact during goal-directed movement. To investigate this issue, we examined the representations of sensory and motor features of human hand movements within the left-hemisphere motor network. In a rapid event-related fMRI design, we measured cortical activity as participants performed right-handed movements at the wrist, with either of two postures and two amplitudes, to move a cursor to targets at different locations. Using a multivoxel analysis technique with rigorous generalization tests, we reliably distinguished representations of task-related features (primarily target location, movement direction, and posture) in multiple regions. In particular, we identified an interaction between target location and movement direction in the superior parietal lobule, which may underlie a transformation from the location of the target in space to a movement vector. In addition, we found an influence of posture on primary motor, premotor, and parietal regions. Together, these results reveal the complex interactions between different sensory and motor features that drive the computation of sensorimotor transformations. PMID:24828640

  8. Input Decimated Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore input decimation (ID), a method which selects feature subsets for their ability to discriminate among the classes and uses them to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Proben1/UCI repositories, and two synthetic data sets. The results indicate that input decimated ensembles (IDEs) outperform ensembles whose base classifiers use all the input features; randomly selected subsets of features; and features created using principal components analysis, on a wide range of domains.
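
    A rough sketch of the input-decimation idea under stated assumptions (absolute correlation with a one-vs-rest label as the discrimination score, plain majority voting); the original work pairs subsets with classes more carefully, so treat this as the flavor rather than the method.

```python
# Give each base classifier a different class-discriminative feature subset,
# then fuse by majority vote over the base predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                           n_classes=3, random_state=0)

models, subsets = [], []
for c in range(3):
    # Score each feature by |correlation| with the class-c indicator.
    score = np.abs(np.corrcoef(X.T, (y == c).astype(float))[-1, :-1])
    idx = np.argsort(score)[-10:]            # class-c discriminative subset
    subsets.append(idx)
    models.append(LogisticRegression(max_iter=500).fit(X[:, idx], y))

votes = np.stack([m.predict(X[:, s]) for m, s in zip(models, subsets)])
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print(f"ensemble training accuracy: {(pred == y).mean():.2f}")
```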

  9. A nonlinear control scheme based on dynamic evolution path theory for improved dynamic performance of boost PFC converter working on nonlinear features.

    PubMed

    Mohanty, Pratap Ranjan; Panda, Anup Kumar

    2016-11-01

    This paper is concerned with improving the performance of a boost PFC converter under large random load fluctuations, ensuring unity power factor (UPF) at the source end and regulated voltage at the load side. To obtain such performance, a nonlinear controller based on dynamic evolution path theory is designed and its robustness is examined under both heavy and light loading conditions. The %THD and the zero-crossover dead-zone of the input current are significantly reduced, and the response times of the input current and output voltage to load and reference variations are very short. A simulation model of the proposed system is designed and realized using a dSPACE 1104 signal processor for a 390 V DC, 500 W prototype. The relevant experimental and simulation waveforms are presented. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Virtual Earth System Laboratory (VESL): Effective Visualization of Earth System Data and Process Simulations

    NASA Astrophysics Data System (ADS)

    Quinn, J. D.; Larour, E. Y.; Cheng, D. L. C.; Halkides, D. J.

    2016-12-01

    The Virtual Earth System Laboratory (VESL) is a Web-based tool, under development at the Jet Propulsion Laboratory and UC Irvine, for the visualization of Earth System data and process simulations. It contains features geared toward a range of applications, spanning research and outreach. It offers an intuitive user interface, in which model inputs are changed using sliders and other interactive components. Current capabilities include simulation of polar ice sheet responses to climate forcing, based on NASA's Ice Sheet System Model (ISSM). We believe that the visualization of data is most effective when tailored to the target audience, and that many of the best practices for modern Web design/development can be applied directly to the visualization of data: use of negative space, color schemes, typography, accessibility standards, tooltips, et cetera. We present our prototype website, and invite input from potential users, including researchers, educators, and students.

  11. Model reduction of nonsquare linear MIMO systems using multipoint matrix continued-fraction expansions

    NASA Technical Reports Server (NTRS)

    Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San

    1994-01-01

    This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.

  12. Variable input observer for state estimation of high-rate dynamics

    NASA Astrophysics Data System (ADS)

    Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob

    2017-04-01

    High-rate systems operating in the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); 2) rapid environmental changes; and 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds. When systems experience high-rate dynamics, rapid changes in the system occur. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimations than a traditional fixed input space strategy.

  13. Surprise! Infants consider possible bases of generalization for a single input example.

    PubMed

    Gerken, LouAnn; Dawson, Colin; Chatila, Razanne; Tenenbaum, Josh

    2015-01-01

    Infants have been shown to generalize from a small number of input examples. However, existing studies allow two possible means of generalization. One is via a process of noting similarities shared by several examples. Alternatively, generalization may reflect an implicit desire to explain the input. The latter view suggests that generalization might occur when even a single input example is surprising, given the learner's current model of the domain. To test the possibility that infants are able to generalize based on a single example, we familiarized 9-month-olds with a single three-syllable input example that contained either one surprising feature (syllable repetition, Experiment 1) or two features (repetition and a rare syllable, Experiment 2). In both experiments, infants generalized only to new strings that maintained all of the surprising features from familiarization. This research suggests that surprise can promote very rapid generalization. © 2014 John Wiley & Sons Ltd.

  14. Systems and methods for predicting materials properties

    DOEpatents

    Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano

    2007-11-06

    Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.

  15. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
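
    An illustrative least-squares version of the "linear straight line estimator" described above, fitting input power as a linear function of the digital AGC reading and temperature; the coefficients and telemetry values below are synthetic placeholders, not SCAN Testbed characterization data.

```python
# Fit p_in ~ a*agc + b*temp + c over a restricted power range, then apply
# the fitted line as the estimator. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(8)
agc = rng.uniform(10, 50, 300)                 # digital AGC readings (toy)
temp = rng.uniform(15, 45, 300)                # SDR temperature, deg C (toy)
p_in = -120 + 0.9 * agc - 0.05 * temp + rng.normal(0, 0.2, 300)  # dBm

A = np.column_stack([agc, temp, np.ones_like(agc)])
coef, *_ = np.linalg.lstsq(A, p_in, rcond=None)

estimate = A @ coef
rmse = np.sqrt(np.mean((estimate - p_in) ** 2))
print(f"rms estimation error: {rmse:.2f} dB")
```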

  16. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  17. Constraint programming based biomarker optimization.

    PubMed

    Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng

    2015-01-01

    Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data is proven to be one of the successful solutions. Most of the existing feature selection algorithms do not allow interactive inputs from users during the optimization process of feature selection. This study investigates this question by fixing a few user-input features in the finally selected feature subset and formulating these user-input features as constraints for a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than the existing feature selection algorithms, even with the constraints from both literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
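
    A sketch of the constrained-selection idea under stated assumptions: a greedy forward selector whose candidate subsets always contain the user-fixed features, scored by cross-validated accuracy. The actual fsCoP method solves a constrained programming model, so this is only an analogous baseline, and the data and fixed indices are invented.

```python
# Greedy forward feature selection with user-fixed features kept in every
# candidate subset, scored by 3-fold cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           random_state=0)
fixed = [3, 17]                                    # user-input biomarkers (toy)

selected = list(fixed)                             # constraints always included
while len(selected) < 8:
    best, best_score = None, -np.inf
    for j in set(range(X.shape[1])) - set(selected):
        score = cross_val_score(LogisticRegression(max_iter=500),
                                X[:, selected + [j]], y, cv=3).mean()
        if score > best_score:
            best, best_score = j, score
    selected.append(best)

print("selected features (user-fixed first):", selected)
```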

  18. Identification of input variables for feature based artificial neural networks-saccade detection in EOG recordings.

    PubMed

    Tigges, P; Kathmann, N; Engel, R R

    1997-07-01

    Though artificial neural networks (ANNs) are excellent tools for pattern recognition problems when the signal-to-noise ratio is low, the identification of decision-relevant features for ANN input data is still a crucial issue. The experience of the ANN designer and the existing knowledge and understanding of the problem seem to be the only guides for a specific construction. In the present study a backpropagation ANN based on modified raw data inputs showed encouraging results. Investigating the specific influences of prototypical input patterns on a specially designed ANN led to a new sparse and efficient input data presentation. This data coding, obtained by a semiautomatic procedure combining existing expert knowledge and the internal representation structures of the raw-data-based ANN, yielded a list of feature vectors, each representing the relevant information for saccade identification. The feature-based ANN produced a reduction of the error rate of nearly 40% compared with the raw data ANN. An overall correct classification of 92% of previously unseen data was realized. The proposed method of extracting internal ANN knowledge for the production of a better input data representation is not restricted to EOG recordings, and could be used in various fields of signal analysis.

  19. Progressively expanded neural network for automatic material identification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Paheding, Sidike

    The science of hyperspectral remote sensing focuses on the exploitation of the spectral signatures of various materials to enhance capabilities including object detection, recognition, and material characterization. Hyperspectral imagery (HSI) has been extensively used for object detection and identification applications since it provides plenty of spectral information to uniquely identify materials by their reflectance spectra. HSI-based object detection algorithms can be generally classified into stochastic and deterministic approaches. Deterministic approaches are comparatively simple to apply since they are usually based on direct spectral similarity such as spectral angles or spectral correlation. In contrast, stochastic algorithms require statistical modeling and estimation for the target class and non-target class. Over the decades, many single-class object detection methods have been proposed in the literature; however, deterministic multiclass object detection in HSI has not been explored. In this work, we propose a deterministic multiclass object detection scheme, named class-associative spectral fringe-adjusted joint transform correlation. The human brain is capable of simultaneously processing high volumes of multi-modal data received every second of the day. In contrast, a machine sees input data simply as random binary numbers. Although machines are computationally efficient, they are inferior when it comes to data abstraction and interpretation. Thus, mimicking the learning strength of the human brain has been a recent trend in artificial intelligence. In this work, we present a biologically inspired neural network, named progressively expanded neural network (PEN Net), based on a nonlinear transformation of input neurons to a feature space for better pattern differentiation. In PEN Net, discrete fixed excitations are disassembled and scattered in the feature space as a nonlinear line. Each disassembled element on the line corresponds to a pattern with similar features. Unlike a conventional neural network, where hidden neurons need to be iteratively adjusted to achieve better accuracy, the proposed PEN Net does not require hidden-neuron tuning, which yields better computational efficiency; it has also shown superior performance in HSI classification tasks compared to the state of the art. Spectral-spatial feature-based HSI classification frameworks have shown greater strength than spectral-only methods. In our final proposed technique, PEN Net is incorporated with multiscale spatial features (i.e., multiscale complete local binary pattern) to perform a spectral-spatial classification of HSI. Several experiments demonstrate the excellent performance of our proposed technique compared to more recently developed approaches.
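
    PEN Net's construction is specific to this dissertation, so as a generic stand-in the sketch below uses a fixed (untrained) nonlinear expansion of the input followed by a linear readout, which shares the "no hidden-neuron tuning" property claimed above without being the PEN Net algorithm; the data, projection size, and classifier are all assumptions.

```python
# Fixed nonlinear feature expansion plus a trained linear readout: only the
# readout weights are fit, so there is no iterative hidden-neuron tuning.
# This is a generic analogue, not the PEN Net construction itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier

X, y = make_classification(n_samples=500, n_features=50, n_informative=10,
                           n_classes=4, random_state=0)  # toy "HSI pixels"

rng = np.random.default_rng(9)
W = rng.normal(size=(50, 300))                 # fixed random projection
expanded = np.tanh(X @ W)                      # nonlinear feature space

clf = RidgeClassifier().fit(expanded, y)       # only the readout is fit
print(f"training accuracy: {clf.score(expanded, y):.2f}")
```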

  20. Palmprint and Face Multi-Modal Biometric Recognition Based on SDA-GSVD and Its Kernelization

    PubMed Central

    Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu

    2012-01-01

    When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and their different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in the calculation, namely using PCA preprocessing and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA on each single modality before performing SDA. In KSDA-GSVD, we directly perform Kernel SDA to fuse the multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometric recognition methods, experimental results show that our approaches outperform related multimodal recognition methods, and KSDA-GSVD achieves the best recognition performance. PMID:22778600

  1. Palmprint and face multi-modal biometric recognition based on SDA-GSVD and its kernelization.

    PubMed

    Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu

    2012-01-01

    When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and their different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in the calculation, namely using PCA preprocessing and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA on each single modality before performing SDA. In KSDA-GSVD, we directly perform Kernel SDA to fuse the multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometric recognition methods, experimental results show that our approaches outperform related multimodal recognition methods, and KSDA-GSVD achieves the best recognition performance.

  2. An Extended Normalization Model of Attention Accounts for Feature-Based Attentional Enhancement of Both Response and Coherence Gain

    PubMed Central

    Krishna, B. Suresh; Treue, Stefan

    2016-01-01

    Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively. PMID:27977679

  3. Contribution of intrinsic properties and synaptic inputs to motoneuron discharge patterns: a simulation study

    PubMed Central

    ElBasiouny, Sherif M.; Rymer, W. Zev; Heckman, C. J.

    2012-01-01

    Motoneuron discharge patterns reflect the interaction of synaptic inputs with intrinsic conductances. Recent work has focused on the contribution of conductances mediating persistent inward currents (PICs), which amplify and prolong the effects of synaptic inputs on motoneuron discharge. Certain features of human motor unit discharge are thought to reflect a relatively stereotyped activation of PICs by excitatory synaptic inputs; these features include rate saturation and de-recruitment at a lower level of net excitation than that required for recruitment. However, PIC activation is also influenced by the pattern and spatial distribution of inhibitory inputs that are activated concurrently with excitatory inputs. To estimate the potential contributions of PIC activation and synaptic input patterns to motor unit discharge patterns, we examined the responses of a set of cable motoneuron models to different patterns of excitatory and inhibitory inputs. The models were first tuned to approximate the current- and voltage-clamp responses of low- and medium-threshold spinal motoneurons studied in decerebrate cats and then driven with different patterns of excitatory and inhibitory inputs. The responses of the models to excitatory inputs reproduced a number of features of human motor unit discharge. However, the pattern of rate modulation was strongly influenced by the temporal and spatial pattern of concurrent inhibitory inputs. Thus, even though PIC activation is likely to exert a strong influence on firing rate modulation, PIC activation in combination with different patterns of excitatory and inhibitory synaptic inputs can produce a wide variety of motor unit discharge patterns. PMID:22031773

  4. A judicious multiple hypothesis tracker with interacting feature extraction

    NASA Astrophysics Data System (ADS)

    McAnanama, James G.; Kirubarajan, T.

    2009-05-01

    The multiple hypotheses tracker (MHT) is recognized as an optimal tracking method due to the enumeration of all possible measurement-to-track associations, which does not involve any approximation in its original formulation. However, its practical implementation is limited by the NP-hard nature of this enumeration. As a result, a number of maintenance techniques such as pruning and merging have been proposed to bound the computational complexity. It is possible to improve the performance of a tracker, MHT or not, using feature information (e.g., signal strength, size, type) in addition to kinematic data. However, in most tracking systems, the extraction of features from the raw sensor data is typically independent of the subsequent association and filtering stages. In this paper, a new approach, called the Judicious Multi Hypotheses Tracker (JMHT), whereby there is an interaction between feature extraction and the MHT, is presented. The measure of the quality of feature extraction is input into measurement-to-track association, while the prediction step feeds back the parameters to be used in the next round of feature extraction. The motivation for this forward and backward interaction between feature extraction and tracking is to improve the performance in both steps. This approach allows for a more rational partitioning of the feature space and removes unlikely features from the assignment problem. Simulation results demonstrate the benefits of the proposed approach.
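
    As a hedged illustration of how a feature-quality measure can enter the association step, the sketch below solves a single-scan, single-hypothesis assignment (a drastic simplification of the MHT's enumeration), weighting an assumed feature cost by an assumed extraction-quality score; all numbers are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # negative log-likelihood costs for 2 tracks x 3 measurements (values hypothetical)
    kinematic_cost = np.array([[1.2, 4.0, 6.1],
                               [3.8, 0.9, 5.0]])
    feature_cost = np.array([[0.4, 2.2, 3.0],
                             [2.5, 0.3, 2.8]])
    quality = 0.8  # assumed feature-extraction quality measure weighting the feature term

    cost = kinematic_cost + quality * feature_cost
    tracks, measurements = linear_sum_assignment(cost)  # optimal one-to-one association
    ```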

  5. Optimal Prediction in the Retina and Natural Motion Statistics

    NASA Astrophysics Data System (ADS)

    Salisbury, Jared M.; Palmer, Stephanie E.

    2016-03-01

    Almost all behaviors involve making predictions. Whether an organism is trying to catch prey, avoid predators, or simply move through a complex environment, the organism uses the data it collects through its senses to guide its actions by extracting from these data information about the future state of the world. A key aspect of the prediction problem is that not all features of the past sensory input have predictive power, and representing all features of the external sensory world is prohibitively costly both due to space and metabolic constraints. This leads to the hypothesis that neural systems are optimized for prediction. Here we describe theoretical and computational efforts to define and quantify the efficient representation of the predictive information by the brain. Another important feature of the prediction problem is that the physics of the world is diverse enough to contain a wide range of possible statistical ensembles, yet not all inputs are probable. Thus, the brain might not be a generalized predictive machine; it might have evolved to specifically solve the prediction problems most common in the natural environment. This paper summarizes recent results on predictive coding and optimal predictive information in the retina and suggests approaches for quantifying prediction in response to natural motion. Basic statistics of natural movies reveal that general patterns of spatiotemporal correlation are present across a wide range of scenes, though individual differences in motion type may be important for optimal processing of motion in a given ecological niche.

  6. Analysis of nystagmus response to a pseudorandom velocity input

    NASA Technical Reports Server (NTRS)

    Lessard, C. S.

    1986-01-01

    Space motion sickness was not reported during the first Apollo missions; however, from Apollo 8 through the current Shuttle and Skylab missions, approximately 50% of the crewmembers have experienced instances of space motion sickness. Space motion sickness, renamed space adaptation syndrome, occurs primarily during the initial period of a mission until habituation takes place. One of NASA's efforts to resolve the space adaptation syndrome is to model the individual's vestibular response, both for basic knowledge and as a possible predictor of an individual's susceptibility to the disorder. This report describes a method to analyze the vestibular system when subjected to a pseudorandom angular velocity input. A sum-of-sinusoids (pseudorandom) input lends itself to analysis by linear frequency methods. Resultant horizontal ocular movements were digitized, filtered, and transformed into the frequency domain. Programs were developed and evaluated to obtain (1) the auto spectra of the input stimulus and resultant ocular response, (2) the cross spectra, (3) the estimated vestibular-ocular system transfer function gain and phase, and (4) the coherence function between the stimulus and response functions.

  7. Feature generation using genetic programming with application to fault classification.

    PubMed

    Guo, Hong; Jack, Lindsay B; Nandi, Asoke K

    2005-02-01

    One of the major challenges in pattern recognition problems is the feature extraction process, which derives new features from existing features or directly from raw data in order to reduce the cost of computation during the classification process, while improving classifier efficiency. Most current feature extraction techniques transform the original pattern vector into a new vector with increased discrimination capability but lower dimensionality. This is conducted within a predefined feature space and thus has limited searching power. Genetic programming (GP) can generate new features from the original dataset without prior knowledge of the probabilistic distribution. In this paper, a GP-based approach is developed for feature extraction from raw vibration data recorded from a rotating machine with six different conditions. The created features are then used as the inputs to a neural classifier for the identification of six bearing conditions. Experimental results demonstrate the ability of GP to discover automatically the different bearing conditions using features expressed in the form of nonlinear functions. Furthermore, four sets of results--using GP-extracted features with artificial neural networks (ANN) and support vector machines (SVM), as well as traditional features with ANN and SVM--have been obtained. This GP-based approach is used for bearing fault classification for the first time and exhibits superior searching power over other techniques. Additionally, it significantly reduces the computation time compared with the genetic algorithm (GA) and therefore makes a more practical realization of the solution.
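
    The generate-and-select loop at the heart of such GP feature construction can be caricatured in a few lines. The sketch below is a random-search simplification (no crossover or mutation, with correlation to the label as a stand-in fitness), and the data are synthetic placeholders for vibration statistics.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    ops = [np.add, np.subtract, np.multiply, lambda a, b: a / (np.abs(b) + 1e-6)]

    def random_feature(X, depth=2):
        """Build one random nonlinear expression over the raw input columns."""
        if depth == 0:
            return X[:, rng.integers(X.shape[1])]
        op = ops[rng.integers(len(ops))]
        return op(random_feature(X, depth - 1), random_feature(X, depth - 1))

    def evolve_features(X, y, n_candidates=500, n_keep=5):
        """Score candidates by |correlation| with the label and keep the best few."""
        scored = []
        for _ in range(n_candidates):
            f = random_feature(X)
            if np.std(f) > 0:
                scored.append((abs(np.corrcoef(f, y)[0, 1]), f))
        scored.sort(key=lambda t: t[0], reverse=True)
        return np.column_stack([f for _, f in scored[:n_keep]])

    X = rng.normal(size=(200, 6))                        # stand-in for vibration statistics
    y = (X[:, 0] * X[:, 2] - X[:, 4] > 0).astype(float)  # synthetic machine-condition label
    new_features = evolve_features(X, y)                 # inputs for a downstream classifier
    ```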

  8. A Kernel-Based Low-Rank (KLR) Model for Low-Dimensional Manifold Recovery in Highly Accelerated Dynamic MRI.

    PubMed

    Nakarmi, Ukash; Wang, Yanhua; Lyu, Jingyuan; Liang, Dong; Ying, Leslie

    2017-11-01

    While many low-rank and sparsity-based approaches have been developed for accelerated dynamic magnetic resonance imaging (dMRI), they all use low-rankness or sparsity in the input space, overlooking the intrinsic nonlinear correlation in most dMRI data. In this paper, we propose a kernel-based framework to allow nonlinear manifold models in reconstruction from sub-Nyquist data. Within this framework, many existing algorithms can be extended to the kernel framework with nonlinear models. In particular, we have developed a novel algorithm with a kernel-based low-rank model generalizing the conventional low-rank formulation. The algorithm consists of manifold learning using a kernel, low-rank enforcement in feature space, and preimaging with data consistency. Extensive simulation and experimental results show that the proposed method surpasses conventional low-rank-modeled approaches for dMRI.

  9. Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation.

    PubMed

    Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S

    2017-10-01

    The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
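
    A minimal sketch of the first ensemble strategy, generating diversity from feature groups and combining members by majority vote, might look as follows. Synthetic columns stand in for the shape, colour, and texture descriptors, and k-nearest neighbours stands in for the optimum-path forest classifier, which has no scikit-learn implementation.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=600, n_features=30, n_informative=12,
                               random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # hypothetical feature groups standing in for shape / colour / texture descriptors
    groups = [slice(0, 10), slice(10, 20), slice(20, 30)]

    # one ensemble member per feature subset
    votes = np.vstack([KNeighborsClassifier().fit(Xtr[:, g], ytr).predict(Xte[:, g])
                       for g in groups])
    majority = (votes.mean(axis=0) >= 0.5).astype(int)  # majority vote over three members
    print("ensemble accuracy:", (majority == yte).mean())
    ```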

  10. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.

  11. An evaluation of open set recognition for FLIR images

    NASA Astrophysics Data System (ADS)

    Scherreik, Matthew; Rigling, Brian

    2015-05-01

    Typical supervised classification algorithms label inputs according to what was learned in a training phase. Thus, test inputs that were not seen in training are always given incorrect labels. Open set recognition algorithms address this issue by accounting for inputs that are not present in training and providing the classifier with an option to "reject" unknown samples. A number of such techniques have been developed in the literature, many of which are based on support vector machines (SVMs). One approach, the 1-vs-set machine, constructs a "slab" in feature space using the SVM hyperplane. Inputs falling on one side of the slab or within the slab belong to a training class, while inputs falling on the far side of the slab are rejected. We note that rejection of unknown inputs can be achieved by thresholding class posterior probabilities. Another recently developed approach, the Probabilistic Open Set SVM (POS-SVM), empirically determines good probability thresholds. We apply the 1-vs-set machine, POS-SVM, and closed set SVMs to FLIR images taken from the Comanche SIG dataset. Vehicles in the dataset are divided into three general classes: wheeled, armored personnel carrier (APC), and tank. For each class, a coarse pose estimate (front, rear, left, right) is taken. In a closed set sense, we analyze these algorithms for prediction of vehicle class and pose. To test open set performance, one or more vehicle classes are held out from training. By considering closed and open set performance separately, we may closely analyze both inter-class discrimination and threshold effectiveness.
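
    The posterior-thresholding idea noted above is easy to prototype. The sketch below is not the 1-vs-set or POS-SVM algorithms themselves: it trains a closed-set SVM, then rejects any input whose best class posterior falls under a fixed threshold, with a held-out digit class playing the role of an unknown target.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    known = y < 9                       # train on digits 0-8; digit 9 acts as "unknown"
    Xtr, Xte, ytr, yte = train_test_split(X[known], y[known], random_state=0)
    clf = SVC(probability=True).fit(Xtr, ytr)

    def predict_open_set(clf, X, threshold=0.7):
        """Label inputs, rejecting those whose best posterior is below threshold."""
        p = clf.predict_proba(X)
        labels = clf.classes_[p.argmax(axis=1)]
        labels[p.max(axis=1) < threshold] = -1  # -1 marks a rejected/unknown input
        return labels

    print("unknowns rejected:", (predict_open_set(clf, X[~known]) == -1).mean())
    ```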

  12. Robotics control using isolated word recognition of voice input

    NASA Technical Reports Server (NTRS)

    Weiner, J. M.

    1977-01-01

    A speech input/output system is presented that can be used to communicate with a task-oriented system. Human speech commands and synthesized voice output extend conventional information exchange capabilities between man and machine by utilizing audio input and output channels. The speech input facility is comprised of a hardware feature extractor and a microprocessor-implemented isolated word or phrase recognition system. The recognizer offers a medium-sized (100 commands), syntactically constrained vocabulary, and exhibits close to real-time performance. The major portion of the recognition processing required is accomplished through software, minimizing the complexity of the hardware feature extractor.

  13. T-ray relevant frequencies for osteosarcoma classification

    NASA Astrophysics Data System (ADS)

    Withayachumnankul, W.; Ferguson, B.; Rainsford, T.; Findlay, D.; Mickan, S. P.; Abbott, D.

    2006-01-01

    We investigate the classification of the T-ray response of normal human bone cells and human osteosarcoma cells, grown in culture. Given the magnitude and phase responses within a reliable spectral range as features for input vectors, a trained support vector machine can correctly classify the two cell types to some extent. Performance of the support vector machine is degraded by the curse of dimensionality, resulting from the comparatively large number of features in the input vectors. Feature subset selection methods are used to select only an optimal number of relevant features for inputs. As a result, an improvement in generalization performance is attainable, and the selected frequencies can be used to further describe the different mechanisms of the cells in responding to T-rays. We demonstrate a consistent classification accuracy of 89.6% while only one-fifth of the original features are retained in the data set.
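
    A hedged sketch of this pipeline, with synthetic spectra in place of the measured T-ray responses: a univariate filter retains roughly one-fifth of the features before the SVM, mirroring the reported reduction (SelectKBest is one of several possible subset-selection methods, not necessarily the one used in the study).

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # 200 synthetic features standing in for magnitude/phase responses per frequency bin
    X, y = make_classification(n_samples=120, n_features=200, n_informative=10,
                               random_state=0)

    full = make_pipeline(StandardScaler(), SVC())
    reduced = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=40), SVC())

    print("all 200 features:", cross_val_score(full, X, y, cv=5).mean())
    print("top 40 features :", cross_val_score(reduced, X, y, cv=5).mean())
    ```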

  14. Face recognition: a convolutional neural-network approach.

    PubMed

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  15. Capturing Requirements for Autonomous Spacecraft with Autonomy Requirements Engineering

    NASA Astrophysics Data System (ADS)

    Vassev, Emil; Hinchey, Mike

    2014-08-01

    The Autonomy Requirements Engineering (ARE) approach has been developed by Lero - the Irish Software Engineering Research Center - within the mandate of a joint project with ESA, the European Space Agency. The approach is intended to help engineers develop missions for unmanned exploration, often with limited or no human control. Such robotic space missions rely on the most recent advances in automation and robotic technologies, where autonomy and autonomic computing principles drive the design and implementation of unmanned spacecraft [1]. To tackle the integration and promotion of autonomy in software-intensive systems, ARE combines generic autonomy requirements (GAR) with goal-oriented requirements engineering (GORE). Using this approach, software engineers can determine what autonomic features to develop for a particular system (e.g., a space mission) as well as what artifacts that process might generate (e.g., goal models, requirements specifications, etc.). The inputs required by this approach are the mission goals and the domain-specific GAR reflecting the specifics of the mission class (e.g., interplanetary missions).

  16. Identifying quantum phase transitions with adversarial neural networks

    NASA Astrophysics Data System (ADS)

    Huembeli, Patrick; Dauphin, Alexandre; Wittek, Peter

    2018-04-01

    The identification of phases of matter is a challenging task, especially in quantum mechanics, where the complexity of the ground state appears to grow exponentially with the size of the system. Traditionally, physicists have to identify the relevant order parameters for the classification of the different phases. We here follow a radically different approach: we address this problem with a state-of-the-art deep learning technique, adversarial domain adaptation. We derive the phase diagram of the whole parameter space starting from a fixed and known subspace using unsupervised learning. This method has the advantage that the input of the algorithm can be directly the ground state without any ad hoc feature engineering. Furthermore, the dimension of the parameter space is unrestricted. More specifically, the input data set contains both labeled and unlabeled data instances. The first kind is a system that admits an accurate analytical or numerical solution, and one can recover its phase diagram. The second type is the physical system with an unknown phase diagram. Adversarial domain adaptation uses both types of data to create invariant feature extracting layers in a deep learning architecture. Once these layers are trained, we can attach an unsupervised learner to the network to find phase transitions. We show the success of this technique by applying it on several paradigmatic models: the Ising model with different temperatures, the Bose-Hubbard model, and the Su-Schrieffer-Heeger model with disorder. The method finds unknown transitions successfully and predicts transition points in close agreement with standard methods. This study opens the door to the classification of physical systems where the phase boundaries are complex such as the many-body localization problem or the Bose glass phase.

  17. A Theory of How Columns in the Neocortex Enable Learning the Structure of the World

    PubMed Central

    Hawkins, Jeff; Ahmad, Subutai; Cui, Yuwei

    2017-01-01

    Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections, suggesting interactions between columns. Similar patterns of connectivity exist in all regions, but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that, using Hebbian-like learning rules, small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed. PMID:29118696

  18. Slow feature analysis: unsupervised learning of invariances.

    PubMed

    Wiskott, Laurenz; Sejnowski, Terrence J

    2002-04-01

    Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
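
    The two-step recipe (nonlinear expansion, then principal components of the time derivative) is compact enough to state in code. A minimal NumPy sketch follows, using a quadratic expansion and a toy mixed signal; the test signal and sizes are assumptions for illustration.

    ```python
    import numpy as np

    def quadratic_expand(x):
        """Degree-1 and degree-2 monomials of the input signal."""
        T, d = x.shape
        feats = [x]
        for i in range(d):
            for j in range(i, d):
                feats.append((x[:, i] * x[:, j])[:, None])
        return np.hstack(feats)

    def sfa(x, n_out=2):
        z = quadratic_expand(x)
        z -= z.mean(axis=0)
        # whiten the expanded signal
        evals, evecs = np.linalg.eigh(z.T @ z / len(z))
        keep = evals > 1e-10
        zw = z @ (evecs[:, keep] / np.sqrt(evals[keep]))
        # PCA on the time derivative; slowest features = smallest eigenvalues
        dz = np.diff(zw, axis=0)
        devals, devecs = np.linalg.eigh(dz.T @ dz / len(dz))
        return zw @ devecs[:, :n_out]   # eigh sorts ascending, so slowest come first

    t = np.linspace(0, 4 * np.pi, 2000)
    slow, fast = np.sin(t), np.sin(23 * t)
    x = np.column_stack([slow + 0.5 * fast, fast - 0.3 * slow])  # mixed observables
    s = sfa(x)   # the first output should vary slowly, tracking the underlying sine
    ```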

  19. A global/local affinity graph for image segmentation.

    PubMed

    Xiaofang Wang; Yuxing Tang; Masnou, Simon; Liming Chen

    2015-04-01

    Construction of a reliable graph capturing perceptual grouping cues of an image is fundamental for graph-cut based image segmentation methods. In this paper, we propose a novel sparse global/local affinity graph over superpixels of an input image to capture both short- and long-range grouping cues, thereby enabling perceptual grouping laws, including proximity, similarity, and continuity, to enter into action through a suitable graph-cut algorithm. Moreover, we also evaluate three major visual features, namely, color, texture, and shape, for their effectiveness in perceptual segmentation and propose a simple graph fusion scheme to implement some recent findings from psychophysics, which suggest combining these visual features with different emphases for perceptual grouping. In particular, an input image is first oversegmented into superpixels at different scales. We postulate a gravitation law based on empirical observations and divide superpixels adaptively into small-, medium-, and large-sized sets. Global grouping is achieved using medium-sized superpixels through a sparse representation of superpixels' features by solving an ℓ0-minimization problem, thereby enabling continuity or propagation of local smoothness over long-range connections. Small- and large-sized superpixels are then used to achieve local smoothness through an adjacent graph in a given feature space, thus implementing perceptual laws, for example, similarity and proximity. Finally, a bipartite graph is also introduced to enable propagation of grouping cues between superpixels of different scales. Extensive experiments are carried out on the Berkeley segmentation database in comparison with several state-of-the-art graph constructions. The results show the effectiveness of the proposed approach, which outperforms state-of-the-art graphs using four different objective criteria, namely, the probabilistic rand index, the variation of information, the global consistency error, and the boundary displacement error.

  20. Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    PubMed Central

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically-inspired models and algorithms are considered as promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with the increase of the dimensions of the input feature vector (outer factor) as well as its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and of five classes of green tea derived from five different provinces of China, were used for experiments. In the former case, the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case, the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6∼8 channels of the model, with a principal component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3∼5 pattern classes, considering the trade-off between time consumption and classification rate. PMID:22736979
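
    Retaining principal components up to a cumulative-variance target is a one-liner in scikit-learn, shown here on synthetic low-rank data standing in for sensor-array responses (the 90% figure follows the abstract; everything else is an assumption).

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 4))                    # a few underlying odour factors
    X = latent @ rng.normal(size=(4, 16)) + 0.1 * rng.normal(size=(200, 16))

    pca = PCA(n_components=0.90)  # keep the fewest components reaching 90% variance
    X_reduced = pca.fit_transform(X)
    print(pca.n_components_, pca.explained_variance_ratio_.sum())
    ```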

  1. Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion

    PubMed Central

    Hamsici, Onur C.; Gotardo, Paulo F.U.; Martinez, Aleix M.

    2013-01-01

    Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function. PMID:23946937

  2. Resonator design and performance estimation for a space-based laser transmitter

    NASA Astrophysics Data System (ADS)

    Agrawal, Lalita; Bhardwaj, Atul; Pal, Suranjan; Kamalakar, J. A.

    2006-12-01

    Development of a laser transmitter for space applications is a highly challenging task. The laser must be rugged, reliable, lightweight, compact and energy efficient. Most of these features are inherently achieved by diode pumping of solid state lasers. Overall system reliability can further be improved by appropriate optical design of the laser resonator besides selection of suitable electro-optical and opto-mechanical components. This paper presents the design details and the theoretically estimated performance of a crossed-Porro-prism based, folded Z-shaped laser resonator. A symmetrically pumped Nd:YAG laser rod of 3 mm diameter and 60 mm length is placed in the gain arm with total input peak power of 1800 W from laser diode arrays. Electro-optical Q-switching is achieved through a combination of a polarizer, a fractional waveplate and a LiNbO3 Q-switch crystal (9 × 9 × 25 mm) placed in the feedback arm. Polarization-coupled output is obtained by optimizing the azimuth angle of the quarter wave plate placed in the gain arm. Theoretical estimation of laser output energy and pulse width has been carried out by varying input power levels and resonator length to analyse the performance tolerances. The designed system is capable of meeting the objective of generating laser pulses of 10 ns duration and 30 mJ energy @ 10 Hz.

  3. Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration

    NASA Technical Reports Server (NTRS)

    Groce, Alex; Joshi, Rajeev

    2008-01-01

    Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.

  4. STARS: A general-purpose finite element computer program for analysis of engineering structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1984-01-01

    STARS (Structural Analysis Routines) is primarily an interactive, graphics-oriented, finite-element computer program for analyzing the static, stability, free vibration, and dynamic responses of damped and undamped structures, including rotating systems. The element library consists of one-dimensional (1-D) line elements, two-dimensional (2-D) triangular and quadrilateral shell elements, and three-dimensional (3-D) tetrahedral and hexahedral solid elements. These elements enable the solution of structural problems that include truss, beam, space frame, plane, plate, shell, and solid structures, or any combination thereof. Zero, finite, and interdependent deflection boundary conditions can be implemented by the program. The associated dynamic response analysis capability provides for initial deformation and velocity inputs, whereas the transient excitation may be either forces or accelerations. An effective in-core or out-of-core solution strategy is automatically employed by the program, depending on the size of the problem. Data input may be at random within a data set, and the program offers certain automatic data-generation features. Input data are formatted as an optimal combination of free and fixed formats. Interactive graphics capabilities enable convenient display of nodal deformations, mode shapes, and element stresses.

  5. Optic nerve signals in a neuromorphic chip I: Outer and inner retina models.

    PubMed

    Zaghloul, Kareem A; Boahen, Kwabena

    2004-04-01

    We present a novel model for the mammalian retina and analyze its behavior. Our outer retina model performs bandpass spatiotemporal filtering. It is composed of two reciprocally connected resistive grids that model the cone and horizontal cell syncytia. We show analytically that its sensitivity is proportional to the space-constant ratio of the two grids, while its half-max response is set by the local average intensity. Thus, this outer retina model realizes luminance adaptation. Our inner retina model performs high-pass temporal filtering. It features slow negative feedback whose strength is modulated by a locally computed measure of temporal contrast, modeling two kinds of amacrine cells, one narrow-field, the other wide-field. We show analytically that, when the input is spectrally pure, the corner frequency tracks the input frequency. But when the input is broadband, the corner frequency is proportional to contrast. Thus, this inner retina model realizes temporal frequency adaptation as well as contrast gain control. We present CMOS circuit designs for our retina model in this paper as well. Experimental measurements from the fabricated chip, and validation of our analytical results, are presented in the companion paper [Zaghloul and Boahen (2004)].
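
    The inner-retina behaviour described, a high-pass filter whose corner frequency rises with local temporal contrast, can be mimicked by a toy discrete-time loop. This is an illustrative caricature, not the paper's circuit model; all constants are assumptions.

    ```python
    import numpy as np

    def adaptive_highpass(x, dt=1e-3, f0=2.0, k=40.0):
        """First-order high-pass whose corner frequency grows with local contrast."""
        y = np.zeros_like(x)
        s = x[0]          # slow negative-feedback state (low-passed input)
        contrast = 0.0
        for i, xi in enumerate(x):
            contrast += 0.05 * (abs(xi - s) - contrast)  # crude local contrast estimate
            fc = f0 + k * contrast                       # corner frequency tracks contrast
            s += 2 * np.pi * fc * dt * (xi - s)          # feedback strength is modulated
            y[i] = xi - s                                # output = input minus feedback
        return y

    t = np.arange(0, 1, 1e-3)
    y = adaptive_highpass(np.sin(2 * np.pi * 5 * t) * (1 + (t > 0.5)))  # contrast step
    ```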

  6. Face recognition by applying wavelet subband representation and kernel associative memory.

    PubMed

    Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam

    2004-01-01

    In this paper, we propose an efficient face recognition scheme which has two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low-resolution "thumb-nail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. As there are usually very limited samples, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. Specifically, we first applied kernel transforms to each possible pair of training face samples and then mapped the high-dimensional feature space back to the input space. Our scheme using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages have been proven. By associative memory, all the prototypical faces of one particular person are used to reconstruct themselves, and the reconstruction error for a probe face image is used to decide if the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets: the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided and our proposed scheme offers better recognition accuracy on all of the face datasets.

  7. Reconstruction of nonlinear wave propagation

    DOEpatents

    Fleischer, Jason W; Barsi, Christopher; Wan, Wenjie

    2013-04-23

    Disclosed are systems and methods for characterizing a nonlinear propagation environment by numerically propagating a measured output waveform resulting from a known input waveform. The numerical propagation reconstructs the input waveform, and in the process, the nonlinear environment is characterized. In certain embodiments, knowledge of the characterized nonlinear environment facilitates determination of an unknown input based on a measured output. Similarly, knowledge of the characterized nonlinear environment also facilitates formation of a desired output based on a configurable input. In both situations, the input thus characterized and the output thus obtained include features that would normally be lost in linear propagations. Such features can include evanescent waves and peripheral waves, such that an image thus obtained is an inherently wide-angle, far-field form of microscopy.

  8. Local wavelet transform: a cost-efficient custom processor for space image compression

    NASA Astrophysics Data System (ADS)

    Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier

    2002-11-01

    Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that yields the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.

  9. Pilot study of a novel tool for input-free automated identification of transition zone prostate tumors using T2- and diffusion-weighted signal and textural features.

    PubMed

    Stember, Joseph N; Deng, Fang-Ming; Taneja, Samir S; Rosenkrantz, Andrew B

    2014-08-01

    To present results of a pilot study to develop software that identifies regions suspicious for prostate transition zone (TZ) tumor, free of user input. Eight patients with TZ tumors were used to develop the model by training a Naïve Bayes classifier to detect tumors based on selection of the most accurate predictors among various signal and textural features on T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. Features tested as inputs were: average signal, signal standard deviation, energy, contrast, correlation, homogeneity and entropy (all defined on T2WI); and average ADC. In training cases, the software tiled the TZ with 4 × 4-voxel "supervoxels," 80% of which were used to train the classifier. A forward selection scheme was then used on the remaining 20% of training-set supervoxels to identify the most important inputs. Each of 100 iterations selected T2WI energy and average ADC, which were therefore deemed the optimal model inputs. The resulting two-feature model was applied blindly to a separate set of ten test patients, half with TZ tumors, again without operator input of suspicious foci. The software correctly predicted the presence or absence of TZ tumor in all test patients. Furthermore, the locations of predicted tumors corresponded spatially with the locations of biopsies that had confirmed their presence. Preliminary findings suggest that this tool has the potential to accurately predict TZ tumor presence and location without operator input. © 2013 Wiley Periodicals, Inc.
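
    With only two selected inputs (T2WI energy and mean ADC), the classifier itself is small. A hedged sketch with entirely synthetic supervoxel values follows; the distributions below are invented for illustration, not taken from the study.

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    # synthetic supervoxels: columns are T2WI energy and mean ADC (values invented)
    tumor = np.column_stack([rng.normal(5.0, 1.0, 200), rng.normal(0.9, 0.15, 200)])
    benign = np.column_stack([rng.normal(3.5, 1.0, 200), rng.normal(1.4, 0.15, 200)])
    X = np.vstack([tumor, benign])
    y = np.array([1] * 200 + [0] * 200)

    clf = GaussianNB().fit(X, y)
    suspicious = clf.predict_proba(X)[:, 1] > 0.5  # flag supervoxels suspicious for tumor
    ```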

  10. Predict subcellular locations of singleplex and multiplex proteins by semi-supervised learning and dimension-reducing general mode of Chou's PseAAC.

    PubMed

    Pacharawongsakda, Eakasit; Theeramunkong, Thanaruk

    2013-12-01

    Predicting protein subcellular location is one of the major challenges in the Bioinformatics area, since such knowledge helps us understand protein functions and enables us to select the targeted proteins during the drug discovery process. While many computational techniques have been proposed to improve predictive performance for protein subcellular location, they have several shortcomings. In this work, we propose a method to solve three main issues in such techniques: (i) manipulation of multiplex proteins, which may exist in or move between multiple cellular compartments; (ii) handling of high dimensionality in input and output spaces; and (iii) the requirement of sufficient labeled data for model training. Towards these issues, this work presents a new computational method for predicting proteins which have either single or multiple locations. The proposed technique, namely iFLAST-CORE, incorporates dimensionality reduction in the feature and label spaces with the co-training paradigm for semi-supervised multi-label classification. For this purpose, the Singular Value Decomposition (SVD) is applied to transform the high-dimensional feature space and label space into lower-dimensional spaces. After that, due to the limitation of labeled data, co-training regression makes use of unlabeled data by predicting the target values in the lower-dimensional spaces of unlabeled data. In the last step, the components of the SVD are used to project labels in the lower-dimensional space back to those in the original space, and an adaptive threshold is used to map a numeric value to a binary value for label determination. A set of experiments on viral proteins and gram-negative bacterial proteins shows that our proposed method improves classification performance in terms of various evaluation metrics such as Aiming (or Precision), Coverage (or Recall) and macro F-measure, compared to the traditional method that uses only labeled data.
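
    The dimension-reduction step can be sketched as follows: SVD compresses both the feature and label matrices, a regressor predicts in the compressed label space, and predictions are projected back and thresholded. This is a minimal stand-in: ridge regression replaces the co-training step, a fixed 0.5 threshold replaces the adaptive one, and all data are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 400))                        # high-dimensional PseAAC-like features
    Y = (rng.random(size=(300, 12)) < 0.15).astype(float)  # multi-label location matrix

    svd_x = TruncatedSVD(n_components=50, random_state=0).fit(X)
    svd_y = TruncatedSVD(n_components=5, random_state=0).fit(Y)

    # regress reduced labels on reduced features
    reg = Ridge().fit(svd_x.transform(X), svd_y.transform(Y))

    # project predictions back to the original label space and binarize
    Y_hat = svd_y.inverse_transform(reg.predict(svd_x.transform(X)))
    locations = (Y_hat >= 0.5).astype(int)
    ```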

  11. Research on oral test modeling based on multi-feature fusion

    NASA Astrophysics Data System (ADS)

    Shi, Yuliang; Tao, Yiyue; Lei, Jun

    2018-04-01

    In this paper, the spectrogram of the speech signal is taken as the input for feature extraction. The strengths of the pulse-coupled neural network (PCNN) in image segmentation and related processing are used to process the spectrogram and extract features, exploring a new method that combines speech signal processing with image processing. Alongside the spectrogram features, MFCC-based spectral features are extracted and fused with them to further improve the accuracy of spoken-language assessment. Since the fused input features are complex but discriminative, a Support Vector Machine (SVM) is used to construct the classifier, and the extracted test voice features are then compared with the standard voice features to detect how standard the spoken input is. Experiments show that the method of extracting features from spectrograms using a PCNN is feasible, and that the fusion of image features and spectral features can improve detection accuracy.

  12. “Kerrr” black hole: The lord of the string

    NASA Astrophysics Data System (ADS)

    Smailagic, Anais; Spallucci, Euro

    2010-04-01

    Kerrr in the title is not a typo. The third “r” stands for regular, in the sense of a pathology-free rotating black hole. We exhibit a long-sought, exact, Kerr-like solution of the Einstein equations with novel features: (i) no curvature ring singularity; (ii) no “anti-gravity” universe with causality-violating time-like closed world-lines; (iii) no “super-luminal” matter disk. The ring singularity is replaced by a classical, circular, rotating string with Planck tension representing the inner engine driving the rotation of all the surrounding matter. The resulting geometry is regular and smoothly interpolates among an inner Minkowski space, a borderline de Sitter region and an outer Kerr universe. The key ingredient to cure all unphysical features of the ordinary Kerr black hole is the choice of a “non-commutative geometry inspired” matter source as the input for the Einstein equations, in analogy with spherically symmetric black holes described in earlier works.

  13. Achieving reutilization of scheduling software through abstraction and generalization

    NASA Technical Reports Server (NTRS)

    Wilkinson, George J.; Monteleone, Richard A.; Weinstein, Stuart M.; Mohler, Michael G.; Zoch, David R.; Tong, G. Michael

    1995-01-01

    Reutilization of software is a difficult goal to achieve, particularly in complex environments that require advanced software systems. The Request-Oriented Scheduling Engine (ROSE) was developed to create a reusable scheduling system for the diverse scheduling needs of the National Aeronautics and Space Administration (NASA). ROSE is a data-driven scheduler that accepts inputs such as user activities, available resources, timing constraints, and user-defined events, and then produces a conflict-free schedule. To support reutilization, ROSE is designed to be flexible, extensible, and portable. With these design features, applying ROSE to a new scheduling application does not require changing the core scheduling engine, even if the new application requires significantly larger or smaller data sets, customized scheduling algorithms, or software portability. This paper includes a ROSE scheduling system description emphasizing its general-purpose features, reutilization techniques, and tasks for which ROSE reuse provided a low-risk solution with significant cost savings and reduced software development time.

  14. Dynamic gesture recognition using neural networks: a fundament for advanced interaction construction

    NASA Astrophysics Data System (ADS)

    Boehm, Klaus; Broll, Wolfgang; Sokolewicz, Michael A.

    1994-04-01

    Interaction in virtual reality environments is still a challenging task. Static hand posture recognition is currently the most common and widely used method for interaction using glove input devices. In order to improve the naturalness of interaction, and thereby decrease the user-interface learning time, there is a need to be able to recognize dynamic gestures. In this paper we describe our approach to overcoming the difficulties of dynamic gesture recognition (DGR) using neural networks. Backpropagation neural networks have already proven themselves to be appropriate and efficient for posture recognition. However, the extensive amount of data involved in DGR requires a different approach. Because of features such as topology preservation and automatic learning, Kohonen Feature Maps are particularly suitable for the reduction of the high-dimensional data space that results from a dynamic gesture, and are thus implemented for this task.
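
    The data-reduction role played by the Kohonen map reduces to a simple update rule: find the best-matching unit, then pull it and its grid neighbours toward the sample. A self-contained NumPy sketch follows; the grid size, schedules, and glove-feature dimensionality are assumptions.

    ```python
    import numpy as np

    def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
        """Minimal Kohonen map: projects high-dimensional samples onto a 2-D grid."""
        rng = np.random.default_rng(0)
        h, w = grid
        weights = rng.normal(size=(h, w, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
        step, n_steps = 0, epochs * len(data)
        for _ in range(epochs):
            for x in rng.permutation(data):
                t = step / n_steps
                lr, sigma = lr0 * (1 - t), sigma0 * (1 - t) + 1e-2  # decaying schedules
                # best-matching unit for this sample
                bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
                # neighbourhood-weighted update preserves the grid topology
                d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
                weights += lr * np.exp(-d2 / (2 * sigma**2))[..., None] * (x - weights)
                step += 1
        return weights

    gestures = np.random.default_rng(1).normal(size=(500, 16))  # stand-in glove features
    som = train_som(gestures)
    ```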

  15. SOFIP: A Short Orbital Flux Integration Program

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.; Hebert, J. J.; Butler, E. L.; Barth, J. L.

    1979-01-01

    A computer code was developed to evaluate the space radiation environment encountered by geocentric satellites. The Short Orbital Flux Integration Program (SOFIP) is a compact routine of modular composition, designed mostly with structured programming techniques in order to provide core and time economy and ease of use. The program in its simplest form produces, for a given input trajectory, a composite integral orbital spectrum of either protons or electrons. Additional features are available separately or in combination with the inclusion of the corresponding (optional) modules. The code is described in detail, and the function and usage of the various modules are explained. A program listing and sample outputs are attached.

  16. Off-line data reduction

    NASA Astrophysics Data System (ADS)

    Gutowski, Marek W.

    1992-12-01

    Presented is a novel, heuristic algorithm, based on fuzzy set theory, allowing for significant off-line data reduction. Given equidistant data, the algorithm discards some points while retaining others with their original values. The fraction of original data points retained is typically 1/6 of the initial number. The reduced data set preserves all the essential features of the input curve. It is possible to reconstruct the original information to a high degree of precision by means of natural cubic splines, rational cubic splines or even linear interpolation. The main fields of application should be non-linear data fitting (substantial savings in CPU time) and graphics (storage space savings).
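
    The retain-and-reconstruct idea can be illustrated with a crisp curvature criterion in place of the paper's fuzzy one (the selection rule below is a stand-in, not the published algorithm); reconstruction uses natural cubic splines, one of the interpolants the abstract mentions.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def reduce_points(x, y, keep_frac=1 / 6):
        """Keep the points where the curve bends most, plus the endpoints."""
        curvature = np.abs(np.gradient(np.gradient(y, x), x))
        n_keep = max(int(len(y) * keep_frac), 4)
        idx = np.argsort(curvature)[-n_keep:]
        idx = np.unique(np.concatenate([[0], idx, [len(y) - 1]]))
        return x[idx], y[idx]

    x = np.linspace(0, 10, 600)
    y = np.sin(x) + 0.2 * np.sin(7 * x)
    xr, yr = reduce_points(x, y)                        # roughly 1/6 of the points survive
    y_rec = CubicSpline(xr, yr, bc_type="natural")(x)   # natural cubic spline reconstruction
    print("max reconstruction error:", np.abs(y_rec - y).max())
    ```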

  17. An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan

    2017-10-01

    It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for business. Currently, the main methods to avoid server outages are monitoring and forecasting. A thermal camera can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then texture features, Hu-moment features, and a modified entropy feature are extracted from the segmented regions. These characteristics are applied to analyze and classify thermal faults, and thus support efficient energy-saving thermal management decisions such as job migration. Because of the large feature space, principal component analysis is employed to reduce the feature dimensions and guarantee high processing speed without losing the fault feature information. Finally, different feature vectors are taken as input for SVM training, and thermal fault diagnosis is performed with the optimized SVM classifier. This method supports suggestions for optimizing data center management: it can improve air conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
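
    A compact stand-in for the classification stage, chaining PCA dimension reduction into an SVM as the text describes; the synthetic features below replace the texture, Hu-moment, and entropy descriptors, and all hyperparameters are assumptions.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # stand-in for texture + Hu-moment + entropy features of segmented regions
    X, y = make_classification(n_samples=500, n_features=60, n_informative=15,
                               n_classes=3, n_clusters_per_class=1, random_state=0)

    pipe = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC(kernel="rbf"))
    print("cv accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
    ```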

  18. Pathway-Specific Striatal Substrates for Habitual Behavior.

    PubMed

    O'Hare, Justin K; Ade, Kristen K; Sukharnikova, Tatyana; Van Hooser, Stephen D; Palmeri, Mark L; Yin, Henry H; Calakos, Nicole

    2016-02-03

    The dorsolateral striatum (DLS) is implicated in habit formation. However, the DLS circuit mechanisms underlying habit remain unclear. A key role for DLS is to transform sensorimotor cortical input into firing of output neurons that project to the mutually antagonistic direct and indirect basal ganglia pathways. Here we examine whether habit alters this input-output function. By imaging cortically evoked firing in large populations of pathway-defined striatal projection neurons (SPNs), we identify features that strongly correlate with habitual behavior on a subject-by-subject basis. Habitual behavior correlated with strengthened DLS output to both pathways as well as a tendency for action-promoting direct pathway SPNs to fire before indirect pathway SPNs. In contrast, habit suppression correlated solely with a weakened direct pathway output. Surprisingly, all effects were broadly distributed in space. Together, these findings indicate that the striatum imposes broad, pathway-specific modulations of incoming activity to render learned motor behaviors habitual. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Color image segmentation with support vector machines: applications to road signs detection.

    PubMed

    Cyganek, Bogusław

    2008-08-01

    In this paper we propose an efficient color segmentation method which is based on the Support Vector Machine classifier operating in a one-class mode. The method has been developed especially for a road signs recognition system, although it can be used in other applications. The main advantage of the proposed method comes from the fact that the segmentation of characteristic colors is performed not in the original but in a higher-dimensional feature space. By this, better data encapsulation with a linear hypersphere can usually be achieved. Moreover, the classifier does not try to capture the whole distribution of the input data, which is often difficult to achieve. Instead, characteristic data samples, called support vectors, are selected which allow construction of the tightest hypersphere that encloses the majority of the input data. Classification of a test sample then simply consists in a measurement of its distance to the centre of the found hypersphere. The experimental results show high accuracy and speed of the proposed method.
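
    One-class SVM colour segmentation of this kind is straightforward to prototype: train on pixels of the characteristic colour only, then test every pixel's membership. A sketch with invented colour statistics follows (the RGB distribution and kernel parameters are assumptions, not the paper's values).

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    # hypothetical training pixels (RGB) sampled from a characteristic sign colour
    sign_red = rng.normal(loc=[200, 40, 40], scale=12, size=(500, 3))

    clf = OneClassSVM(kernel="rbf", gamma=1e-3, nu=0.05).fit(sign_red)

    # classify every pixel of an image: +1 falls inside the learned boundary
    image = rng.integers(0, 256, size=(64, 64, 3)).astype(float)
    mask = clf.predict(image.reshape(-1, 3)).reshape(64, 64) == 1
    ```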

  1. Optimal space communications techniques. [all digital phase locked loop for FM demodulation

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1973-01-01

    The design, development, and analysis of a digital phase-locked loop (DPLL) for FM demodulation and threshold extension are reported. One of the features of the developed DPLL is its synchronous, real-time operation. The sampling frequency is constant, and all the required arithmetic and logic operations are performed within one sampling period, generating an output sequence which is converted to analog form and filtered. An equation relating the sampling frequency to the carrier frequency must be satisfied to guarantee proper DPLL operation. The synchronous operation enables time-shared operation of one DPLL to demodulate several FM signals simultaneously. In order to obtain information about DPLL performance at low input signal-to-noise ratios, a model of an input noise spike was introduced, and the DPLL equation was solved using a digital computer. The spike model was successful in finding a second-order DPLL which yielded a 5 dB threshold extension beyond that of a first-order DPLL.

  2. Haptic over visual information in the distribution of visual attention after tool-use in near and far space.

    PubMed

    Park, George D; Reed, Catherine L

    2015-10-01

    Despite attentional prioritization for grasping space near the hands, tool-use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool-use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50, go/no-go, target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at target locations, before and after functional tool practice, for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs), (2) hidden-tool: tool-end visually obscured (haptic input only), and (3) short-tool: stick missing the tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both open- and hidden-tool groups showed a tool-end attentional bias (faster RTs toward the tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. The absence of corresponding effects in the short-tool group suggested that the hidden-tool results were specific to haptic input. In conclusion, (1) allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when visually obscured, suggesting haptic input provides sufficient information for directing attention along the tool.

  3. Description of Selected Algorithms and Implementation Details of a Concept-Demonstration Aircraft VOrtex Spacing System (AVOSS)

    NASA Technical Reports Server (NTRS)

    Hinton, David A.

    2001-01-01

    A ground-based system has been developed to demonstrate the feasibility of automating the process of collecting relevant weather data, predicting wake vortex behavior from a database of aircraft, prescribing safe wake vortex spacing criteria, estimating system benefit, and comparing predicted and observed wake vortex behavior. This report describes many of the system algorithms, features, limitations, and lessons learned, as well as suggested system improvements. The system has demonstrated concept feasibility and the potential for airport benefit. Significant opportunities exist, however, for improved system robustness and optimization. A condensed version of the development lab book is provided, along with samples of key input and output file types. This report is intended to document the technical development process and system architecture, and to augment archived internal documents that provide detailed descriptions of software and file formats.

  4. Use of an engineering data management system in the analysis of Space Shuttle Orbiter tiles

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Vallas, M.

    1981-01-01

    This paper demonstrates the use of an engineering data management system to facilitate the extensive stress analyses of the Space Shuttle Orbiter thermal protection system. Descriptions are given of the approach and methods used (1) to gather, organize, and store the data, (2) to query data interactively, (3) to generate graphic displays of the data, and (4) to access, transform, and prepare the data for input to a stress analysis program. The relational information management system was found to be well suited to the tile analysis problem because information related to many separate tiles could be accessed individually from a data base having a natural organization from an engineering viewpoint. The flexible user features of the system facilitated changes in data content and organization which occurred during the development and refinement of the tile analysis procedure. Additionally, the query language supported retrieval of data to satisfy a variety of user-specified conditions.

  5. Dynamic Organization of Hierarchical Memories

    PubMed Central

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2016-01-01

    In the brain, external objects are categorized in a hierarchical way. Although it is widely accepted that objects are represented as static attractors in neural state space, this view does not take into account the interaction between intrinsic neural dynamics and external input, which is essential to understanding how a neural system responds to inputs. Indeed, structured spontaneous neural activity is known to exist in the absence of external inputs, and its relationship to evoked activity is actively discussed. How categorical representation is embedded in spontaneous and evoked activities therefore remains to be uncovered. To address this question, we studied the bifurcation process with increasing input after hierarchically clustered associative memories are learned. We found a "dynamic categorization": without input, neural activity wanders globally over a state space that includes all memories. With increasing input strength, the diffuse representation of a higher category undergoes transitions to focused representations specific to each object. The hierarchy of memories is embedded in the transition probability from one memory to another during the spontaneous dynamics. With increased input strength, neural activity wanders over a narrower state space including a smaller set of memories, showing a more specific category or memory corresponding to the applied input. Moreover, such coarse-to-fine transitions are also observed temporally during the transient process under constant input, which agrees with experimental findings in the temporal cortex. These results suggest that the hierarchy emerging through interaction with an external input underlies the hierarchy seen during the transient process, as well as in the spontaneous activity. PMID:27618549

  6. Space market model space industry input-output model

    NASA Technical Reports Server (NTRS)

    Hodgin, Robert F.; Marchesini, Roberto

    1987-01-01

    The goal of the Space Market Model (SMM) is to develop an information resource for the space industry. The SMM is intended to contain information appropriate for decision making in the space industry. The objectives of the SMM are to: (1) assemble information related to the development of the space business; (2) construct an adequate description of the emerging space market; (3) disseminate the information on the space market to forecasters and planners in government agencies and private corporations; and (4) provide timely analyses and forecasts of critical elements of the space market. An input-output model of market activity is proposed which is capable of transforming raw data into useful information for decision makers and policy makers dealing with the space sector.
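
    A minimal sketch of the Leontief input-output calculation that models of this kind build on; the two-sector technical-coefficient matrix and demand vector are invented for illustration and are not the SMM's data.

    ```python
    import numpy as np

    # Technical coefficients A[i, j]: input from sector i needed per unit of
    # sector j's output (illustrative two-sector space economy).
    A = np.array([[0.10, 0.25],   # launch services
                  [0.30, 0.05]])  # satellite manufacturing
    d = np.array([50.0, 120.0])   # final demand for each sector's output

    # Leontief solution: total output x satisfies x = A @ x + d.
    x = np.linalg.solve(np.eye(2) - A, d)
    print(x)  # gross output each sector must produce to satisfy demand
    ```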

  7. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-06-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, or trackball. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent-inferencing methodology.

  9. SpaceCube Version 1.5

    NASA Technical Reports Server (NTRS)

    Geist, Alessandro; Lin, Michael; Flatley, Tom; Petrick, David

    2013-01-01

    SpaceCube 1.5 is a high-performance and low-power system in a compact form factor. It is a hybrid processing system consisting of CPU (central processing unit), FPGA (field-programmable gate array), and DSP (digital signal processor) processing elements. The primary processing engine is the Virtex-5 FX100T FPGA, which has two embedded processors. The SpaceCube 1.5 System was a bridge to the SpaceCube 2.0 and SpaceCube 2.0 Mini processing systems. The SpaceCube 1.5 system was the primary avionics in the successful SMART (Small Rocket/Spacecraft Technology) Sounding Rocket mission that was launched in the summer of 2011. For SMART and similar missions, an avionics processor is required that is reconfigurable, has high processing capability, has multi-gigabit interfaces, is low power, and comes in a rugged/compact form factor. The original SpaceCube 1.0 met a number of the criteria, but did not possess the multi-gigabit interfaces that were required and is a higher-cost system. The SpaceCube 1.5 was designed with those mission requirements in mind. The SpaceCube 1.5 features one Xilinx Virtex-5 FX100T FPGA and has excellent size, weight, and power characteristics [4×4×3 in. (approx. = 10×10×8 cm), 3 lb (approx. = 1.4 kg), and 5 to 15 W depending on the application]. The estimated computing power of the two PowerPC 440s in the Virtex-5 FPGA is 1100 DMIPS each. The SpaceCube 1.5 includes two Gigabit Ethernet (1 Gbps) interfaces as well as two SATA-I/II interfaces (1.5 to 3.0 Gbps) for recording to data drives. The SpaceCube 1.5 also features DDR2 SDRAM (double data rate synchronous dynamic random access memory); 4-Gbit Flash for storing application code for the CPU, FPGA, and DSP processing elements; and a Xilinx Platform Flash XL to store FPGA configuration files or application code. The system also incorporates a 12-bit analog-to-digital converter with the ability to read 32 discrete analog sensor inputs. The SpaceCube 1.5 design also has a built-in accelerometer. In addition, the system has 12 receive and transmit RS-422 interfaces for legacy support. The SpaceCube 1.5 processor card represents the first NASA Goddard design in a compact form factor featuring the Xilinx Virtex-5. The SpaceCube 1.5 incorporates backward compatibility with the SpaceCube 1.0 form factor and stackable architecture. It also makes use of low-cost commercial parts, but is designed for operation in harsh environments.

  10. State-space model with deep learning for functional dynamics estimation in resting-state fMRI.

    PubMed

    Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang

    2016-04-01

    Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. Copyright © 2016 Elsevier Inc. All rights reserved.
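
    A minimal sketch of the generative-classification idea, assuming the hmmlearn package and substituting PCA for the paper's Deep Auto-Encoder embedding; subject counts, dimensions, and state counts are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from hmmlearn import hmm

    rng = np.random.default_rng(0)
    # Illustrative rs-fMRI-like data: per subject, T time points x R regions.
    mci = [rng.standard_normal((120, 90)) + 0.5 for _ in range(10)]
    hc = [rng.standard_normal((120, 90)) for _ in range(10)]

    # Embedding step (stand-in for the Deep Auto-Encoder): project regional
    # features into a low-dimensional functional space.
    pca = PCA(n_components=5).fit(np.vstack(mci + hc))

    def embed(subjects):
        return [pca.transform(s) for s in subjects]

    # One HMM per class models the dynamics of the embedded features.
    def fit_hmm(seqs):
        model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        model.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
        return model

    hmm_mci, hmm_hc = fit_hmm(embed(mci)), fit_hmm(embed(hc))

    # Classify a test subject by which class model gives higher log-likelihood.
    test = pca.transform(rng.standard_normal((120, 90)) + 0.5)
    label = "MCI" if hmm_mci.score(test) > hmm_hc.score(test) else "HC"
    ```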

  11. State-space model with deep learning for functional dynamics estimation in resting-state fMRI

    PubMed Central

    Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang

    2017-01-01

    Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. PMID:26774612

  12. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    NASA Astrophysics Data System (ADS)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings heavily depend on manual feature extraction and feature selection. To address this, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, the DRNN is constructed by stacking recurrent hidden layers to automatically extract features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that it is more effective than traditional intelligent fault diagnosis methods.
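
    A minimal PyTorch sketch of the three ingredients named above (spectrum inputs, stacked recurrent layers, and an adaptive learning rate, here approximated by Adam); layer sizes and the four fault classes are invented.

    ```python
    import torch
    import torch.nn as nn

    class StackedRNN(nn.Module):
        """Stacked recurrent layers over frequency-spectrum sequences."""
        def __init__(self, spectrum_bins=64, hidden=128, layers=3, classes=4):
            super().__init__()
            self.rnn = nn.RNN(spectrum_bins, hidden, num_layers=layers,
                              batch_first=True)
            self.head = nn.Linear(hidden, classes)

        def forward(self, x):              # x: (batch, time, spectrum_bins)
            out, _ = self.rnn(x)
            return self.head(out[:, -1])   # classify from the last time step

    model = StackedRNN()
    # Adam adapts per-parameter step sizes (a stand-in for the paper's scheme).
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Toy spectrum sequences from FFT magnitudes of raw vibration windows.
    spectra = torch.fft.rfft(torch.randn(8, 10, 126), dim=-1).abs()
    labels = torch.randint(0, 4, (8,))
    loss = loss_fn(model(spectra), labels)
    loss.backward(); opt.step()
    ```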

  13. Fast and Exact Fiber Surfaces for Tetrahedral Meshes.

    PubMed

    Klacansky, Pavol; Tierny, Julien; Carr, Hamish; Zhao Geng

    2017-07-01

    Isosurfaces are fundamental geometrical objects for the analysis and visualization of volumetric scalar fields. Recent work has generalized them to bivariate volumetric fields with fiber surfaces, the pre-image of polygons in range space. However, the existing algorithm for their computation is approximate, and is limited to closed polygons. Moreover, its runtime performance does not allow instantaneous updates of the fiber surfaces upon user edits of the polygons. Overall, these limitations prevent a reliable and interactive exploration of the space of fiber surfaces. This paper introduces the first algorithm for the exact computation of fiber surfaces in tetrahedral meshes. It assumes no restriction on the topology of the input polygon, handles degenerate cases, and better captures sharp features induced by polygon bends. The algorithm also allows visualization of individual fibers on the output surface, better illustrating their relationship with data features in range space. To enable truly interactive exploration sessions, we further improve the runtime performance of this algorithm. In particular, we show that it is trivially parallelizable and that it scales nearly linearly with the number of cores. Further, we study acceleration data structures in both the geometrical domain and range space, and we show how to generalize interval trees used in isosurface extraction to fiber surface extraction. Experiments demonstrate the superiority of our algorithm over previous work, both in terms of accuracy and running time, with up to two orders of magnitude speedups. This improvement enables interactive edits of range polygons with instantaneous updates of the fiber surface for exploration purposes. A VTK-based reference implementation is provided as additional material to reproduce our results.

  14. Spatial and temporal features of superordinate semantic processing studied with fMRI and EEG.

    PubMed

    Costanzo, Michelle E; McArdle, Joseph J; Swett, Bruce; Nechaev, Vladimir; Kemeny, Stefan; Xu, Jiang; Braun, Allen R

    2013-01-01

    The relationships between the anatomical representation of semantic knowledge in the human brain and the timing of neurophysiological mechanisms involved in manipulating such information remain unclear. This is the case for superordinate semantic categorization: the extraction of general features shared by broad classes of exemplars (e.g., living vs. non-living semantic categories). We proposed that, because of the abstract nature of this information, input from diverse input modalities (visual or auditory, lexical or non-lexical) should converge and be processed in the same regions of the brain, at similar time scales during superordinate categorization, specifically in a network of heteromodal regions, and late in the course of the categorization process. In order to test this hypothesis, we utilized electroencephalography and event related potentials (EEG/ERP) with functional magnetic resonance imaging (fMRI) to characterize subjects' responses as they made superordinate categorical decisions (living vs. non-living) about objects presented as visual pictures or auditory words. Our results reveal that, consistent with our hypothesis, during the course of superordinate categorization, information provided by these diverse inputs appears to converge in both time and space: fMRI showed that heteromodal areas of the parietal and temporal cortices are active during categorization of both classes of stimuli. The ERP results suggest that superordinate categorization is reflected as a late positive component (LPC) with a parietal distribution and long latencies for both stimulus types. Within the areas and times in which modality independent responses were identified, some differences between living and non-living categories were observed, with a more widespread spatial extent and longer latency responses for categorization of non-living items.

  15. Spatial and temporal features of superordinate semantic processing studied with fMRI and EEG

    PubMed Central

    Costanzo, Michelle E.; McArdle, Joseph J.; Swett, Bruce; Nechaev, Vladimir; Kemeny, Stefan; Xu, Jiang; Braun, Allen R.

    2013-01-01

    The relationships between the anatomical representation of semantic knowledge in the human brain and the timing of neurophysiological mechanisms involved in manipulating such information remain unclear. This is the case for superordinate semantic categorization—the extraction of general features shared by broad classes of exemplars (e.g., living vs. non-living semantic categories). We proposed that, because of the abstract nature of this information, input from diverse input modalities (visual or auditory, lexical or non-lexical) should converge and be processed in the same regions of the brain, at similar time scales during superordinate categorization—specifically in a network of heteromodal regions, and late in the course of the categorization process. In order to test this hypothesis, we utilized electroencephalography and event related potentials (EEG/ERP) with functional magnetic resonance imaging (fMRI) to characterize subjects' responses as they made superordinate categorical decisions (living vs. non-living) about objects presented as visual pictures or auditory words. Our results reveal that, consistent with our hypothesis, during the course of superordinate categorization, information provided by these diverse inputs appears to converge in both time and space: fMRI showed that heteromodal areas of the parietal and temporal cortices are active during categorization of both classes of stimuli. The ERP results suggest that superordinate categorization is reflected as a late positive component (LPC) with a parietal distribution and long latencies for both stimulus types. Within the areas and times in which modality independent responses were identified, some differences between living and non-living categories were observed, with a more widespread spatial extent and longer latency responses for categorization of non-living items. PMID:23847490

  16. Preliminary design for a reverse Brayton cycle cryogenic cooler

    NASA Technical Reports Server (NTRS)

    Swift, Walter L.

    1993-01-01

    A long life, single stage, reverse Brayton cycle cryogenic cooler is being developed for applications in space. The system is designed to provide 5 W of cooling at a temperature of 65 Kelvin with a total cycle input power of less than 200 watts. Key features of the approach include high speed, miniature turbomachines; an all metal, high performance, compact heat exchanger; and a simple, high frequency, three phase motor drive. In Phase 1, a preliminary design of the system was performed. Analyses and trade studies were used to establish the thermodynamic performance of the system and the performance specifications for individual components. Key mechanical features for components were defined and assembly layouts for the components and the system were prepared. Critical materials and processes were identified. Component and brassboard system level tests were conducted at cryogenic temperatures. The system met the cooling requirement of 5 W at 65 K. The system was also operated over a range of cooling loads from 0.5 W at 37 K to 10 W at 65 K. Input power to the system was higher than target values. The heat exchanger and inverter met or exceeded their respective performance targets. The compressor/motor assembly was marginally below its performance target. The turboexpander met its aerodynamic efficiency target, but overall performance was below target because of excessive heat leak. The heat leak will be reduced to an acceptable value in the engineering model. The results of Phase 1 indicate that the 200 watt input power requirement can be met with state-of-the-art technology in a system which has very flexible integration requirements and negligible vibration levels.

  17. Preliminary design for a reverse Brayton cycle cryogenic cooler

    NASA Astrophysics Data System (ADS)

    Swift, Walter L.

    1993-12-01

    A long life, single stage, reverse Brayton cycle cryogenic cooler is being developed for applications in space. The system is designed to provide 5 W of cooling at a temperature of 65 Kelvin with a total cycle input power of less than 200 watts. Key features of the approach include high speed, miniature turbomachines; an all metal, high performance, compact heat exchanger; and a simple, high frequency, three phase motor drive. In Phase 1, a preliminary design of the system was performed. Analyses and trade studies were used to establish the thermodynamic performance of the system and the performance specifications for individual components. Key mechanical features for components were defined and assembly layouts for the components and the system were prepared. Critical materials and processes were identified. Component and brassboard system level tests were conducted at cryogenic temperatures. The system met the cooling requirement of 5 W at 65 K. The system was also operated over a range of cooling loads from 0.5 W at 37 K to 10 W at 65 K. Input power to the system was higher than target values. The heat exchanger and inverter met or exceeded their respective performance targets. The compressor/motor assembly was marginally below its performance target. The turboexpander met its aerodynamic efficiency target, but overall performance was below target because of excessive heat leak. The heat leak will be reduced to an acceptable value in the engineering model. The results of Phase 1 indicate that the 200 watt input power requirement can be met with state-of-the-art technology in a system which has very flexible integration requirements and negligible vibration levels.

  18. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations, or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors, and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area, as well as low power consumption for multi-scale object detection.
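
    A minimal numpy sketch of block-based L1 normalization as used in HOG-style pipelines; the cell grid, block size, and bin count are invented, and the paper's configurable hardware circuit is not modeled.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Illustrative HOG cell histograms: (rows, cols, orientation_bins).
    cells = rng.random((8, 8, 9))

    def l1_block_normalize(cells, block=2, eps=1e-6):
        """Group cells into overlapping block x block blocks and L1-normalize,
        making the descriptor robust to gain and contrast changes."""
        rows, cols, _ = cells.shape
        blocks = []
        for r in range(rows - block + 1):
            for c in range(cols - block + 1):
                v = cells[r:r + block, c:c + block].ravel()
                blocks.append(v / (np.abs(v).sum() + eps))  # L1 norm
        return np.array(blocks)

    features = l1_block_normalize(cells)
    ```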

  19. Overview and Results of ISS Space Medicine Operations Team (SMOT) Activities

    NASA Technical Reports Server (NTRS)

    Johnson, H. Magee; Sargsyan, Ashot E.; Armstrong, Cheryl; McDonald, P. Vernon; Duncan, James M.; Bogomolov, V. V.

    2007-01-01

    The Space Medicine Operations Team (SMOT) was created to integrate International Space Station (ISS) Medical Operations, promote awareness of all Partners, provide emergency response capability and management, provide operational input from all Partners for medically relevant concerns, and provide a source of medical input to ISS Mission Management. The viewgraph presentation provides an overview of educational objectives, purpose, operations, products, statistics, and its use in off-nominal situations.

  20. Microwave Frequency Multiplier

    NASA Astrophysics Data System (ADS)

    Velazco, J. E.

    2017-02-01

    High-power microwave radiation is used in the Deep Space Network (DSN) and Goldstone Solar System Radar (GSSR) for uplink communications with spacecraft and for monitoring asteroids and space debris, respectively. Intense X-band (7.1 to 8.6 GHz) microwave signals are produced for these applications via klystron and traveling-wave microwave vacuum tubes. In order to achieve higher data rate communications with spacecraft, the DSN is planning to gradually furnish several of its deep space stations with uplink systems that employ Ka-band (34-GHz) radiation. Also, the next generation of planetary radar, such as Ka-Band Objects Observation and Monitoring (KaBOOM), is considering frequencies in the Ka-band range (34 to 36 GHz) in order to achieve higher target resolution. Current commercial Ka-band sources are limited to power levels that range from hundreds of watts up to a kilowatt and, at the high-power end, tend to suffer from poor reliability. In either case, there is a clear need for stable Ka-band sources that can produce kilowatts of power with high reliability. In this article, we present a new concept for high-power, high-frequency generation (including Ka-band) that we refer to as the microwave frequency multiplier (MFM). The MFM is a two-cavity vacuum tube concept where low-frequency (2 to 8 GHz) power is fed into the input cavity to modulate and accelerate an electron beam. In the second cavity, the modulated electron beam excites and amplifies high-power microwaves at a frequency that is an integer multiple of the input cavity's frequency. Frequency multiplication factors in the 4 to 10 range are being considered for the current application, although higher multiplication factors are feasible. This novel beam-wave interaction allows the MFM to produce high-power, high-frequency radiation with high efficiency. A key feature of the MFM is that it uses significantly larger cavities than its klystron counterparts, thus greatly reducing power density and arcing concerns. We present a theoretical analysis for the beam-wave interactions in the MFM's input and output cavities. We show the conditions required for successful frequency multiplication inside the output cavity. Computer simulations using the plasma physics code MAGIC show that 100 kW of Ka-band (32-GHz) output power can be produced using an 80-kW X-band (8-GHz) signal at the MFM's input. The associated MFM efficiency, from beam power to Ka-band power, is 83 percent. Thus, the overall klystron-MFM efficiency is 42 percent (0.50 × 0.83 ≈ 0.42), assuming that a klystron with an efficiency of 50 percent delivers the input signal.

  1. Neural network analysis for geological interpretation of tomographic images beneath the Japan Islands

    NASA Astrophysics Data System (ADS)

    Kuwatani, T.; Toriumi, M.

    2009-12-01

    Recent advances in geophysical observation methodologies, such as seismic tomography, the seismic reflection method, and the geomagnetic method, provide a large amount and a wide variety of data on the physical properties of the crust and upper mantle (e.g., Matsubara et al. (2008)). However, it is still difficult to specify a rock type and its physical conditions, mainly because (1) the available data usually carry considerable error and uncertainty, and (2) the physical properties of rocks are greatly affected by fluids and microstructures. Objective interpretation and quantitative evaluation of lithology and fluid-related structure require statistical analyses of integrated geophysical and geological data. Self-Organizing Maps (SOMs) are unsupervised artificial neural networks that map the input space into clusters in a topological form whose organization is related to trends in the input data (Kohonen 2001). SOMs are powerful neural network techniques for classifying and interpreting multiattribute data sets. Results of SOM classifications can be represented as 2D images, called feature maps, which illustrate the complexity and interrelationships among input data sets. Recently, some works have used SOMs to interpret multidimensional, non-linear, and highly noisy geophysical data for purposes of geological prediction (e.g., Klose 2006; Tselentis et al. 2007; Bauer et al. 2008). This paper describes the application of SOM to the 3D velocity structure beneath the whole of the Japan islands (e.g., Matsubara et al. 2008). From the obtained feature maps, we can specify the lithology and qualitatively evaluate the effect of fluid-related structures. Moreover, re-projection of the feature maps onto the 3D velocity structures resulted in detailed images of the structures within the plates. The Pacific plate and the Philippine Sea plate subducting beneath the Eurasian plate can be imaged more clearly than in the original P- and S-wave velocity structures. To obtain more precise predictions of lithology and structure, we will use additional input data sets, such as tomographic images of random velocity fluctuation (Takahashi et al. 2009) and b-value mapping data. Additionally, different kinds of data sets, including experimental and petrological results (e.g., Christensen 1991; Hacker et al. 2003), can be incorporated into our analyses.
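
    A minimal sketch of SOM clustering of velocity-type features, assuming the third-party minisom package; the feature columns (Vp, Vs, Vp/Vs anomalies) and map size are invented.

    ```python
    import numpy as np
    from minisom import MiniSom

    rng = np.random.default_rng(0)
    # Illustrative tomographic features per grid node: Vp, Vs, Vp/Vs anomalies.
    X = rng.standard_normal((2000, 3))

    # 10x10 feature map; nearby map units self-organize to respond to similar
    # velocity signatures, yielding interpretable clusters.
    som = MiniSom(10, 10, 3, sigma=1.5, learning_rate=0.5, random_seed=0)
    som.train_random(X, num_iteration=5000)

    # The winning unit of each grid node acts as a class label that can be
    # re-projected into the 3D velocity volume.
    labels = np.array([som.winner(x) for x in X])
    ```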

  2. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
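
    A minimal sketch of one common family of model-based statistical image features used in blind quality assessment, mean-subtracted contrast-normalized (MSCN) coefficients; treating these as the paper's exact features is an assumption.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mscn(image, sigma=7 / 6, c=1e-3):
        """Mean-subtracted contrast-normalized coefficients of a gray image."""
        mu = gaussian_filter(image, sigma)                  # local mean
        var = gaussian_filter(image * image, sigma) - mu * mu
        sd = np.sqrt(np.clip(var, 0, None))                 # local contrast
        return (image - mu) / (sd + c)

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    coeffs = mscn(img)
    # Summary statistics of the MSCN distribution serve as quality features
    # that a learned regressor can map to a quality score.
    features = [coeffs.mean(), coeffs.var(), np.abs(coeffs).mean()]
    ```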

  3. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs, and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching stages are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
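
    A minimal sketch of casting mutually consistent matches as a maximum-clique search, assuming networkx's max_weight_clique; the candidate matches and the pairwise rigid-distance consistency test are invented.

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)
    # Candidate matches: point i in frame t paired with point i in frame t+1.
    pts_a = rng.random((6, 3))
    pts_b = pts_a + 0.01 * rng.standard_normal((6, 3))  # near-rigid motion

    # Two matches are mutually consistent if they preserve pairwise distance
    # (a relative constraint); consistent pairs become graph edges.
    G = nx.Graph()
    G.add_nodes_from(range(6))
    for i in range(6):
        for j in range(i + 1, 6):
            da = np.linalg.norm(pts_a[i] - pts_a[j])
            db = np.linalg.norm(pts_b[i] - pts_b[j])
            if abs(da - db) < 0.05:
                G.add_edge(i, j)

    # Largest set of mutually consistent matches = maximum clique
    # (weight=None counts each node as 1).
    clique, weight = nx.max_weight_clique(G, weight=None)
    ```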

  4. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space searching capabilities of genetic algorithms they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
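
    A minimal sketch of genetic-algorithm input selection, with a scikit-learn ridge regressor standing in for the paper's neural network as the fitness oracle; population size, rates, and the toy data are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 20))      # 20 candidate input parameters
    y = X[:, 3] - 2 * X[:, 7] + 0.1 * rng.standard_normal(200)

    def fitness(mask):
        """Cross-validated score of a model using only the masked inputs."""
        if not mask.any():
            return -np.inf
        return cross_val_score(Ridge(), X[:, mask], y, cv=3).mean()

    # Evolve bitmasks: truncation selection, uniform crossover, bit-flip mutation.
    pop = rng.random((30, 20)) < 0.5
    for _ in range(40):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]
        kids = []
        for _ in range(len(pop)):
            a, b = parents[rng.integers(10, size=2)]
            child = np.where(rng.random(20) < 0.5, a, b)  # crossover
            child ^= rng.random(20) < 0.02                # mutation
            kids.append(child)
        pop = np.array(kids)

    best = pop[np.argmax([fitness(m) for m in pop])]
    print(np.flatnonzero(best))  # indices of the selected inputs
    ```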

  5. NASGRO 3.0: A Software for Analyzing Aging Aircraft

    NASA Technical Reports Server (NTRS)

    Mettu, S. R.; Shivakumar, V.; Beek, J. M.; Yeh, F.; Williams, L. C.; Forman, R. G.; McMahon, J. J.; Newman, J. C., Jr.

    1999-01-01

    Structural integrity analysis of aging aircraft is a critical necessity in view of the increasing numbers of such aircraft in general aviation, the airlines and the military. Efforts are in progress by NASA, the FAA and the DoD to focus attention on aging aircraft safety. The present paper describes the NASGRO software which is well-suited for effectively analyzing the behavior of defects that may be found in aging aircraft. The newly revised Version 3.0 has many features specifically implemented to suit the needs of the aircraft community. The fatigue crack growth computer program NASA/FLAGRO 2.0 was originally developed to analyze space hardware such as the Space Shuttle, the International Space Station and the associated payloads. Due to popular demand, the software was enhanced to suit the needs of the aircraft industry. Major improvements in Version 3.0 are the incorporation of the ability to read aircraft spectra of unlimited size, generation of common aircraft fatigue load blocks, and the incorporation of crack-growth models which include load-interaction effects such as retardation due to overloads and acceleration due to underloads. Five new crack-growth models, viz., generalized Willenborg, modified generalized Willenborg, constant closure model, Walker-Chang model and the deKoning-Newman strip-yield model, have been implemented. To facilitate easier input of geometry, material properties and load spectra, a Windows-style graphical user interface has been developed. Features to quickly change the input and rerun the problem as well as examine the output are incorporated. NASGRO has been organized into three modules, the crack-growth module being the primary one. The other two modules are the boundary element module and the material properties module. The boundary-element module provides the ability to model and analyze complex two-dimensional problems to obtain stresses and stress-intensity factors. The material properties module allows users to store and curve-fit fatigue-crack growth data. On-line help and documentation are provided for each of the modules. In addition to the popular PC windows version, a unix-based X-windows version of NASGRO is also available. A portable C++ class library called WxWindows was used to facilitate cross-platform availability of the software.
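
    A minimal sketch of cycle-by-cycle fatigue-crack-growth integration using the simple Paris law as a stand-in for NASGRO's far more elaborate load-interaction models (the Willenborg variants, strip-yield, and others named above); the material constants and geometry factor are invented.

    ```python
    import numpy as np

    # Paris law: da/dN = C * (dK)^m, with dK = Y * dS * sqrt(pi * a).
    C, m, Y = 1e-11, 3.0, 1.12     # illustrative material/geometry constants
    d_stress = 100.0               # MPa, constant-amplitude stress range
    a, a_crit = 0.001, 0.02        # initial and critical crack length (m)

    cycles, block = 0, 1000        # integrate in blocks of cycles
    while a < a_crit:
        dK = Y * d_stress * np.sqrt(np.pi * a)   # MPa * sqrt(m)
        a += block * C * dK**m                   # growth over the block
        cycles += block
    print(cycles, a)               # life estimate and final crack length
    ```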

  6. Artificial Neural Network Based Fault Diagnostics of Rolling Element Bearings Using Time-Domain Features

    NASA Astrophysics Data System (ADS)

    Samanta, B.; Al-Balushi, K. R.

    2003-03-01

    A procedure is presented for fault diagnosis of rolling element bearings through artificial neural network (ANN). The characteristic features of time-domain vibration signals of the rotating machinery with normal and defective bearings have been used as inputs to the ANN consisting of input, hidden and output layers. The features are obtained from direct processing of the signal segments using very simple preprocessing. The input layer consists of five nodes, one each for root mean square, variance, skewness, kurtosis and normalised sixth central moment of the time-domain vibration signals. The inputs are normalised in the range of 0.0 and 1.0 except for the skewness which is normalised between -1.0 and 1.0. The output layer consists of two binary nodes indicating the status of the machine—normal or defective bearings. Two hidden layers with different number of neurons have been used. The ANN is trained using backpropagation algorithm with a subset of the experimental data for known machine conditions. The ANN is tested using the remaining set of data. The effects of some preprocessing techniques like high-pass, band-pass filtration, envelope detection (demodulation) and wavelet transform of the vibration signals, prior to feature extraction, are also studied. The results show the effectiveness of the ANN in diagnosis of the machine condition. The proposed procedure requires only a few features extracted from the measured vibration data either directly or with simple preprocessing. The reduced number of inputs leads to faster training requiring far less iterations making the procedure suitable for on-line condition monitoring and diagnostics of machines.
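
    A minimal sketch of the five time-domain inputs listed above, computed with numpy/scipy on a synthetic segment; the exact scaling ranges would come from the training data, and the segment itself is invented.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4096)               # one vibration signal segment

    sigma = x.std()
    features = np.array([
        np.sqrt(np.mean(x**2)),                 # root mean square
        x.var(),                                # variance
        skew(x),                                # skewness
        kurtosis(x, fisher=False),              # kurtosis
        np.mean((x - x.mean())**6) / sigma**6,  # normalized 6th central moment
    ])
    # Scale features to [0, 1] (skewness to [-1, 1]) before feeding the ANN,
    # as described in the abstract.
    ```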

  7. Phonological Feature Re-Assembly and the Importance of Phonetic Cues

    ERIC Educational Resources Information Center

    Archibald, John

    2009-01-01

    It is argued that new phonological features can be acquired in second languages, but that both feature acquisition and feature re-assembly are affected by the robustness of phonetic cues in the input.

  8. Manchester visual query language

    NASA Astrophysics Data System (ADS)

    Oakley, John P.; Davis, Darryl N.; Shann, Richard T.

    1993-04-01

    We report a database language for visual retrieval which allows queries on image feature information which has been computed and stored along with images. The language is novel in that it provides facilities for dealing with feature data which has actually been obtained from image analysis. Each line in the Manchester Visual Query Language (MVQL) takes a set of objects as input and produces another, usually smaller, set as output. The MVQL constructs are mainly based on proven operators from the field of digital image analysis. An example is the Hough-group operator which takes as input a specification for the objects to be grouped, a specification for the relevant Hough space, and a definition of the voting rule. The output is a ranked list of high scoring bins. The query could be directed towards one particular image or an entire image database, in the latter case the bins in the output list would in general be associated with different images. We have implemented MVQL in two layers. The command interpreter is a Lisp program which maps each MVQL line to a sequence of commands which are used to control a specialized database engine. The latter is a hybrid graph/relational system which provides low-level support for inheritance and schema evolution. In the paper we outline the language and provide examples of useful queries. We also describe our solution to the engineering problems associated with the implementation of MVQL.
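
    A minimal sketch of the voting pattern behind a Hough-group style operator: features vote into a parameter-space accumulator, and high-scoring bins are returned as a ranked list. The line parameterization and bin counts here are invented and are not MVQL's actual semantics.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Edge points, mostly along the line y = 2x + 1, plus clutter.
    xs = rng.uniform(0, 10, 100)
    pts = np.c_[xs, 2 * xs + 1 + 0.05 * rng.standard_normal(100)]
    pts[::10] = rng.uniform(0, 20, (10, 2))

    # Vote in (theta, rho) line space: rho = x cos(theta) + y sin(theta).
    thetas = np.linspace(0, np.pi, 180, endpoint=False)
    rhos = pts[:, 0, None] * np.cos(thetas) + pts[:, 1, None] * np.sin(thetas)
    rho_bins = np.linspace(rhos.min(), rhos.max(), 200)
    acc, _, _ = np.histogram2d(
        np.repeat(np.arange(180), len(pts)),
        rhos.T.ravel(), bins=[180, rho_bins])

    # Ranked list of high-scoring bins, as a Hough-group operator returns.
    flat = np.argsort(acc.ravel())[::-1][:5]
    print(np.column_stack(np.unravel_index(flat, acc.shape)))
    ```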

  9. The water quality of the LOCAR Pang and Lambourn catchments

    NASA Astrophysics Data System (ADS)

    Neal, C.; Jarvie, H. P.; Wade, A. J.; Neal, M.; Wyatt, R.; Wickham, H.; Hill, L.; Hewitt, N.

    The water quality of the Pang and Lambourn, tributaries of the River Thames, in south-eastern England, is described in relation to spatial and temporal dimensions. The river waters are supplied mainly from Chalk-fed aquifer sources and are, therefore, of a calcium-bicarbonate type. The major, minor and trace element chemistry of the rivers is controlled by a combination of atmospheric and pollutant inputs from agriculture and sewage sources superimposed on a background water quality signal linked to geological sources. Water quality does not vary greatly over time or space. However, in detail, there are differences in water quality between the Pang and Lambourn and between sites along the Pang and the Lambourn. These differences reflect hydrological processes, water flow pathways and water quality input fluxes. The Pang’s pattern of water quality change is more variable than that of the Lambourn. The flow hydrograph also shows both a cyclical and "uniform pattern" characteristic of aquifer drainage with, superimposed, a series of "flashier" spiked responses characteristic of karstic systems. The Lambourn, in contrast, shows simpler features without the "flashier" responses. The results are discussed in relation to the newly developed UK community programme LOCAR dealing with Lowland Catchment Research. A descriptive and box model structure is provided to describe the key features of water quality variations in relation to soil, unsaturated and groundwater flows and storage both away from and close to the river.

  10. almaBTE : A solver of the space-time dependent Boltzmann transport equation for phonons in structured materials

    NASA Astrophysics Data System (ADS)

    Carrete, Jesús; Vermeersch, Bjorn; Katre, Ankita; van Roekeghem, Ambroise; Wang, Tao; Madsen, Georg K. H.; Mingo, Natalio

    2017-11-01

    almaBTE is a software package that solves the space- and time-dependent Boltzmann transport equation for phonons, using only ab-initio calculated quantities as inputs. The program can predictively tackle phonon transport in bulk crystals and alloys, thin films, superlattices, and multiscale structures with size features in the nm-μm range. Among many other quantities, the program can output thermal conductances and effective thermal conductivities, space-resolved average temperature profiles, and heat-current distributions resolved in frequency and space. Its first-principles character makes almaBTE especially well suited to investigate novel materials and structures. This article gives an overview of the program structure and presents illustrative examples for some of its uses. Program summary: almaBTE; program files DOI: http://dx.doi.org/10.17632/8tfzwgtp73.1; licensing provisions: Apache License, version 2.0; programming language: C++; external routines/libraries: BOOST, MPI, Eigen, HDF5, spglib. Nature of problem: calculation of temperature profiles, thermal flux distributions, and effective thermal conductivities in structured systems where heat is carried by phonons. Solution method: solution of the linearized phonon Boltzmann transport equation; variance-reduced Monte Carlo.

  11. Evidence for Working Memory Storage Operations in Perceptual Cortex

    PubMed Central

    Sreenivasan, Kartik K.; Gratton, Caterina; Vytlacil, Jason; D’Esposito, Mark

    2014-01-01

    Isolating the short-term storage component of working memory (WM) from the myriad of associated executive processes has been an enduring challenge. Recent efforts have identified patterns of activity in visual regions that contain information about items being held in WM. However, it remains unclear (i) whether these representations withstand intervening sensory input and (ii) how communication between multimodal association cortex and unimodal perceptual regions supporting WM representations is involved in WM storage. We present evidence that the features of a face held in WM are stored within face processing regions, that these representations persist across subsequent sensory input, and that information about the match between sensory input and memory representation is relayed forward from perceptual to prefrontal regions. Participants were presented with a series of probe faces and indicated whether each probe matched a Target face held in WM. We parametrically varied the feature similarity between probe and Target faces. Activity within face processing regions scaled linearly with the degree of feature similarity between the probe face and the features of the Target face, suggesting that the features of the Target face were stored in these regions. Furthermore, directed connectivity measures revealed that the direction of information flow that was optimal for performance was from sensory regions that stored the features of the Target face to dorsal prefrontal regions, supporting the notion that sensory input is compared to representations stored within perceptual regions and relayed forward. Together, these findings indicate that WM storage operations are carried out within perceptual cortex. PMID:24436009

  12. Software for Engineering Simulations of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Shireman, Kirk; McSwain, Gene; McCormick, Bernell; Fardelos, Panayiotis

    2005-01-01

    Spacecraft Engineering Simulation II (SES II) is a C-language computer program for simulating diverse aspects of operation of a spacecraft characterized by either three or six degrees of freedom. A functional model in SES can include a trajectory flight plan; a submodel of a flight computer running navigational and flight-control software; and submodels of the environment, the dynamics of the spacecraft, and sensor inputs and outputs. SES II features a modular, object-oriented programming style. SES II supports event-based simulations, which, in turn, create an easily adaptable simulation environment in which many different types of trajectories can be simulated by use of the same software. The simulation output consists largely of flight data. SES II can be used to perform optimization and Monte Carlo dispersion simulations. It can also be used to perform simulations for multiple spacecraft. In addition to its generic simulation capabilities, SES offers special capabilities for space-shuttle simulations: for this purpose, it incorporates submodels of the space-shuttle dynamics and a C-language version of the guidance, navigation, and control components of the space-shuttle flight software.

  13. Space station integrated wall design and penetration damage control

    NASA Technical Reports Server (NTRS)

    Coronado, A. R.; Gibbins, M. N.; Wright, M. A.; Stern, P. H.

    1987-01-01

    The analysis code BUMPER executes a numerical solution to the problem of calculating the probability of no penetration (PNP) of a spacecraft subject to man-made orbital debris or meteoroid impact. The code was developed on a DEC VAX 11/780 computer running the Virtual Memory System (VMS) operating system, and is written in FORTRAN 77 with no VAX extensions. To help illustrate the steps involved, a single sample analysis is performed. The example used is the space station reference configuration. The finite element model (FEM) of this configuration is relatively complex but demonstrates many BUMPER features. The computer tools and guidelines are described for constructing a FEM for the space station under consideration. The methods used to analyze the sensitivity of PNP to variations in design are described. Ways are suggested for developing contour plots of the sensitivity study data. Additional BUMPER analysis examples are provided, including FEMs, command inputs, and data outputs. The mathematical theory used as the basis for the code is described, and the data flow within the analysis is illustrated.
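
    A minimal sketch of the standard Poisson form of a probability-of-no-penetration estimate, PNP = exp(-N) with N the expected number of penetrating impacts summed over surface elements; the areas and fluxes are invented, and BUMPER's actual ballistic-limit and geometry treatment is far more detailed.

    ```python
    import numpy as np

    # Per finite element: exposed area (m^2) and penetrating-particle flux
    # (impacts per m^2 per year exceeding the wall's ballistic limit).
    areas = np.array([2.0, 1.5, 3.2, 0.8])
    fluxes = np.array([1e-6, 4e-7, 2e-6, 9e-7])
    years = 10.0

    expected_penetrations = np.sum(fluxes * areas) * years
    pnp = np.exp(-expected_penetrations)  # Poisson probability of zero events
    print(pnp)
    ```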

  14. Classification of crystal structure using a convolutional neural network

    PubMed Central

    Park, Woon Bae; Chung, Jiyong; Sohn, Keemin; Pyo, Myoungho

    2017-01-01

    A deep machine-learning technique based on a convolutional neural network (CNN) is introduced. It has been used for the classification of powder X-ray diffraction (XRD) patterns in terms of crystal system, extinction group and space group. About 150 000 powder XRD patterns were collected and used as input for the CNN with no handcrafted engineering involved, and thereby an appropriate CNN architecture was obtained that allowed determination of the crystal system, extinction group and space group. In sharp contrast with the traditional use of powder XRD pattern analysis, the CNN never treats powder XRD patterns as a deconvoluted and discrete peak position or as intensity data, but instead the XRD patterns are regarded as nothing but a pattern similar to a picture. The CNN interprets features that humans cannot recognize in a powder XRD pattern. As a result, accuracy levels of 81.14, 83.83 and 94.99% were achieved for the space-group, extinction-group and crystal-system classifications, respectively. The well trained CNN was then used for symmetry identification of unknown novel inorganic compounds. PMID:28875035
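
    A minimal PyTorch sketch of a CNN that consumes a powder XRD pattern as a raw 1D intensity curve rather than as extracted peak positions; the layer sizes and the 230-class space-group output are illustrative assumptions, not the paper's architecture.

    ```python
    import torch
    import torch.nn as nn

    class XRDNet(nn.Module):
        """Convolutional classifier over raw 1D diffraction intensity curves."""
        def __init__(self, n_classes=230):          # e.g., space groups
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(8),
            )
            self.classify = nn.Linear(32 * 8, n_classes)

        def forward(self, x):                        # x: (batch, 1, n_2theta)
            return self.classify(self.features(x).flatten(1))

    patterns = torch.rand(4, 1, 4096)                # toy intensity vs. 2-theta
    logits = XRDNet()(patterns)                      # (4, 230) class scores
    ```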

  15. Simple Deterministically Constructed Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Rodan, Ali; Tiňo, Peter

    A large number of models for time series processing, forecasting or modeling follows a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both the researchers and practitioners having to rely on a series of trials and errors. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performances comparable to those of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
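
    A minimal numpy sketch of a deterministic simple-cycle reservoir of the kind described above: a ring of units sharing one cycle weight, fixed-magnitude input weights, and a trained linear (ridge) readout. The input sign pattern here is drawn randomly for brevity, whereas the deterministic construction fixes it, and all sizes and weights are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, r, v = 100, 0.9, 0.5                 # units, cycle weight, input weight

    # Deterministic cycle topology: unit i feeds unit i+1 with weight r.
    W = np.zeros((N, N))
    W[np.arange(1, N), np.arange(N - 1)] = r
    W[0, N - 1] = r
    w_in = v * np.where(rng.random(N) < 0.5, 1.0, -1.0)  # fixed +/- v inputs

    u = np.sin(np.arange(500) * 0.2)        # toy input stream
    states = np.zeros((len(u), N))
    x = np.zeros(N)
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + w_in * ut)      # fixed state-transition function
        states[t] = x

    # Train only the readout (ridge regression) to predict u one step ahead.
    X, y = states[:-1], u[1:]
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
    pred = states @ w_out
    ```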

  17. A 1050 K Stirling space engine design

    NASA Technical Reports Server (NTRS)

    Penswick, L. Barry

    1988-01-01

    As part of the NASA CSTI High Capacity Power Program on Conversion Systems for Nuclear Applications, Sunpower, Inc. completed for NASA Lewis a reference design of a single-cylinder free-piston Stirling engine that is optimized for the lifetimes and temperatures appropriate for space applications. The NASA effort is part of the overall SP-100 program which is a combined DOD/DOE/NASA project to develop nuclear power for space. Stirling engines have been identified as a growth option for SP-100 offering increased power output and lower system mass and radiator area. Superalloy materials are used in the 1050 K hot end of the engine; the engine temperature ratio is 2.0. The engine design features simplified heat exchangers with heat input by sodium heat pipes, hydrodynamic gas bearings, a permanent magnet linear alternator, and a dynamic balance system. The design shows an efficiency (including the alternator) of 29 percent and a specific mass of 5.7 kg/kW. This design also represents a significant step toward the 1300 K refractory Stirling engine which is another growth option of SP-100.

  18. Apparatus for Controlling Low Power Voltages in Space Based Processing Systems

    NASA Technical Reports Server (NTRS)

    Petrick, David J. (Inventor)

    2017-01-01

    A low power voltage control circuit for use in space missions includes a switching device coupled between an input voltage and an output voltage. The switching device includes a control input coupled to an enable signal, wherein the control input is configured to selectively turn the output voltage on or off based at least in part on the enable signal. A current monitoring circuit is coupled to the output voltage and configured to produce a trip signal, wherein the trip signal is active when a load current flowing through the switching device is determined to exceed a predetermined threshold and is inactive otherwise. The power voltage control circuit is constructed of space qualified components.

  19. OAST Space Theme Workshop. Volume 3: Working group summary. 6: Power (P-2). A. Statement. B. Technology needs (form 1). C. Priority assessment (form 2)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Power requirements for the multipurpose space power platform, for space industrialization, SETI, the solar system exploration facility, and for global services are assessed for various launch dates. Priorities and initiatives for the development of elements of space power systems are described for systems using light power input (solar energy source) or thermal power input (solar, chemical, nuclear, radioisotopes, reactors). Systems for power conversion, power processing, distribution and control are likewise examined.

  20. Nonlinear features for product inspection

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1999-03-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data.

  1. SOLDESIGN user's manual copyright

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pillsbury, R.D. Jr.

    1991-02-01

    SOLDESIGN is a general purpose program for calculating and plotting magnetic fields, Lorentz body forces, resistances and inductances for a system of coaxial uniform current density solenoidal elements. The program was originally written in 1980 and has been evolving ever since. SOLDESIGN can be used with either interactive (terminal) or file input. Output can be to the terminal or to a file. All input is free-field with comma or space separators. SOLDESIGN contains an interactive help feature that allows the user to examine documentation while executing the program. Input to the program consists of a sequence of word commands and numeric data. Initially, the geometry of the elements or coils is defined by specifying either the coordinates of one corner of the coil or the coil centroid, a symmetry parameter to allow certain reflections of the coil (e.g., a split pair), the radial and axial builds, and either the overall current density or the total ampere-turns (NI). A more general quadrilateral element is also available. If inductances or resistances are desired, the number of turns must be specified. Field, force, and inductance calculations also require the number of radial current sheets (or integration points). Work is underway to extend the field, force, and, possibly, inductances to non-coaxial solenoidal elements.

  2. Frequency-tuning input-shaped manifold-based switching control for underactuated space robot equipped with flexible appendages

    NASA Astrophysics Data System (ADS)

    Kojima, Hirohisa; Ieda, Shoko; Kasai, Shinya

    2014-08-01

    Underactuated control problems, such as the control of a space robot without actuators on the main body, have been widely investigated. However, few studies have examined attitude control problems of underactuated space robots equipped with a flexible appendage, such as solar panels. In order to suppress vibration in flexible appendages, a zero-vibration input-shaping technique was applied to the link motion of an underactuated planar space robot. However, because the vibrational frequency depends on the link angles, simple input-shaping control methods cannot sufficiently suppress the vibration. In this paper, the dependency of the vibrational frequency on the link angles is measured experimentally, and the time-delay interval of the input shaper is then tuned based on the frequency estimated from the link angles. The proposed control method is referred to as frequency-tuning input-shaped manifold-based switching control (frequency-tuning IS-MBSC). The experimental results reveal that frequency-tuning IS-MBSC is capable of controlling the link angles and the main body attitude to maintain the target angles and that the vibration suppression performance of the proposed frequency-tuning IS-MBSC is better than that of a non-tuning IS-MBSC, which does not take the frequency variation into consideration.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laros III, James H.; DeBonis, David; Grant, Ryan

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  4. Turtle 24.0 diffusion depletion code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altomare, S.; Barry, R.F.

    1971-09-01

    TURTLE is a two-group, two-dimensional (x-y, x-z, r-z) neutron diffusion code featuring a direct treatment of the nonlinear effects of xenon, enthalpy, and Doppler. Fuel depletion is allowed. TURTLE was written for the study of azimuthal xenon oscillations, but the code is useful for general analysis. The input is simple, fuel management is handled directly, and a boron criticality search is allowed. Ten thousand space points are allowed (over 20,000 with diagonal symmetry). TURTLE is written in FORTRAN IV and is tailored for the present CDC-6600. The program is core-contained. Provision is made to save data on tape for future reference.
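
    TURTLE itself solves the two-group equations in two dimensions; as a much-reduced illustration of the underlying numerics, the sketch below solves a one-group, one-dimensional slab criticality problem by finite differences and power iteration. The cross-section values are illustrative assumptions, and nothing here reproduces TURTLE's treatment of xenon, enthalpy, Doppler or depletion.

```python
import numpy as np

# One-group, 1-D slab: -D phi'' + Sa phi = (1/k) nuSf phi, phi = 0 at the edges.
D, Sa, nuSf, L, n = 1.0, 0.08, 0.10, 100.0, 200   # illustrative constants
h = L / (n + 1)

# Tridiagonal loss operator A = -D d2/dx2 + Sa (vacuum boundaries).
main = (2 * D / h**2 + Sa) * np.ones(n)
off = (-D / h**2) * np.ones(n - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

phi, k = np.ones(n), 1.0
for _ in range(500):                  # power iteration on the fission source
    phi_new = np.linalg.solve(A, nuSf * phi / k)
    k *= phi_new.sum() / phi.sum()    # eigenvalue update from the source ratio
    phi = phi_new
print(f"k_eff ~ {k:.4f}")
```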

  5. A methodology for designing robust multivariable nonlinear control systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Grunberg, D. B.

    1986-01-01

    A new methodology is described for the design of nonlinear dynamic controllers for nonlinear multivariable systems providing guarantees of closed-loop stability, performance, and robustness. The methodology is an extension of the Linear-Quadratic-Gaussian with Loop-Transfer-Recovery (LQG/LTR) methodology for linear systems, thus hinging upon the idea of constructing an approximate inverse operator for the plant. A major feature of the methodology is a unification of both the state-space and input-output formulations. In addition, new results on stability theory, nonlinear state estimation, and optimal nonlinear regulator theory are presented, including the guaranteed global properties of the extended Kalman filter and optimal nonlinear regulators.

  6. Self-supervised ARTMAP.

    PubMed

    Amis, Gregory P; Carpenter, Gail A

    2010-03-01

    Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semi-supervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.bu.edu/SSART/. Copyright 2009 Elsevier Ltd. All rights reserved.

  7. Ionospheric Tomography Using Faraday Rotation of Automatic Dependent Surveillance Broadcast UHF Signals

    NASA Astrophysics Data System (ADS)

    Cushley, A. C.

    2013-12-01

    The proposed launch of a satellite carrying the first space-borne ADS-B receiver by the Royal Military College of Canada (RMCC) will create a unique opportunity to study the modification of the 1090 MHz radio waves following propagation through the ionosphere from the transmitting aircraft to the passive satellite receiver(s). Experimental work successfully demonstrated that ADS-B data can be used to reconstruct two-dimensional (2D) electron density maps of the ionosphere using computerized tomography (CT). The goal of this work is to evaluate the feasibility of CT reconstruction. The data are modelled using ray-tracing techniques, which allows us to determine the characteristics of individual waves, including the wave path and the state of polarization at the satellite receiver. The modelled Faraday rotation (FR) is determined and converted to total electron content (TEC) along the ray-paths. The resulting TEC is used as input for computerized ionospheric tomography (CIT) using the algebraic reconstruction technique (ART). This study concentrated on meso-scale structures 100-1000 km in horizontal extent. The primary scientific interest of this thesis was to show the feasibility of a new method to image the ionosphere and obtain a better understanding of magneto-ionic wave propagation. [Figure: multiple-feature input electron density profile supplied to the ray-tracing program; reconstructed relative electron density maps from TEC measurements along line-of-sight paths, with and without a quiet-background a priori estimate.]
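
    The ART step at the heart of such a reconstruction is compact enough to sketch. Below is a minimal Kaczmarz-style implementation, assuming a geometry matrix A of ray path lengths through image pixels and a vector b of slant TEC values derived from Faraday rotation; the relaxation factor and iteration count are illustrative.

```python
import numpy as np

def art(A, b, n_iters=50, relax=0.2, x0=None):
    """Algebraic reconstruction technique (Kaczmarz row sweeps).

    A: (n_rays, n_pixels) path length of each ray through each pixel.
    b: (n_rays,) slant TEC along each ray.
    Returns the electron-density image as a flat vector.
    """
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float).copy()
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            resid = b[i] - A[i] @ x          # mismatch on this ray
            x += relax * resid / row_norms[i] * A[i]
        x = np.clip(x, 0, None)             # electron density is non-negative
    return x
```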

  8. Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 3: Refined conceptual design report

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The results of the refined conceptual design phase (task 5) of the Simulation Computer System (SCS) study are reported. The SCS is the computational portion of the Payload Training Complex (PTC) providing simulation based training on payload operations of the Space Station Freedom (SSF). In task 4 of the SCS study, the range of architectures suitable for the SCS was explored. Identified system architectures, along with their relative advantages and disadvantages for SCS, were presented in the Conceptual Design Report. Six integrated designs, combining the most promising features from the architectural formulations, were additionally identified in the report. The six integrated designs were evaluated further to distinguish the more viable designs to be refined as conceptual designs. The three designs that were selected represent distinct approaches to achieving a capable and cost effective SCS configuration for the PTC. Here, the results of task 4 (input to this task) are briefly reviewed. Then, prior to describing individual conceptual designs, the PTC facility configuration and the SSF systems architecture that must be supported by the SCS are reviewed. Next, basic features of SCS implementation that have been incorporated into all selected SCS designs are considered. The details of the individual SCS designs are then presented before making a final comparison of the three designs.

  9. Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.

    PubMed

    Demartines, P; Herault, J

    1997-01-01

    We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.

  10. Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing.

    PubMed

    Ghesu, Florin C; Krubasik, Edward; Georgescu, Bogdan; Singh, Vivek; Yefeng Zheng; Hornegger, Joachim; Comaniciu, Dorin

    2016-05-01

    Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow, from diagnosis and patient stratification through therapy planning, intervention and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: efficiency in scanning high-dimensional parametric spaces, and the need for representative image features, which currently require significant manual engineering effort. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, 9 parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive number of scanning hypotheses (billions). The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, in our system we learn sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45.2% over the state of the art. To our knowledge, this is the first successful demonstration of the potential of DL for detection and segmentation in full 3D data with parametrized representations.

  11. [Severity classification of chronic obstructive pulmonary disease based on deep learning].

    PubMed

    Ying, Jun; Yang, Ceyuan; Li, Quanzheng; Xue, Wanguo; Li, Tanshi; Cao, Wenzhe

    2017-12-01

    In this paper, a deep learning method is proposed to build an automatic classification algorithm for the severity of chronic obstructive pulmonary disease. Large-sample clinical data were used as input features and analyzed for their weights in classification. Through feature selection, model training, parameter optimization and model testing, a classification prediction model based on a deep belief network was built to predict the severity classification criteria issued by the Global Initiative for Chronic Obstructive Lung Disease (GOLD). Prediction accuracy exceeded 90% for the two standardized versions of the severity criteria issued in 2007 and 2011, respectively. Moreover, we also obtained the contribution ranking of the different input features by analyzing the model coefficient matrix, and confirmed a fair degree of agreement between the most contributive input features and clinical diagnostic knowledge. These results support the validity of the deep belief network model. This study provides an effective solution for the application of deep learning methods in automatic diagnostic decision making.

  12. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

    For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.

  13. Utilizing Hierarchical Clustering to improve Efficiency of Self-Organizing Feature Map to Identify Hydrological Homogeneous Regions

    NASA Astrophysics Data System (ADS)

    Farsadnia, Farhad; Ghahreman, Bijan

    2016-04-01

    Hydrologic homogeneous group identification is considered both fundamental and applied research in hydrology. Clustering methods are among the conventional methods used to delineate hydrologically homogeneous regions. Recently, the Self-Organizing feature Map (SOM) method has been applied in some studies; however, the main problem with this method is the interpretation of its output map. Therefore, the SOM is used as input to other clustering algorithms. The aim of this study is to apply a two-level Self-Organizing feature Map and the Ward hierarchical clustering method to determine the hydrologically homogeneous regions in the North and Razavi Khorasan provinces. First, principal component analysis was used to reduce the dimension of the SOM input matrix; the SOM was then used to form a two-dimensional feature map. To determine homogeneous regions for flood frequency analysis, the SOM output nodes were used as input to the Ward method. Generally, the regions identified by clustering algorithms are not statistically homogeneous, so they have to be adjusted to improve their homogeneity. After adjusting the regions for homogeneity with L-moment tests, five hydrologically homogeneous regions were identified. Finally, the adjusted regions were created by a two-level SOM, and the best regional distribution function and associated parameters were selected by the L-moment approach. The results showed that the combination of self-organizing maps and Ward hierarchical clustering with principal components as input is more effective than the hierarchical method with principal components or standardized inputs for identifying hydrologically homogeneous regions.
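
    A minimal sketch of the two-level idea follows: train a small SOM on the (already PCA-reduced) site attributes, then apply Ward clustering to the SOM prototypes and let each site inherit the label of its best-matching node. The grid size, iteration schedule and placeholder data are assumptions for illustration; the L-moment homogeneity adjustment is not shown.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def train_som(X, grid=(6, 6), n_iters=2000, seed=0):
    """Minimal rectangular SOM; returns a codebook of shape (n_nodes, n_features)."""
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    w = X[rng.integers(0, len(X), n_nodes)].astype(float)
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(n_iters):
        lr = 0.5 * (1 - t / n_iters)                    # decaying learning rate
        sigma = max(grid) / 2 * (1 - t / n_iters) + 0.5  # shrinking neighborhood
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))      # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma**2))
        w += lr * h[:, None] * (x - w)
    return w

X = np.random.rand(120, 8)               # placeholder site-by-attribute matrix
codebook = train_som(X)
Z = linkage(codebook, method="ward")     # second level: Ward on SOM prototypes
node_labels = fcluster(Z, t=5, criterion="maxclust")    # 5 candidate regions
site_bmus = np.argmin(((X[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
site_labels = node_labels[site_bmus]     # each site inherits its node's region
```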

  14. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  15. A space transportation system operations model

    NASA Technical Reports Server (NTRS)

    Morris, W. Douglas; White, Nancy H.

    1987-01-01

    Presented is a description of a computer program which permits assessment of the operational support requirements of space transportation systems functioning in both a ground- and space-based environment. The scenario depicted provides for the delivery of payloads from Earth to a space station and beyond using upper stages based at the station. Model results are scenario dependent and rely on the input definitions of delivery requirements, task times, and available resources. Output is in terms of flight rate capabilities, resource requirements, and facility utilization. A general program description, program listing, input requirements, and sample output are included.

  16. Vestibular response to pseudorandom angular velocity input: progress report.

    PubMed

    Lessard, C S; Wong, W C

    1987-09-01

    Space motion sickness was not reported during the first Apollo missions; however, since Apollo 8 through the current Shuttle and Skylab missions, approximately 50% of the crewmembers have experienced instances of space motion sickness. One of NASA's efforts to resolve the space adaptation syndrome is to model the vestibular response for both basic knowledge and as a possible predictor of an individual's susceptibility to the disorder. This report describes a method to analyze the vestibular system when subjected to a pseudorandom angular velocity input.

  17. Deep Learning Based Binaural Speech Separation in Reverberant Environments.

    PubMed

    Zhang, Xueliang; Wang, DeLiang

    2017-05-01

    Speech signals are usually degraded by room reverberation and additive noise in real environments. This paper focuses on separating the target speech signal in reverberant conditions from binaural inputs. Binaural separation is formulated as a supervised learning problem, and we employ deep learning to map from both spatial and spectral features to a training target. With binaural inputs, we first apply a fixed beamformer and then extract several spectral features. A new spatial feature is proposed and extracted to complement the spectral features. The training target is the recently suggested ideal ratio mask. Systematic evaluations and comparisons show that the proposed system achieves very good separation performance and substantially outperforms related algorithms under challenging multi-source and reverberant environments.
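
    The training target mentioned above, the ideal ratio mask, has a commonly used closed form that is easy to sketch; the version below assumes magnitude spectrograms of the premixed target and interference and the conventional exponent beta = 0.5.

```python
import numpy as np

def ideal_ratio_mask(speech_mag, noise_mag, beta=0.5):
    """IRM per time-frequency unit: (S^2 / (S^2 + N^2))^beta.

    speech_mag, noise_mag: magnitude spectrograms of the premixed target
    and interference, available only when constructing training targets.
    """
    s2, n2 = speech_mag ** 2, noise_mag ** 2
    return (s2 / (s2 + n2 + 1e-12)) ** beta   # epsilon avoids division by zero

# A DNN is trained to predict this mask from the binaural features; at test
# time the predicted mask is applied to the mixture spectrogram.
```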

  18. Robust skin color-based moving object detection for video surveillance

    NASA Astrophysics Data System (ADS)

    Kaliraj, Kalirajan; Manimaran, Sudha

    2016-07-01

    Robust skin color-based moving object detection for video surveillance is proposed. The objective of the proposed algorithm is to detect and track the target under complex situations. The proposed framework comprises four stages: preprocessing, skin color-based feature detection, feature classification, and target localization and tracking. In the preprocessing stage, the input image frame is smoothed using an averaging filter and transformed into the YCrCb color space. In skin color detection, skin color regions are detected using Otsu's method of global thresholding. In feature classification, histograms of both skin and nonskin regions are constructed, and the features are classified into foregrounds and backgrounds with a Bayesian skin color classifier. The foreground skin regions are localized by a connected-component labeling process. Finally, the localized foreground skin regions are confirmed as targets by verifying the region properties, and nontarget regions are rejected using the Euler method. At last, the target is tracked by enclosing a bounding box around the target region in all video frames. The experiment was conducted on various publicly available data sets and the performance was evaluated against baseline methods. The results show that the proposed algorithm works well against slowly varying illumination, target rotation, scaling, and fast, abrupt motion changes.
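
    The early stages named above map almost directly onto OpenCV calls. The sketch below is a loose illustration under stated assumptions: it smooths the frame, thresholds the Cr chroma channel with Otsu's method and keeps large connected components, while the Bayesian skin/nonskin classification and Euler-number verification from the paper are omitted.

```python
import cv2

def detect_skin_regions(frame_bgr, min_area=400):
    """Rough sketch of the preprocessing and detection stages."""
    smoothed = cv2.blur(frame_bgr, (5, 5))                  # averaging filter
    ycrcb = cv2.cvtColor(smoothed, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1]                                     # Cr chroma channel
    _, mask = cv2.threshold(cr, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                                   # label 0 is background
        x, y, w, h, area = stats[i]
        if area >= min_area:                                # crude size filter
            boxes.append((x, y, w, h))
    return boxes
```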

  19. Long-term scale adaptive tracking with kernel correlation filters

    NASA Astrophysics Data System (ADS)

    Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui

    2018-04-01

    Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and object out-of-view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked from cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of the Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.

  20. Computer program for analysis of coupled-cavity traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Connolly, D. J.; Omalley, T. A.

    1977-01-01

    A flexible, accurate, large signal computer program was developed for the design of coupled cavity traveling wave tubes. The program is written in FORTRAN IV for an IBM 360/67 time sharing system. The beam is described by a disk model and the slow wave structure by a sequence of cavities, or cells. The computational approach is arranged so that each cavity may have geometrical or electrical parameters different from those of its neighbors. This allows the program user to simulate a tube of almost arbitrary complexity. Input and output couplers, severs, complicated velocity tapers, and other features peculiar to one or a few cavities may be modeled by a correct choice of input data. The beam-wave interaction is handled by an approach in which the radio frequency fields are expanded in solutions to the transverse magnetic wave equation. All significant space harmonics are retained. The program was used to perform a design study of the traveling-wave tube developed for the Communications Technology Satellite. Good agreement was obtained between the predictions of the program and the measured performance of the flight tube.

  1. A mathematical model of neuro-fuzzy approximation in image classification

    NASA Astrophysics Data System (ADS)

    Gopalan, Sasi; Pinto, Linu; Sheela, C.; Arun Kumar M., N.

    2016-06-01

    Image digitization and the explosion of the World Wide Web have made traditional search an inefficient method for retrieving required grassland image data from large databases. For a given input query image, a Content-Based Image Retrieval (CBIR) system retrieves similar images from a large database. Advances in technology have increased the use of grassland image data in diverse areas such as agriculture, art galleries, education and industry. In all of these areas it is necessary to retrieve grassland image data efficiently from a large database in order to perform an assigned task and make a suitable decision. This paper proposes a CBIR system based on grassland image properties that uses a feed-forward back-propagation neural network for effective image retrieval. Fuzzy memberships play an important role in the input space of the proposed system, leading to a combined neuro-fuzzy approximation in image classification. The mathematical model in the proposed work gives more clarity about the fuzzy-neuro approximation and the convergence of the image features in a grassland image.

  2. An incremental approach to genetic-algorithms-based classification.

    PubMed

    Guan, Sheng-Uei; Zhu, Fangming

    2005-04-01

    Incremental learning has been widely addressed in the machine learning literature to cope with learning tasks where the learning environment is ever changing or training samples become available over time. However, most research work explores incremental learning with statistical algorithms or neural networks, rather than evolutionary algorithms. The work in this paper employs genetic algorithms (GAs) as basic learning algorithms for incremental learning within one or more classifier agents in a multiagent environment. Four new approaches with different initialization schemes are proposed. They keep the old solutions and use an "integration" operation to integrate them with new elements to accommodate new attributes, while biased mutation and crossover operations are adopted to further evolve a reinforced solution. The simulation results on benchmark classification data sets show that the proposed approaches can deal with the arrival of new input attributes and integrate them with the original input space. It is also shown that the proposed approaches can be successfully used for incremental learning and improve classification rates as compared to the retraining GA. Possible applications for continuous incremental training and feature selection are also discussed.

  3. Hybrid clustering based fuzzy structure for vibration control - Part 1: A novel algorithm for building neuro-fuzzy system

    NASA Astrophysics Data System (ADS)

    Nguyen, Sy Dzung; Nguyen, Quoc Hung; Choi, Seung-Bok

    2015-01-01

    This paper presents a new algorithm, called B-ANFIS, for building an adaptive neuro-fuzzy inference system (ANFIS) from a training data set. In order to increase the accuracy of the model, the following issues are addressed. Firstly, a data-merging rule is proposed to build and perform a data-clustering strategy. Subsequently, a combination of clustering processes in the input data space and in the joint input-output data space is presented. The crucial reason for this task is to overcome problems related to initialization and contradictory fuzzy rules, which usually arise when building an ANFIS. The clustering process in the input data space is accomplished with a proposed merging-possibilistic clustering (MPC) algorithm. The effectiveness of this process is evaluated before the clustering process is resumed in the joint input-output data space. The optimal parameters obtained after completion of the clustering process are used to build the ANFIS. Simulations based on a numerical data set, 'Daily Data of Stock A', and measured data sets from a smart damper are performed to analyze and estimate accuracy. In addition, the convergence and robustness of the proposed algorithm are investigated using both theoretical and testing approaches.

  4. Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.

    PubMed

    Nath, Abhigyan; Subbiah, Karthikeyan

    2015-12-01

    Lipocalins are short in sequence length and perform several important biological functions. These proteins have less than 20% sequence similarity among paralogs. Experimentally identifying them is an expensive and time-consuming process. Computational methods based on sequence similarity for allocating putative members to this family are also largely ineffective owing to the low sequence similarity among family members. Consequently, machine learning methods become a viable alternative for their prediction, using underlying sequence- or structure-derived features as the input. Ideally, any machine-learning-based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. Near-perfect learning can be achieved by training the model with diverse types of input instances belonging to different regions of the entire input space. Furthermore, prediction performance can be improved by balancing the training set, as imbalanced data sets tend to bias predictions towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability without any classification bias, through diversified and balanced training sets, and (ii) enhanced prediction accuracy, by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised K-means clustering algorithm to create diversified clusters of input patterns, and created a diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, a probability-based classifier fusion scheme was applied to a boosted random forest algorithm (which produced greater sensitivity) and a K-nearest-neighbour algorithm (which produced greater specificity) to achieve predictive performance better than that of the individual base classifiers. The performance of models trained on the K-means-preprocessed training set is far better than that of models trained on randomly generated training sets. The proposed method achieved a sensitivity of 90.6%, specificity of 91.4% and accuracy of 91.0% on the first test set, and sensitivity of 92.9%, specificity of 96.2% and accuracy of 94.7% on the second blind test set. These results establish that diversifying the training set improves the performance of predictive models through superior generalization ability, and that balancing the training set improves prediction accuracy. For smaller data sets, unsupervised K-means-based sampling can be a more effective technique for increasing generalization than the usual random splitting method. Copyright © 2015 Elsevier Ltd. All rights reserved.
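
    The diversification-and-balancing step can be sketched in a few lines: cluster the input patterns with K-means and draw an equal number of training patterns from each cluster. The cluster count and per-cluster quota below are illustrative assumptions, and the classifier-fusion stage is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def balanced_training_set(X, n_clusters=10, per_cluster=20, seed=0):
    """Sample equally from K-means clusters to diversify/balance training data."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    idx = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        take = min(per_cluster, len(members))     # clusters may be small
        idx.extend(rng.choice(members, size=take, replace=False))
    return np.array(idx)    # row indices of the balanced training subset
```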

  5. Going beyond Input Quantity: "Wh"-Questions Matter for Toddlers' Language and Cognitive Development

    ERIC Educational Resources Information Center

    Rowe, Meredith L.; Leech, Kathryn A.; Cabrera, Natasha

    2017-01-01

    There are clear associations between the overall quantity of input children are exposed to and their vocabulary acquisition. However, by uncovering specific features of the input that matter, we can better understand the mechanisms involved in vocabulary learning. We examine whether exposure to "wh"-questions, a challenging quality of…

  6. Effects of Textual Enhancement and Input Enrichment on L2 Development

    ERIC Educational Resources Information Center

    Rassaei, Ehsan

    2015-01-01

    Research on second language (L2) acquisition has recently sought to include formal instruction into second and foreign language classrooms in a more unobtrusive and implicit manner. Textual enhancement and input enrichment are two techniques which are aimed at drawing learners' attention to specific linguistic features in input and at the same…

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhao; Chen-Wiegart, Yu-chen K.; Wang, Jun

    Three-phase three-dimensional (3D) microstructural reconstructions of lithium-ion battery electrodes are critical input for 3D simulations of electrode lithiation/delithiation, which provide a detailed understanding of battery operation. In this report, 3D images of a LiCoO2 electrode are achieved using focused ion beam-scanning electron microscopy (FIB-SEM), with clear contrast among the three phases: LiCoO2 particles, carbonaceous phases (carbon and binder) and the electrolyte space. The good contrast was achieved by utilizing an improved FIB-SEM sample preparation method that combined infiltration of the electrolyte space with a low-viscosity silicone resin and triple ion-beam polishing. Morphological parameters quantified include phase volume fraction, surface area, feature size distribution, connectivity, and tortuosity. Electrolyte tortuosity was determined using two different geometric calculations that were in good agreement. In conclusion, the electrolyte tortuosity distribution versus position within the electrode was found to be highly inhomogeneous; this will lead to inhomogeneous electrode lithiation/delithiation at high C-rates that could potentially cause battery degradation.

  9. How Does the Low-Rank Matrix Decomposition Help Internal and External Learnings for Super-Resolution.

    PubMed

    Wang, Shuang; Yue, Bo; Liang, Xuefeng; Jiao, Licheng

    2018-03-01

    Wisely utilizing internal and external learning methods is a new challenge in the super-resolution problem. To address this issue, we analyze the attributes of the two methodologies and make two observations about their recovered details: 1) they are complementary in both the feature space and the image plane, and 2) they distribute sparsely in the spatial space. These inspire us to propose a low-rank solution which effectively integrates the two learning methods and thereby achieves a superior result. To fit this solution, the internal learning method and the external learning method are tailored to produce multiple preliminary results. Our theoretical analysis and experiments prove that the proposed low-rank solution does not require massive inputs to guarantee performance, thereby simplifying the design of the two learning methods for the solution. Intensive experiments show the proposed solution improves on the single learning methods in both qualitative and quantitative assessments. Surprisingly, it shows superior capability on noisy images and outperforms state-of-the-art methods.

  10. On the use of feature selection to improve the detection of sea oil spills in SAR images

    NASA Astrophysics Data System (ADS)

    Mera, David; Bolon-Canedo, Veronica; Cotos, J. M.; Alonso-Betanzos, Amparo

    2017-03-01

    Fast and effective oil spill detection systems are crucial to ensure a proper response to environmental emergencies caused by hydrocarbon pollution on the ocean's surface. Typically, these systems uncover not only oil spills, but also a high number of look-alikes. The feature extraction is a critical and computationally intensive phase where each detected dark spot is independently examined. Traditionally, detection systems use an arbitrary set of features to discriminate between oil spills and look-alike phenomena. However, Feature Selection (FS) methods based on Machine Learning (ML) have proved to be very useful in real domains for enhancing the generalization capabilities of the classifiers, while discarding the existing irrelevant features. In this work, we present a generic and systematic approach, based on FS methods, for choosing a concise and relevant set of features to improve oil spill detection systems. We have compared five FS methods: Correlation-based feature selection (CFS), Consistency-based filter, Information Gain, ReliefF and Recursive Feature Elimination for Support Vector Machine (SVM-RFE). They were applied on a 141-input vector composed of features from a collection of outstanding studies. Selected features were validated via a Support Vector Machine (SVM) classifier and the results were compared with previous works. Test experiments revealed that the classifier trained with the 6-input feature vector proposed by SVM-RFE achieved the best accuracy and Cohen's kappa coefficient (87.1% and 74.06%, respectively). This is a smaller feature combination with similar or even better classification accuracy than previous works. This finding makes it possible to speed up the feature extraction phase without reducing classifier accuracy. Experiments also confirmed the significance of the geometrical features, since 75.0% of the features selected by the applied FS methods, as well as 66.67% of the proposed 6-input feature vector, belong to this category.
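
    Of the five compared methods, SVM-RFE has a direct scikit-learn counterpart. The sketch below wires it into a pipeline; X_train and y_train are assumed names, the 141-feature input and 6-feature target mirror the figures quoted above, and the estimator settings are illustrative rather than those used in the study.

```python
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Recursive feature elimination driven by a linear SVM, keeping 6 of the
# 141 dark-spot features, followed by an SVM classifier on the survivors.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=6, step=1)
clf = make_pipeline(StandardScaler(), selector, SVC(kernel="rbf"))

# clf.fit(X_train, y_train)            # X_train: (n_spots, 141), y: labels
# print(clf.score(X_test, y_test))     # accuracy on held-out dark spots
```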

  11. Synaptic State Matching: A Dynamical Architecture for Predictive Internal Representation and Feature Detection

    PubMed Central

    Tavazoie, Saeed

    2013-01-01

    Here we explore the possibility that a core function of sensory cortex is the generation of an internal simulation of sensory environment in real-time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input, and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single unifying computational framework. PMID:23991161

  12. The building loads analysis system thermodynamics (BLAST) program, Version 2. 0: input booklet. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, E.

    1979-06-01

    The Building Loads Analysis and System Thermodynamics (BLAST) program is a comprehensive set of subprograms for predicting energy consumption in buildings. There are three major subprograms: (1) the space load predicting subprogram, which computes hourly space loads in a building or zone based on user input and hourly weather data; (2) the air distribution system simulation subprogram, which uses the computed space load and user inputs describing the building air-handling system to calculate hot water or steam, chilled water, and electric energy demands; and (3) the central plant simulation program, which simulates boilers, chillers, onsite power generating equipment and solar energy systems and computes monthly and annual fuel and electrical power consumption and plant life cycle cost.

  13. Invariant-feature-based adaptive automatic target recognition in obscured 3D point clouds

    NASA Astrophysics Data System (ADS)

    Khuon, Timothy; Kershner, Charles; Mattei, Enrico; Alverio, Arnel; Rand, Robert

    2014-06-01

    Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, the electronic system, the A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition algorithm architecture described below is particularly devised for solving single-sensor classification non-parametrically. A feature set is extracted from an input point cloud, normalized, and classified by a neural network classifier. For instance, automatic target recognition in an urban area would require different feature sets from one in a dense foliage area. The figure above (see manuscript) illustrates the architecture of the feature-based adaptive signature extraction of 3D point clouds, including LIDAR, RADAR, and electro-optical data. This network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised and adaptive classifier with two modes: the training mode and the performing mode. For the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network for the decision class output. The network consists of three sequential functional modules. The first module extracts from the input cluster a set of singular-value features, or a feature vector. The feature vector is then input to the feature normalization module to be normalized and balanced before being fed to the neural net classifier for classification. The neural net can be trained on actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. If new novel data are added after the neural net has been trained, training is resumed until the net has incrementally learned the new data. The associative memory capability of the neural net enables this incremental learning. The back-propagation algorithm or a support vector machine can be utilized for the classification and recognition.

  14. Recursive Feature Extraction in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a csv file and the output is a csv file containing feature values for each node in the graph. The features are based on topological counts in the neighborhoods of each nodes, as well as recursive summaries of neighbors' features.
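
    A simplified re-implementation of the idea is easy to sketch: start from base topological features and repeatedly append aggregates of neighbors' features. The code below is a loose approximation using networkx, not the released ReFeX tool; the commented CSV lines mirror the described input/output interface.

```python
import networkx as nx
import numpy as np

def refex_like_features(G, n_recursions=2):
    """Base features plus recursive neighbor summaries (a simplified ReFeX)."""
    nodes = list(G.nodes)
    pos = {n: i for i, n in enumerate(nodes)}
    clust = nx.clustering(G)
    feats = np.array([[G.degree(n), clust[n]] for n in nodes], dtype=float)
    for _ in range(n_recursions):
        means = np.zeros_like(feats)
        sums = np.zeros_like(feats)
        for n in nodes:
            nbrs = [pos[m] for m in G.neighbors(n)]
            if nbrs:
                means[pos[n]] = feats[nbrs].mean(axis=0)
                sums[pos[n]] = feats[nbrs].sum(axis=0)
        feats = np.hstack([feats, means, sums])   # feature count triples each pass
    return nodes, feats

# CSV in / CSV out, mirroring the described interface:
# G = nx.read_edgelist("graph.csv", delimiter=",")
# nodes, F = refex_like_features(G)
# np.savetxt("features.csv", F, delimiter=",")
```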

  15. An interactive framework for acquiring vision models of 3-D objects from 2-D images.

    PubMed

    Motai, Yuichi; Kak, Avinash

    2004-02-01

    This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. For another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We will show results on both polygonal objects and object containing curved features.

  16. Principal polynomial analysis.

    PubMed

    Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus

    2014-11-01

    This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves, instead of straight lines. Contrary to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it permits understanding the identified features in the input domain, where the data have physical meaning. Moreover, it allows evaluating the performance of dimensionality reduction in sensible (input-domain) units. Volume preservation also allows easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. And fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, on both synthetic and real datasets from the UCI repository.
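
    A minimal sketch of one PPA deflation step is given below, under simplifying assumptions: project the centered data onto the leading eigenvector, then fit each orthogonal coordinate as a univariate polynomial in that projection. The bookkeeping that gives the full method its volume preservation and closed-form inverse is omitted.

```python
import numpy as np

def ppa_first_component(X, degree=3):
    """One PPA-style step: replace the first principal *line* with a curve.

    Returns the scores along the leading direction and the residual left
    after subtracting the fitted principal polynomial, which the next
    step would process in turn.
    """
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v = Vt[0]                         # leading principal direction
    t = Xc @ v                        # scores along that direction
    R = Xc - np.outer(t, v)           # orthogonal complement of the projection
    V = np.vander(t, degree + 1)      # polynomial design matrix in t
    coef, *_ = np.linalg.lstsq(V, R, rcond=None)   # univariate regressions
    R_hat = V @ coef                  # the fitted principal polynomial (curve)
    return t, R - R_hat               # score plus deflated residual
```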

  17. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    PubMed

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interaction in assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, using both global and local methods. Since 2D videos are sensitive to changes in lighting conditions, view angle, and scale, researchers have in recent years begun to explore applications of 3D information to human activity understanding. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries, and a set of sparse histograms of the projection coefficients is constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.

  18. Absorption spectrum of a two-level atom in a bad cavity with injected squeezed vacuum

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Swain, S.

    1996-02-01

    We study the absorption spectrum of a coherently driven two-level atom interacting with a resonant cavity mode which is coupled to a broadband squeezed vacuum through its input-output mirror, in the bad-cavity limit. We study the modification of the two-photon correlation strength of the injected squeezed vacuum inside the cavity, and show that the equations describing probe absorption in the cavity environment are formally identical to those in free space, but with modified parameters describing the squeezed vacuum. The two-photon correlations induced by the squeezed vacuum are always weaker than in free space. We pay particular attention to the spectral behaviour at line centre in the region of intermediate-strength driving intensities, where anomalous spectral features such as hole-burning and dispersive profiles are displayed. These unusual spectral features are very sensitive to the squeezing phase and the Rabi frequency of the driving field. We also derive the threshold value of the Rabi frequency which gives rise to transparency of the probe beam at the driving frequency. When the Rabi frequency is less than the threshold value, the probe beam is absorbed, whilst the probe beam is amplified (without population inversion under certain conditions) when the Rabi frequency is larger than this threshold. The anomalous spectral features all take place in the vicinity of the critical point dividing the different dynamical regimes, probe absorption and amplification, of the atomic radiation. The physical origin of the strong amplification without population inversion, and the feasibility of observing it, are discussed.

  19. Interrelating meteorite and asteroid spectra at UV-Vis-NIR wavelengths using novel multiple-scattering methods

    NASA Astrophysics Data System (ADS)

    Martikainen, Julia; Penttilä, Antti; Gritsevich, Maria; Muinonen, Karri

    2017-10-01

    Asteroids have remained mostly the same for the past 4.5 billion years, and provide us with information on the origin, evolution and current state of the Solar System. Asteroids and meteorites can be linked by matching their respective reflectance spectra. This is difficult, because spectral features depend strongly on surface properties, and meteorite surfaces are free of the regolith dust present on asteroids. Furthermore, asteroid surfaces experience space weathering, which affects their spectral features. We present a novel simulation framework for assessing the spectral properties of meteorites and asteroids and matching their reflectance spectra. The simulations are carried out by utilizing a light-scattering code that takes inhomogeneous waves into account and simulates light scattering by Gaussian-random-sphere particles large compared to the wavelength of the incident light. The code uses incoherent input and computes phase matrices by utilizing incoherent scattering matrices. Reflectance spectra are modeled by combining olivine, pyroxene, and iron, the most common materials that dominate the spectral features of asteroids and meteorites. Space weathering is taken into account by adding nanoiron into the modeled asteroid spectrum. The complex refractive indices needed for the simulations are obtained from existing databases, or derived using an optimization that utilizes our ray-optics code and the measured spectrum of the material. We demonstrate our approach by applying it to the reflectance spectrum of (4) Vesta and the reflectance spectrum of the Johnstown meteorite measured with the University of Helsinki integrating-sphere UV-Vis-NIR spectrometer. Acknowledgments: The research is funded by the ERC Advanced Grant No. 320773 (SAEMPL).

  20. Exploration versus exploitation in space, mind, and society

    PubMed Central

    Hills, Thomas T.; Todd, Peter M.; Lazer, David; Redish, A. David; Couzin, Iain D.

    2015-01-01

    Search is a ubiquitous property of life. Although diverse domains have worked on search problems largely in isolation, recent trends across disciplines indicate that the formal properties of these problems share similar structures and, often, similar solutions. Moreover, internal search (e.g., memory search) shows similar characteristics to external search (e.g., spatial foraging), including shared neural mechanisms consistent with a common evolutionary origin across species. Search problems and their solutions also scale from individuals to societies, underlying and constraining problem solving, memory, information search, and scientific and cultural innovation. In summary, search represents a core feature of cognition, with a vast influence on its evolution and processes across contexts and requiring input from multiple domains to understand its implications and scope. PMID:25487706

  1. NECAP 4.1: NASA's Energy-Cost Analysis Program input manual

    NASA Technical Reports Server (NTRS)

    Jensen, R. N.

    1982-01-01

    The computer program NECAP (NASA's Energy Cost Analysis Program) is described. The program is a versatile building design and energy analysis tool which embodies state-of-the-art techniques for performing thermal load calculations and energy use predictions. With the program, comparisons of building designs and operational alternatives for new or existing buildings can be made. The major feature of the program is the response factor technique for calculating heat transfer through the building surfaces, which accounts for the building's mass. The program expands the response factor technique into a space response factor to account for internal building temperature swings; this is extremely important in determining true building loads and energy consumption when internal temperatures are allowed to swing.

  2. A high power, pulsed, microwave amplifier for a synthetic aperture radar electrical model. Phase 1: Design

    NASA Astrophysics Data System (ADS)

    Atkinson, J. E.; Barker, G. G.; Feltham, S. J.; Gabrielson, S.; Lane, P. C.; Matthews, V. J.; Perring, D.; Randall, J. P.; Saunders, J. W.; Tuck, R. A.

    1982-05-01

    An electrical model klystron amplifier was designed. Its features include a gridded gun, a single-stage depressed collector, a rare-earth permanent-magnet focusing system, an input loop, six rugged tuners and a coaxial-line output section incorporating a coaxial-to-waveguide transducer and a pillbox window. At each stage of the design, the thermal and mechanical aspects were investigated and optimized within the framework of the RF specification. Extensive use was made of data from the preliminary design study and from RF measurements on the breadboard model. An additional study produced a comprehensive draft tube specification, and a second additional study placed great emphasis on space-qualified materials and processes.

  3. Chemical sensors are hybrid-input memristors

    NASA Astrophysics Data System (ADS)

    Sysoev, V. I.; Arkhipov, V. E.; Okotrub, A. V.; Pershin, Y. V.

    2018-04-01

    Memristors are two-terminal electronic devices whose resistance depends on the history of the input signal (voltage or current). Here we demonstrate that chemical gas sensors can be considered as memristors with a generalized (hybrid) input, namely, an input consisting of the voltage, analyte concentration and applied temperature. The concept of hybrid-input memristors is demonstrated experimentally using a single-walled carbon nanotube chemical sensor. It is shown that with respect to the hybrid input, the sensor exhibits features in common with memristors, such as hysteretic input-output characteristics. This different perspective on chemical gas sensors may open new possibilities for smart sensor applications.
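
    The signature memristor behaviour, a pinched hysteresis loop whose shape also depends on the analyte level, can be illustrated with a toy state model. The state equation below is an assumption chosen only to produce hysteresis under a joint voltage-and-concentration input; it does not represent the carbon-nanotube sensor physics reported here.

      import numpy as np

      R_on, R_off = 1e3, 1e5           # bounding resistances (ohms), assumed
      dt, k_v, k_c = 1e-3, 5.0, 2.0    # time step and coupling constants, assumed

      def simulate(voltage, concentration):
          w = 0.5                       # internal memristive state in [0, 1]
          current = []
          for v, c in zip(voltage, concentration):
              # State drifts with both voltage and analyte level (hybrid input).
              dw = (k_v * v + k_c * (c - 0.5)) * w * (1 - w) * dt
              w = min(max(w + dw, 0.0), 1.0)
              R = R_on * w + R_off * (1 - w)
              current.append(v / R)
          return np.array(current)

      t = np.linspace(0, 2 * np.pi, 2000)
      v = np.sin(t)                             # periodic voltage sweep
      c = 0.5 + 0.4 * np.sin(0.5 * t)           # slowly varying analyte level
      i = simulate(v, c)
      # Plotting i against v traces a pinched hysteresis loop, the memristor
      # fingerprint; changing c shifts the loop, i.e. the hybrid-input effect.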

  4. Ergodic channel capacity of spatial correlated multiple-input multiple-output free space optical links using multipulse pulse-position modulation

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Cao, Minghua

    2017-02-01

    Spatial correlation exists extensively in multiple-input multiple-output (MIMO) free-space optical (FSO) communication systems due to channel fading and limited antenna spacing. Wilkinson's method was utilized to investigate the impact of spatial correlation on a MIMO FSO communication system employing multipulse pulse-position modulation. Simulation results show that the existence of spatial correlation reduces the ergodic channel capacity, and that reception diversity is more effective at resisting this kind of performance degradation.
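
    A Monte Carlo sketch of how spatial correlation depresses ergodic capacity, assuming a log-normal scintillation model, an exponential correlation matrix and the generic log-det capacity expression (the paper itself analyzes multipulse pulse-position modulation via Wilkinson's method):

      import numpy as np
      rng = np.random.default_rng(1)

      Nt = Nr = 2
      snr = 10.0                        # average electrical SNR (linear), assumed
      rho = 0.6                         # correlation between receive apertures
      # Exponential correlation matrix and its Cholesky factor.
      R = rho ** np.abs(np.subtract.outer(np.arange(Nr), np.arange(Nr)))
      L = np.linalg.cholesky(R)

      caps = []
      for _ in range(20000):
          # Unit-mean log-normal scintillation, correlated across apertures.
          G = np.exp(0.3 * (L @ rng.normal(size=(Nr, Nt))) - 0.045)
          caps.append(np.log2(np.linalg.det(np.eye(Nr) + (snr / Nt) * G @ G.T)))
      print("ergodic capacity ~", np.mean(caps), "bit/s/Hz")
      # Rerunning with rho = 0 shows the higher uncorrelated-channel capacity.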

  5. Method of generating features optimal to a dataset and classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.

    A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.

  6. A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Goldberg, Hirsh; Nasrabadi, Nasser M.

    2007-04-01

    In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual-window technique is used to separate the local area around each pixel into two regions: an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick, all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectrum and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation between the projections of the current test pixel spectrum and the OWR mean spectrum is greater than a certain threshold. Comparisons are made using receiver operating characteristic (ROC) curves.
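
    A sketch of the linear, dual-window RX detector described above (the kernel variants replace the dot products in this computation with a Mercer kernel); the window sizes and regularization constant are assumptions:

      import numpy as np

      def local_rx_score(cube, row, col, inner=3, outer=9):
          # RX anomaly score for one pixel using an inner/outer dual window.
          half_o, half_i = outer // 2, inner // 2
          block = cube[row - half_o:row + half_o + 1,
                       col - half_o:col + half_o + 1, :]
          block = block.reshape(-1, cube.shape[2])
          # OWR mask: the outer window with the inner window cut out.
          mask = np.ones((outer, outer), bool)
          mask[half_o - half_i:half_o + half_i + 1,
               half_o - half_i:half_o + half_i + 1] = False
          owr = block[mask.ravel()]
          mu = owr.mean(axis=0)
          cov = np.cov(owr, rowvar=False) + 1e-6 * np.eye(cube.shape[2])
          d = cube[row, col] - mu
          return float(d @ np.linalg.solve(cov, d))   # Mahalanobis distance

      cube = np.random.default_rng(2).normal(size=(32, 32, 10))
      cube[16, 16] += 4.0                               # implanted anomaly
      print(local_rx_score(cube, 16, 16), local_rx_score(cube, 8, 8))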

  7. Ring rolling process simulation for microstructure optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructural properties that most influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on the microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model of HRR has been developed in SFTC DEFORM V11, taking into account the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed to find the combination of process parameters that minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. Finally, the minimum value of the average grain size with respect to the input parameters has been found.
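
    The RSM-plus-evolutionary-search loop can be sketched as below, with a quadratic response surface fitted to synthetic "FEM" samples and SciPy's differential evolution standing in for the paper's genetic algorithm; the variable ranges and response function are invented for illustration.

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(3)
      # Stand-in for FEM results: grain size sampled at design points
      # (mandrel feed rate f, driver-roll angular velocity w).
      f = rng.uniform(0.5, 2.0, 30)
      w = rng.uniform(1.0, 4.0, 30)
      grain = 40 - 8*f + 3*f**2 - 5*w + 1.2*w**2 + rng.normal(0, 0.3, 30)

      # Quadratic response surface (the RSM step).
      X = np.column_stack([f, w])
      poly = PolynomialFeatures(degree=2)
      model = LinearRegression().fit(poly.fit_transform(X), grain)

      def surrogate(x):
          # Predicted grain size at a candidate (f, w) design point.
          return model.predict(poly.transform(x.reshape(1, -1)))[0]

      # Evolutionary minimization over the design space.
      res = differential_evolution(surrogate,
                                   bounds=[(0.5, 2.0), (1.0, 4.0)], seed=0)
      print("optimal (feed, omega):", res.x, "min grain size:", res.fun)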

  8. A novel approach for fire recognition using hybrid features and manifold learning-based classifier

    NASA Astrophysics Data System (ADS)

    Zhu, Rong; Hu, Xueying; Tang, Jiajun; Hu, Sheng

    2018-03-01

    Although image/video-based fire recognition has received growing attention, an efficient and robust fire detection strategy is rarely explored. In this paper, we propose a novel approach to automatically identify the flame or smoke regions in an image. It is composed of three stages: (1) a block processing step divides an image into several nonoverlapping image blocks, which are identified as suspicious fire regions or not by using two color models and a color histogram-based similarity matching method in the HSV color space; (2) since the flame and smoke regions have distinctive visual characteristics, two kinds of image features are extracted for fire recognition, where local features are obtained based on the Scale Invariant Feature Transform (SIFT) descriptor and the Bags of Keypoints (BOK) technique, and texture features are extracted based on the Gray Level Co-occurrence Matrices (GLCM) and Wavelet-based Analysis (WA) methods; and (3) a manifold learning-based classifier is constructed from two image manifolds, designed via an improved Globular Neighborhood Locally Linear Embedding (GNLLE) algorithm, and the extracted hybrid features are used as input feature vectors to train the classifier, which decides whether an image contains fire or not. Experiments and comparative analyses with four approaches are conducted on the collected image sets. The results show that the proposed approach is superior to the others in detecting fire, achieving a high recognition accuracy and a low error rate.
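
    Stage (1) can be sketched as a block-wise HSV screening; the thresholds below are illustrative assumptions, not the paper's color models:

      import numpy as np

      def suspicious_blocks(hsv_img, block=16,
                            h_range=(0, 60), s_min=0.4, v_min=0.5):
          # Flag nonoverlapping blocks whose mean HSV color is fire-like.
          H, W, _ = hsv_img.shape
          flags = []
          for r in range(0, H - block + 1, block):
              for c in range(0, W - block + 1, block):
                  h, s, v = hsv_img[r:r+block, c:c+block].reshape(-1, 3).mean(0)
                  if h_range[0] <= h <= h_range[1] and s >= s_min and v >= v_min:
                      flags.append((r, c))
          return flags

      # Toy image: hue in degrees, s and v in [0, 1]; a warm patch top-left.
      img = np.zeros((64, 64, 3)); img[..., 0] = 200.0
      img[:16, :16] = (30.0, 0.8, 0.9)
      print(suspicious_blocks(img))    # only the top-left block is flagged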

  9. A Power Conditioning Stage Based on Analog-Circuit MPPT Control and a Superbuck Converter for Thermoelectric Generators in Spacecraft Power Systems

    NASA Astrophysics Data System (ADS)

    Sun, Kai; Wu, Hongfei; Cai, Yan; Xing, Yan

    2014-06-01

    A thermoelectric generator (TEG) is a very important kind of power supply for spacecraft, especially for deep-space missions, due to its long lifetime and high reliability. To develop a practical TEG power supply for spacecraft, a power conditioning stage is indispensable; it converts the varying output voltage of the TEG modules to a definite voltage for feeding batteries or loads. To enhance system reliability, a power conditioning stage based on analog-circuit maximum-power-point tracking (MPPT) control and a superbuck converter is proposed in this paper. The input of this power conditioning stage is connected to the output of the TEG modules, and its output is connected to the battery and loads. The superbuck converter is employed as the main circuit, featuring low input current ripple and high conversion efficiency. Since reliable operation is the key requirement for control circuits in spacecraft power systems, a reset-set flip-flop-based analog circuit is used as the basic control circuit to implement MPPT; it is much simpler than digital control circuits and offers higher reliability. Experiments have verified the feasibility and effectiveness of the proposed power conditioning stage. The results show its advantages, such as maximum utilization of TEG power, small input ripple, and good stability.
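
    The paper implements MPPT with an analog reset-set flip-flop circuit; the digital perturb-and-observe loop below only illustrates the tracking principle on a toy TEG power curve (all constants are assumptions):

      def perturb_and_observe(v_meas, p_meas, v_prev, p_prev, step=0.05):
          # One P&O step: keep moving the operating voltage in the direction
          # that increased extracted power, otherwise reverse direction.
          if p_meas > p_prev:
              direction = 1 if v_meas > v_prev else -1
          else:
              direction = -1 if v_meas > v_prev else 1
          return v_meas + direction * step

      v_oc, r_int = 8.0, 2.0    # toy open-circuit voltage and source resistance

      def power(v):
          # Toy TEG power curve: peaks at half the open-circuit voltage.
          return v * (v_oc - v) / r_int

      v_prev, v = 1.0, 1.2
      for _ in range(100):
          v_next = perturb_and_observe(v, power(v), v_prev, power(v_prev))
          v_prev, v = v, v_next
      print("settled near MPP voltage:", round(v, 2), "(ideal:", v_oc / 2, ")")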

  10. Optimization of Systems with Uncertainty: Initial Developments for Performance, Robustness and Reliability Based Designs

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of the probability density functions and the moments, optimization problems that pursue performance-, robustness- and reliability-based designs are studied. By specifying the desired outputs in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed-form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.

  11. Stereoscopic Feature Tracking System for Retrieving Velocity of Surface Waters

    NASA Astrophysics Data System (ADS)

    Zuniga Zamalloa, C. C.; Landry, B. J.

    2017-12-01

    The present work is concerned with retrieving the surface velocity of flows using a stereoscopic setup and finding the correspondence between images via feature tracking (FT). Feature tracking provides the key benefit of substantially reducing the level of user input. In contrast to other commonly used methods (e.g., normalized cross-correlation), FT does not require the user to prescribe interrogation window sizes and removes the need for masking when specularities are present. The results of the current FT methodology are comparable to those obtained via Large Scale Particle Image Velocimetry while requiring little to no user input, which allows rapid, automated processing of imagery.
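
    A minimal single-camera sketch of the FT idea using OpenCV corner detection and pyramidal Lucas-Kanade tracking; the stereo triangulation that converts matched pixel motion into surface velocities is omitted, and the detector/tracker settings are assumptions:

      import cv2
      import numpy as np

      def track_features(frame0, frame1, max_pts=200):
          # Detect corners in the first frame, then track them into the next.
          pts0 = cv2.goodFeaturesToTrack(frame0, maxCorners=max_pts,
                                         qualityLevel=0.01, minDistance=5)
          pts1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, pts0, None)
          good = status.ravel() == 1
          return pts0[good].reshape(-1, 2), pts1[good].reshape(-1, 2)

      # Toy frames: random texture shifted by 2 pixels to the right.
      rng = np.random.default_rng(4)
      f0 = (rng.random((120, 160)) * 255).astype(np.uint8)
      f1 = np.roll(f0, 2, axis=1)
      p0, p1 = track_features(f0, f1)
      print("median displacement (px):", np.median(p1 - p0, axis=0))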

  12. Vibration and stress analysis of soft-bonded shuttle insulation tiles. Modal analysis with compact widely space stringers

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Austin, F.; Levy, A.

    1974-01-01

    An efficient iterative procedure is described for the vibration and modal stress analysis of reusable surface insulation (RSI) of multi-tiled space shuttle panels. The method, which is quite general, is rapidly convergent and highly useful for this application. A user-oriented computer program based upon this procedure and titled RESIST (REusable Surface Insulation Stresses) has been prepared for the analysis of compact, widely spaced, stringer-stiffened panels. RESIST, which uses finite element methods, obtains three dimensional tile stresses in the isolator, arrestor (if any) and RSI materials. Two dimensional stresses are obtained in the tile coating and the stringer-stiffened primary structure plate. A special feature of the program is that all the usual detailed finite element grid data is generated internally from a minimum of input data. The program can accommodate tile idealizations with up to 850 nodes (2550 degrees-of-freedom) and primary structure idealizations with a maximum of 10,000 degrees-of-freedom. The primary structure vibration capability is achieved through the development of a new rapid eigenvalue program named ALARM (Automatic LArge Reduction of Matrices to tridiagonal form).

  13. Intuitive Tools for the Design and Analysis of Communication Payloads for Satellites

    NASA Technical Reports Server (NTRS)

    Culver, Michael R.; Soong, Christine; Warner, Joseph D.

    2014-01-01

    In an effort to make future communications satellite payload design more efficient and accessible, two tools were created with intuitive graphical user interfaces (GUIs). The first tool allows payload designers to graphically design their payload by simple drag and drop of payload components onto a design area within the program. Information about each selected component is pulled from a database of common space-qualified communication components sold by commercial companies. Once a design is completed, various reports can be generated, such as the Master Equipment List. The second tool is a link budget calculator designed specifically for ease of use. Other features of this tool include access to a database of NASA ground-based apertures for near-Earth and deep-space communication, the Tracking and Data Relay Satellite System (TDRSS) base apertures, and information about the solar system relevant to link budget calculations. The link budget tool allows for over 50 different combinations of user inputs, eliminating the need for multiple spreadsheets and the user errors associated with them. Both of the aforementioned tools increase the productivity of space communication systems designers, and are accessible enough to allow non-experts in communications to design preliminary communication payloads.
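
    A minimal link budget computation of the kind such a calculator automates; the formulas are the standard EIRP and free-space-loss relations, and the numbers in the example are illustrative, not values from the described database:

      import math

      def eirp_dbw(tx_power_w, tx_gain_dbi, line_loss_db):
          # Effective isotropic radiated power in dBW.
          return 10 * math.log10(tx_power_w) + tx_gain_dbi - line_loss_db

      def free_space_loss_db(distance_m, freq_hz):
          # Standard free-space path loss: 20*log10(4*pi*d*f/c).
          c = 3.0e8
          return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

      def received_power_dbw(tx_power_w, tx_gain_dbi, line_loss_db,
                             distance_m, freq_hz, rx_gain_dbi):
          return (eirp_dbw(tx_power_w, tx_gain_dbi, line_loss_db)
                  - free_space_loss_db(distance_m, freq_hz) + rx_gain_dbi)

      # Example: 20 W Ka-band downlink from GEO to a large ground aperture.
      print(received_power_dbw(20, 40.0, 2.0, 3.6e7, 26e9, 60.0), "dBW")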

  14. Turbo-Brayton cryocooler technology for low-temperature space applications

    NASA Astrophysics Data System (ADS)

    Zagarola, Mark V.; Breedlove, Jeffrey F.; McCormick, John A.; Swift, Walter L.

    2003-03-01

    High-performance, low-temperature cryocoolers are being developed for future space-borne telescopes and instruments. To meet mission objectives, these coolers must be compact and lightweight, have low input power, operate reliably for 5-10 years, and produce no disturbances that would affect the pointing accuracy of the instruments. This paper describes progress in the development of turbo-Brayton cryocoolers addressing cooling in the 5 K to 20 K temperature range for loads of up to 300 mW. The key components of these cryocoolers are the miniature, high-speed turbomachines and the high-performance recuperative heat exchangers. The turbomachines use gas bearings to support the low-mass, high-speed rotors, resulting in negligible vibration and long life. Precision fabrication techniques are used to produce the necessary micro-scale geometric features that provide high cycle efficiencies at these reduced sizes. Turbo-Brayton cryocoolers for higher temperatures and loads have been successfully developed for space applications. For efficient operation at low temperatures and capacities, advances in the core technologies have been pursued. Performance test results for a new, low-power compressor will be presented, and early cryogenic test results on a low-temperature expansion turbine will be discussed. Projections for several low-temperature cooler configurations are summarized.

  15. High Energy Astrophysics and Cosmology from Space: NASA's Physics of the Cosmos Program

    NASA Astrophysics Data System (ADS)

    Bautz, Marshall

    2017-01-01

    We summarize currently funded NASA activities in high-energy astrophysics and cosmology embodied in the NASA Physics of the Cosmos program, including updates on technology development and mission studies. The portfolio includes participation in a space mission to measure gravitational waves from a variety of astrophysical sources, including binary black holes, throughout most of cosmic history, and in another to map the evolution of black hole accretion by means of the accompanying X-ray emission. These missions are envisioned as collaborations with the European Space Agency's Large 3 (L3) and Athena programs, respectively. The portfolio also features definition of a large, NASA-led X-ray observatory capable of tracing the surprisingly rapid growth of supermassive black holes during the first billion years of cosmic history. The program also includes the study of cosmic rays and high-energy gamma-ray photons resulting from a range of physical processes, and efforts to characterize both the physics of inflation associated with the birth of the universe and the nature of the dark energy that dominates its mass-energy content today. Finally, we describe the activities of the Physics of the Cosmos Program Analysis Group, which serves as a forum for community analysis and input to NASA.

  16. Role Discovery in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-14

    RolX takes the features from Re-FeX or any other feature matrix as input and outputs role assignments (clusters). The output of RolX is a csv file containing the node-role memberships and a csv file containing the role-feature definitions.

  17. Built spaces and features associated with user satisfaction in maternity waiting homes in Malawi.

    PubMed

    McIntosh, Nathalie; Gruits, Patricia; Oppel, Eva; Shao, Amie

    2018-07-01

    To assess satisfaction with maternity waiting home built spaces and features in women who are at risk for underutilizing maternity waiting homes (i.e. residential facilities that temporarily house near-term pregnant mothers close to healthcare facilities that provide obstetrical care). Specifically, we wanted to answer the questions: (1) Are built spaces and features associated with maternity waiting home user satisfaction? (2) Can built spaces and features designed to improve hygiene, comfort, privacy and function improve maternity waiting home user satisfaction? And (3) Which built spaces and features are most important for maternity waiting home user satisfaction? A cross-sectional study comparing satisfaction with standard and non-standard maternity waiting home designs. Between December 2016 and February 2017 we surveyed expectant mothers at two maternity waiting homes that differed in their design of built spaces and features. We used bivariate analyses to assess if built spaces and features were associated with satisfaction. We compared ratings of built spaces and features between the two maternity waiting homes using chi-squares and t-tests to assess if design features to improve hygiene, comfort, privacy and function were associated with higher satisfaction. We used exploratory robust regression analysis to examine the relationship between built spaces and features and maternity waiting home satisfaction. Two maternity waiting homes in Malawi, one that incorporated non-standardized design features to improve hygiene, comfort, privacy, and function (Kasungu maternity waiting home) and the other that had a standard maternity waiting home design (Dowa maternity waiting home). 322 expectant mothers at risk for underutilizing maternity waiting homes (i.e. first-time mothers and those with no pregnancy risk factors) who had stayed at the Kasungu or Dowa maternity waiting homes. There were significant differences in ratings of built spaces and features between the two differently designed maternity waiting homes, with the non-standard design having higher ratings for adequacy of toilets and for heating/cooling, air and water quality, sanitation, toilets/showers and kitchen facilities, building maintenance, sleep area, private storage space, comfort level, outdoor spaces and overall satisfaction (p < .0001 for all). The final regression model showed that the built spaces and features most important for maternity waiting home user satisfaction are toilets/showers, guardian spaces, safety, building maintenance, sleep area and private storage space (R² = 0.28). The design of maternity waiting home built spaces and features is associated with user satisfaction in women at risk for underutilizing maternity waiting homes, especially related to toilets/showers, guardian spaces, safety, building maintenance, sleep area and private storage space. Improving maternity waiting home built spaces and features may offer a promising avenue for improving maternity waiting home satisfaction and reducing barriers to maternity waiting home use. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Face recognition via Gabor and convolutional neural network

    NASA Astrophysics Data System (ADS)

    Lu, Tongwei; Wu, Menglu; Lu, Tao

    2018-04-01

    In recent years, the powerful feature learning and classification ability of convolutional neural networks has attracted wide attention. Compared with deep learning, traditional machine learning algorithms offer an interpretability that deep learning lacks. Thus, in this paper, we propose a method that uses features extracted by a traditional algorithm as the input to a convolutional neural network. To reduce the complexity of the network, Gabor wavelet kernels are used to extract features at different positions, frequencies and orientations of the target image; they are sensitive to image edges and provide good direction and scale selectivity. Features extracted at eight orientations on a single scale serve as the input to the proposed network. The network has the advantages of weight sharing and local connectivity, and the texture features of the input image reduce the influence of facial expression, pose and illumination. At the same time, we introduce a layer that combines the results of pooling and convolution to extract deeper features. The network was trained using the open-source Caffe framework, which is well suited to feature extraction. The experimental results show that the proposed network structure effectively overcomes the barrier of illumination, exhibits good robustness, and is more accurate and faster than the traditional algorithm.
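
    A sketch of the Gabor front end, assuming OpenCV's built-in Gabor kernels: eight orientations at a single scale are stacked into a multi-channel tensor that would be fed to the CNN (the kernel parameters are assumptions):

      import cv2
      import numpy as np

      def gabor_bank(img, n_orient=8, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
          # Filter the image with Gabor kernels at n_orient orientations on a
          # single scale; the stacked responses form the CNN input tensor.
          responses = []
          for k in range(n_orient):
              theta = k * np.pi / n_orient
              kern = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                        lambd, gamma, psi=0)
              responses.append(cv2.filter2D(img, cv2.CV_32F, kern))
          return np.stack(responses, axis=-1)      # shape (H, W, 8)

      face = (np.random.default_rng(5).random((64, 64)) * 255).astype(np.uint8)
      tensor = gabor_bank(face)
      print(tensor.shape)    # this (H, W, 8) tensor is the network input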

  19. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, B.; Wood, R.T.

    1997-04-22

    A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.

  20. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, Brian; Wood, Richard T.

    1997-01-01

    A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.

  1. A boundary PDE feedback control approach for the stabilization of mortgage price dynamics

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Sarno, D.

    2017-11-01

    Several transactions taking place in financial markets depend on the pricing of mortgages (loans for the purchase of residences, land or farms). In this article, a method for stabilization of mortgage price dynamics is developed. Mortgage prices are considered to follow a PDE model equivalent to a multi-asset Black-Scholes PDE: a diffusion process evolving in a 2D asset space, where the first asset is the house price and the second is the interest rate. By applying semi-discretization and a finite-differences scheme, this multi-asset PDE is transformed into a state-space model consisting of ordinary nonlinear differential equations. For the local subsystems into which the mortgage PDE is decomposed, it becomes possible to apply boundary-based feedback control. The controller design proceeds by showing that the state-space model of the mortgage price PDE is a differentially flat system. Next, for each subsystem, which is related to a nonlinear ODE, a virtual control input is computed that inverts the subsystem's dynamics and eliminates the subsystem's tracking error. From the last row of the state-space description, the control input (boundary condition) that is actually applied to the multi-factor mortgage price PDE system is found. This control input contains recursively all the virtual control inputs computed for the individual ODE subsystems associated with the previous rows of the state-space equation. Thus, by tracing the rows of the state-space model backwards, at each iteration of the control algorithm one finally obtains the control input that should be applied to the mortgage price PDE system so as to assure that all its state variables converge to the desired setpoints. By showing the feasibility of such a control method, it is also proven that through selected modification of the PDE boundary conditions the price of the mortgage can be made to converge and stabilize at specific reference values.
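
    A minimal sketch of the semi-discretization step, assuming pure diffusion terms with placeholder coefficients (the full model also carries drift and coupling terms, and the subsequent flatness-based boundary control is not shown):

      import numpy as np

      # Central differences in the two asset dimensions turn the pricing PDE
      # into coupled ODEs, one per interior grid node.
      n, h = 10, 0.1                 # grid size and spacing, assumed
      sigma1, sigma2 = 0.2, 0.1      # placeholder volatility coefficients

      def rhs(U):
          # du/dt at interior nodes of the (house price, interest rate) grid.
          dU = np.zeros_like(U)
          dU[1:-1, 1:-1] = (
              0.5 * sigma1**2
                  * (U[2:, 1:-1] - 2*U[1:-1, 1:-1] + U[:-2, 1:-1]) / h**2
            + 0.5 * sigma2**2
                  * (U[1:-1, 2:] - 2*U[1:-1, 1:-1] + U[1:-1, :-2]) / h**2
          )
          return dU

      U = np.random.default_rng(9).random((n, n))
      U += 0.01 * rhs(U)   # one explicit Euler step of the resulting ODE system
      # The boundary rows/columns of U are where the feedback control acts.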

  2. Image classification independent of orientation and scale

    NASA Astrophysics Data System (ADS)

    Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain

    1998-04-01

    The recognition of targets independently of orientation has become fairly well developed in recent years for in-plane rotation. The out-of-plane rotation problem is much less advanced. When both out-of-plane rotations and changes of scale are present, the problem becomes very difficult. In this paper we describe our research on the combined out-of-plane rotation and scale invariance problem. The rotations were limited to rotations about an axis perpendicular to the line of sight. The objects to be classified were three kinds of military vehicles. The inputs used were infrared imagery and photographs. We used a variation of a method proposed by Neiberg and Casasent, where a neural network is trained with a subset of the database and minimum distances from lines in feature space are used for classification instead of nearest neighbors. Each line in the feature space corresponds to one class of objects, and points on one line correspond to different orientations of the same target. We found that the training samples needed to be closer together for some orientations than for others, and that the most difficult orientations are those where the target is head-on to the observer. By means of some additional training of the neural network, we were able to achieve 100% correct classification over 360 degrees of rotation and a range of scales over a factor of five.
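
    The minimum-distance-to-line decision rule can be sketched directly; the two class lines below are invented toy data:

      import numpy as np

      def point_to_line_distance(x, p0, d):
          # Distance from feature vector x to the line p0 + t*d (d unit length).
          r = x - p0
          return np.linalg.norm(r - (r @ d) * d)

      def classify(x, lines):
          # Assign x to the class whose feature-space line is nearest.
          dists = [point_to_line_distance(x, p0, d) for p0, d in lines]
          return int(np.argmin(dists))

      # Two toy class lines; points along each line represent pose variations.
      lines = [(np.array([0.0, 0.0]), np.array([1.0, 0.0])),
               (np.array([0.0, 2.0]), np.array([0.6, 0.8]))]
      print(classify(np.array([3.0, 0.3]), lines))   # nearest to class 0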

  3. Bounding the errors for convex dynamics on one or more polytopes.

    PubMed

    Tresser, Charles

    2007-09-01

    We discuss the greedy algorithm for approximating a sequence of inputs in a family of polytopes lying in affine spaces by an output sequence made of vertices of the respective polytopes. More precisely, we consider here the case when the greed of the algorithm is dictated by the Euclidean norms of the successive cumulative errors. This algorithm can be interpreted as a time-dependent dynamical system in the vector space, where the errors live, or as a time-dependent dynamical system in an affine space containing copies of all the original polytopes. This affine space contains the inputs, as well as the inputs modified by adding the respective former errors; it is the evolution of these modified inputs that the dynamical system in affine space describes. Scheduling problems with many polytopes arise naturally, for instance, when the inputs are from a single polytope P, but one imposes the constraint that whenever the input belongs to a codimension n face, the output has to be in the same codimension n face (as when scheduling drivers among participants of a carpool). It has been previously shown that the error is bounded in the case of a single polytope by proving the existence of an arbitrary large convex invariant region for the dynamics in affine space: A region that is simultaneously invariant for several polytopes, each considered separately, was also constructed. It was then shown that there cannot be an invariant region in affine space in the general case of a family of polytopes. Here we prove the existence of an arbitrary large convex invariant set for the dynamics in the vector space in the case when the sizes of the polytopes in the family are bounded and the set of all the outgoing normals to all the faces of all the polytopes is finite. It was also previously known that starting from zero as the initial error set, the error set could not be saturated in finitely many steps in some cases with several polytopes: Contradicting a former conjecture, we show that the same happens for some single quadrilaterals and for a single pentagon with an axial symmetry. The disproof of that conjecture is the new piece of information that leads us to expect, and then to verify, as we recount here, that the proof that the errors are bounded in the general case could be a small step beyond the proof of the same statement for the single polytope case.

  4. Bounding the errors for convex dynamics on one or more polytopes

    NASA Astrophysics Data System (ADS)

    Tresser, Charles

    2007-09-01

    We discuss the greedy algorithm for approximating a sequence of inputs in a family of polytopes lying in affine spaces by an output sequence made of vertices of the respective polytopes. More precisely, we consider here the case when the greed of the algorithm is dictated by the Euclidean norms of the successive cumulative errors. This algorithm can be interpreted as a time-dependent dynamical system in the vector space, where the errors live, or as a time-dependent dynamical system in an affine space containing copies of all the original polytopes. This affine space contains the inputs, as well as the inputs modified by adding the respective former errors; it is the evolution of these modified inputs that the dynamical system in affine space describes. Scheduling problems with many polytopes arise naturally, for instance, when the inputs are from a single polytope P, but one imposes the constraint that whenever the input belongs to a codimension n face, the output has to be in the same codimension n face (as when scheduling drivers among participants of a carpool). It has been previously shown that the error is bounded in the case of a single polytope by proving the existence of an arbitrary large convex invariant region for the dynamics in affine space: A region that is simultaneously invariant for several polytopes, each considered separately, was also constructed. It was then shown that there cannot be an invariant region in affine space in the general case of a family of polytopes. Here we prove the existence of an arbitrary large convex invariant set for the dynamics in the vector space in the case when the sizes of the polytopes in the family are bounded and the set of all the outgoing normals to all the faces of all the polytopes is finite. It was also previously known that starting from zero as the initial error set, the error set could not be saturated in finitely many steps in some cases with several polytopes: Contradicting a former conjecture, we show that the same happens for some single quadrilaterals and for a single pentagon with an axial symmetry. The disproof of that conjecture is the new piece of information that leads us to expect, and then to verify, as we recount here, that the proof that the errors are bounded in the general case could be a small step beyond the proof of the same statement for the single polytope case.

  5. A grid spacing control technique for algebraic grid generation methods

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Kudlinski, R. A.; Everton, E. L.

    1982-01-01

    A technique which controls the spacing of grid points in algebraically defined coordinate transformations is described. The technique is based on the generation of control functions which map a uniformly distributed computational grid onto parametric variables defining the physical grid. The control functions are smoothed cubic splines. Sets of control points are input for each coordinate direction to outline the control functions. Smoothed cubic spline functions are then generated to approximate the input data. The technique works best in an interactive graphics environment where control inputs and grid displays are nearly instantaneous. The technique is illustrated with the two-boundary grid generation algorithm.
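
    A sketch of the idea using SciPy's smoothing spline as the control function, with invented control points that cluster physical grid spacing near one boundary; the original report defines its own control-function formulation:

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      # Control points outline the stretching function x = f(xi): physical
      # spacing is clustered near x = 0 (e.g., toward a boundary layer).
      xi_ctrl = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
      x_ctrl = np.array([0.0, 0.05, 0.2, 0.55, 1.0])

      # Smoothed cubic spline approximating (not interpolating) the inputs.
      f = UnivariateSpline(xi_ctrl, x_ctrl, k=3, s=1e-4)

      xi = np.linspace(0.0, 1.0, 21)     # uniform computational grid
      x = f(xi)                           # non-uniform physical grid
      print(np.round(np.diff(x), 3))      # spacing grows away from x = 0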

  6. Influence of Visual Prism Adaptation on Auditory Space Representation.

    PubMed

    Pochopien, Klaudia; Fahle, Manfred

    2017-01-01

    Prisms shifting the visual input sideways produce a mismatch between the visual and felt positions of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. We here show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes only indirectly to auditory space perception.

  7. Microwave limb sounder. [measuring trace gases in the upper atmosphere

    NASA Technical Reports Server (NTRS)

    Gustincic, J. J. (Inventor)

    1981-01-01

    Trace gases in the upper atmosphere can be measured by comparing the spectral noise content of limb soundings with that of cold space. An offset Cassegrain antenna system and tiltable input mirror alternately look out at the limb and up at cold space at an elevation angle of about 22°. The mirror can also be tilted to look at a black-body calibration target. Reflection from the mirror is directed into a radiometer whose head functions as a diplexer to combine the input radiation and a local oscillator (klystron) beam. The radiometer head comprises a Fabry-Perot resonator consisting of two Fabry-Perot cavities spaced a number of half wavelengths apart. Incoming radiation received on one side is reflected and rotated 90° in polarization by the resonator so that it will be reflected by an input grid into a mixer, while the klystron beam received on the other side is likewise reflected and rotated 90°, passing some energy to be reflected by the input grid into the mixer.

  8. FSH: fast spaced seed hashing exploiting adjacent hashes.

    PubMed

    Girotto, Samuele; Comin, Matteo; Pizzi, Cinzia

    2018-01-01

    Patterns with wildcards in specified positions, namely spaced seeds, are increasingly used instead of k-mers in many bioinformatics applications that require indexing, querying and rapid similarity search, as they can provide better sensitivity. Many of these applications require computing the hash of each position in the input sequences with respect to the given spaced seed, or to multiple spaced seeds. While the hashing of k-mers can be computed rapidly by exploiting the large overlap between consecutive k-mers, spaced-seed hashing is usually computed from scratch for each position in the input sequence, resulting in slower processing. The method proposed in this paper, fast spaced-seed hashing (FSH), exploits the similarity of the hash values of spaced seeds computed at adjacent positions in the input sequence. In our experiments we compute the hash for each position of metagenomics reads from several datasets, with respect to different spaced seeds. We also propose a generalized version of the algorithm for the simultaneous computation of multiple spaced-seed hashes. In the experiments, our algorithm computes the hash values of spaced seeds with a speedup, with respect to the traditional approach, of between 1.6× and 5.3×, depending on the structure of the spaced seed. Spaced-seed hashing is a routine task for several bioinformatics applications. FSH allows this task to be performed efficiently and raises the question of whether other hashing schemes can be exploited to further improve the speedup. This has the potential of major impact in the field, making spaced-seed applications not only accurate, but also faster and more efficient. The software FSH is freely available for academic use at: https://bitbucket.org/samu661/fsh/overview.
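
    For reference, the baseline per-position computation that FSH accelerates looks like the sketch below (2-bit base encoding, one hash per position); FSH's contribution, recycling the bits shared between adjacent positions, is not implemented here:

      # '1' positions in the seed are care positions; '0' are wildcards.
      ENC = {"A": 0, "C": 1, "G": 2, "T": 3}

      def spaced_hash(seq, pos, seed):
          # Each care position contributes 2 bits of the encoded base.
          h = 0
          for offset, care in enumerate(seed):
              if care == "1":
                  h = (h << 2) | ENC[seq[pos + offset]]
          return h

      read = "ACGTACGTTGCA"
      seed = "1101011"
      hashes = [spaced_hash(read, p, seed)
                for p in range(len(read) - len(seed) + 1)]
      print([format(h, "010b") for h in hashes])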

  9. Convergent evidence for hierarchical prediction networks from human electrocorticography and magnetoencephalography.

    PubMed

    Phillips, Holly N; Blenkmann, Alejandro; Hughes, Laura E; Kochen, Silvia; Bekinschtein, Tristan A; Cam-Can; Rowe, James B

    2016-09-01

    We propose that sensory inputs are processed in terms of optimised predictions and prediction error signals within hierarchical neurocognitive models. The combination of non-invasive brain imaging and generative network models has provided support for hierarchical frontotemporal interactions in oddball tasks, including recent identification of a temporal expectancy signal acting on prefrontal cortex. However, these studies are limited by the need to invert magnetoencephalographic or electroencephalographic sensor signals to localise activity from cortical 'nodes' in the network, or to infer neural responses from indirect measures such as the fMRI BOLD signal. To overcome this limitation, we examined frontotemporal interactions estimated from direct cortical recordings from two human participants with cortical electrode grids (electrocorticography - ECoG). Their frontotemporal network dynamics were compared to those identified by magnetoencephalography (MEG) in forty healthy adults. All participants performed the same auditory oddball task with standard tones interspersed with five deviant tone types. We normalised post-operative electrode locations to standardised anatomic space, to compare across modalities, and inverted the MEG to cortical sources using the estimated lead field from subject-specific head models. A mismatch negativity signal in frontal and temporal cortex was identified in all subjects. Generative models of the electrocorticographic and magnetoencephalographic data were separately compared using the free-energy estimate of the model evidence. Model comparison confirmed the same critical features of hierarchical frontotemporal networks in each patient as in the group-wise MEG analysis. These features included bilateral, feedforward and feedback frontotemporal modulated connectivity, in addition to an asymmetric expectancy driving input on left frontal cortex. The invasive ECoG provides an important step in construct validation of the use of neural generative models of MEG, which in turn enables generalisation to larger populations. Together, they give convergent evidence for the hierarchical interactions in frontotemporal networks for expectation and processing of sensory inputs. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  10. Cardiac arrhythmia beat classification using DOST and PSO tuned SVM.

    PubMed

    Raj, Sandeep; Ray, Kailash Chandra; Shankar, Om

    2016-11-01

    The increase in the number of deaths due to cardiovascular diseases (CVDs) has drawn significant attention to the study of electrocardiogram (ECG) signals. ECG signals are studied by experienced cardiologists for accurate and proper diagnosis, but this becomes difficult and time-consuming for long-term recordings. Various signal processing techniques have been studied to analyze the ECG signal, but they bear limitations due to the non-stationary behavior of ECG signals. Hence, this study aims to improve the classification accuracy rate and provide an automated diagnostic solution for the detection of cardiac arrhythmias. The proposed methodology consists of four stages: filtering, R-peak detection, feature extraction and classification. In this study, a wavelet-based approach is used to filter the raw ECG signal, whereas the Pan-Tompkins algorithm is used for detecting the R-peak inside the ECG signal. In the feature extraction stage, a discrete orthogonal Stockwell transform (DOST) approach is presented for an efficient time-frequency representation (i.e., morphological descriptors) of a time-domain signal; it retains the absolute phase information needed to distinguish the various non-stationary ECG signals. These morphological descriptors are further reduced to a lower-dimensional space using principal component analysis and combined with the dynamic features (i.e., those based on the RR-interval of the ECG signals) of the input signal. This combination of two different kinds of descriptors represents the feature set of an input signal, which is classified into subsequent categories by employing a PSO-tuned support vector machine (SVM). The proposed methodology is validated on the baseline MIT-BIH arrhythmia database and evaluated under two assessment schemes, yielding an improved overall accuracy, relative to the state of the art, of 99.18% for sixteen classes in the category-based scheme and 89.10% for five classes (mapped according to the AAMI standard) in the patient-based assessment scheme. The reported results are further compared to existing methodologies in the literature. The proposed feature representation of cardiac signals, together with the PSO-based optimization technique for the SVM classifier, yielded improved classification accuracy in both assessment schemes on the benchmark MIT-BIH arrhythmia database and hence can be utilized for automated computer-aided diagnosis of cardiac arrhythmia beats. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
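
    A structural sketch of the feature-combination and classification stages, with random arrays standing in for the DOST morphological descriptors and RR-interval features, and a grid search standing in for the paper's PSO tuning of (C, gamma):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import GridSearchCV
      from sklearn.svm import SVC

      rng = np.random.default_rng(6)
      # Stand-ins: 'morph' for per-beat DOST morphological descriptors,
      # 'rr' for the RR-interval based dynamic features.
      morph = rng.normal(size=(300, 60))
      rr = rng.normal(size=(300, 4))
      y = rng.integers(0, 2, 300)

      # Reduce the morphological descriptors, then append dynamic features.
      morph_reduced = PCA(n_components=12).fit_transform(morph)
      X = np.hstack([morph_reduced, rr])

      # Grid search stands in for PSO tuning of the RBF-SVM hyperparameters.
      grid = GridSearchCV(SVC(kernel="rbf"),
                          {"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
                          cv=5)
      grid.fit(X, y)
      print("best (C, gamma):", grid.best_params_)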

  11. AQUATOX Features and Tools

    EPA Pesticide Factsheets

    Numerous features have been included to facilitate the modeling process, from model setup and data input, presentation and analysis of results, to easy export of results to spreadsheet programs for additional analysis.

  12. Tele-Autonomous control involving contact. Final Report Thesis; [object localization

    NASA Technical Reports Server (NTRS)

    Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.

    1990-01-01

    Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features. Feature points (point-to-point matching) and feature unit direction vectors (vector-to-vector matching) can also be used as inputs, and there is no upper limit on the number of features input. The algorithm allows the use of redundant features to find a better solution. It uses dual-number quaternions to represent the position and orientation of an object and uses the least-squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties that arise when the operator controls a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used, together with other techniques such as predictor displays and time desynchronization, to help overcome these difficulties is then discussed.

  13. Software-hardware complex for the input of telemetric information obtained from rocket studies of the radiation of the earth's upper atmosphere

    NASA Astrophysics Data System (ADS)

    Bazdrov, I. I.; Bortkevich, V. S.; Khokhlov, V. N.

    2004-10-01

    This paper describes a software-hardware complex for the input into a personal computer of telemetric information obtained by means of telemetry stations TRAL KR28, RTS-8, and TRAL K2N. Structural and functional diagrams are given of the input device and the hardware complex. Results that characterize the features of the input process and selective data of optical measurements of atmospheric radiation are given.

  14. Knowledge-based fragment binding prediction.

    PubMed

    Tang, Grace W; Altman, Russ B

    2014-04-01

    Target-based drug discovery must assess many drug-like compounds for potential activity. Focusing on low-molecular-weight compounds (fragments) can dramatically reduce the chemical search space. However, approaches for determining protein-fragment interactions have limitations. Experimental assays are time-consuming, expensive, and not always applicable. At the same time, computational approaches using physics-based methods have limited accuracy. With increasing high-resolution structural data for protein-ligand complexes, there is now an opportunity for data-driven approaches to fragment binding prediction. We present FragFEATURE, a machine learning approach to predict small molecule fragments preferred by a target protein structure. We first create a knowledge base of protein structural environments annotated with the small molecule substructures they bind. These substructures have low-molecular weight and serve as a proxy for fragments. FragFEATURE then compares the structural environments within a target protein to those in the knowledge base to retrieve statistically preferred fragments. It merges information across diverse ligands with shared substructures to generate predictions. Our results demonstrate FragFEATURE's ability to rediscover fragments corresponding to the ligand bound with 74% precision and 82% recall on average. For many protein targets, it identifies high scoring fragments that are substructures of known inhibitors. FragFEATURE thus predicts fragments that can serve as inputs to fragment-based drug design or serve as refinement criteria for creating target-specific compound libraries for experimental or computational screening.

  15. Knowledge-based Fragment Binding Prediction

    PubMed Central

    Tang, Grace W.; Altman, Russ B.

    2014-01-01

    Target-based drug discovery must assess many drug-like compounds for potential activity. Focusing on low-molecular-weight compounds (fragments) can dramatically reduce the chemical search space. However, approaches for determining protein-fragment interactions have limitations. Experimental assays are time-consuming, expensive, and not always applicable. At the same time, computational approaches using physics-based methods have limited accuracy. With increasing high-resolution structural data for protein-ligand complexes, there is now an opportunity for data-driven approaches to fragment binding prediction. We present FragFEATURE, a machine learning approach to predict small molecule fragments preferred by a target protein structure. We first create a knowledge base of protein structural environments annotated with the small molecule substructures they bind. These substructures have low-molecular weight and serve as a proxy for fragments. FragFEATURE then compares the structural environments within a target protein to those in the knowledge base to retrieve statistically preferred fragments. It merges information across diverse ligands with shared substructures to generate predictions. Our results demonstrate FragFEATURE's ability to rediscover fragments corresponding to the ligand bound with 74% precision and 82% recall on average. For many protein targets, it identifies high scoring fragments that are substructures of known inhibitors. FragFEATURE thus predicts fragments that can serve as inputs to fragment-based drug design or serve as refinement criteria for creating target-specific compound libraries for experimental or computational screening. PMID:24762971

  16. Epileptic seizure detection in EEG signal using machine learning techniques.

    PubMed

    Jaiswal, Abeg Kumar; Banka, Haider

    2018-03-01

    Epilepsy is a well-known nervous system disorder characterized by seizures. Electroencephalograms (EEGs), which capture brain neural activity, can detect epilepsy. Traditional methods for analyzing an EEG signal for epileptic seizure detection are time-consuming. Recently, several automated seizure detection frameworks using machine learning techniques have been proposed to replace these traditional methods. The two basic steps involved in machine learning are feature extraction and classification. Feature extraction reduces the input pattern space by keeping informative features, and the classifier assigns the appropriate class label. In this paper, we propose two effective approaches involving subpattern based PCA (SpPCA) and cross-subpattern correlation-based PCA (SubXPCA) with Support Vector Machine (SVM) for automated seizure detection in EEG signals. Feature extraction was performed using SpPCA and SubXPCA. Both techniques explore the subpattern correlation of EEG signals, which helps in the decision-making process. An SVM is used for classification of seizure and non-seizure EEG signals. The SVM was trained with a radial basis function kernel. All the experiments have been carried out on the benchmark epilepsy EEG dataset. The entire dataset consists of 500 EEG signals recorded under different scenarios. Seven different experimental cases for classification have been conducted. The classification accuracy was evaluated using tenfold cross validation. The classification results of the proposed approaches have been compared with the results of some existing techniques proposed in the literature to establish the claim.
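
    As a concrete illustration of the classification stage, the sketch below trains an RBF-kernel SVM and scores it with tenfold cross validation using scikit-learn. The feature matrix is a random stand-in for the SpPCA/SubXPCA features, so this mirrors the workflow, not the published results.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 30))        # placeholder for extracted EEG features
      y = rng.integers(0, 2, size=500)      # placeholder labels: 0 non-seizure, 1 seizure

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      scores = cross_val_score(clf, X, y, cv=10)          # tenfold cross validation
      print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")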

  17. Orbiter multiplexer-demultiplexer (MDM)/Space Lab Bus Interface Unit (SL/BIU) serial data interface evaluation, volume 2

    NASA Technical Reports Server (NTRS)

    Tobey, G. L.

    1978-01-01

    Tests were performed to evaluate the operating characteristics of the interface between the Space Lab Bus Interface Unit (SL/BIU) and the Orbiter Multiplexer-Demultiplexer (MDM) serial data input-output (SIO) module. This volume contains the test equipment preparation procedures and a detailed description of the Nova/Input Output Processor Simulator (IOPS) software used during the data transfer tests to determine word error rates (WER).

  18. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
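
    The Monte Carlo component mentioned above can be sketched in a few lines: sample each object's position from a Gaussian defined by its state estimate and covariance, then count how often the sampled miss distance falls below a combined hard-body radius. All numbers here are illustrative placeholders, not ESC values.

      import numpy as np

      rng = np.random.default_rng(1)
      mu1 = np.array([0.0, 0.0, 0.0])           # object 1 mean position (km)
      mu2 = np.array([0.05, 0.02, 0.0])         # object 2 mean position (km)
      P1 = np.diag([0.02, 0.01, 0.005]) ** 2    # position covariances (km^2)
      P2 = np.diag([0.03, 0.02, 0.01]) ** 2
      hard_body_radius = 0.02                   # combined hard-body radius (km)

      n = 200_000
      x1 = rng.multivariate_normal(mu1, P1, size=n)
      x2 = rng.multivariate_normal(mu2, P2, size=n)
      miss = np.linalg.norm(x1 - x2, axis=1)    # sampled miss distances
      print(f"Pc ~= {np.mean(miss < hard_body_radius):.2e}")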

  19. Computer vision-based method for classification of wheat grains using artificial neural network.

    PubMed

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) depending on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. Twelve main visual features, comprising four dimension, three color and five texture features, are acquired using image-processing techniques (IPTs). A total of 21 visual features are derived from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are considered as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. The seven input parameters that are most effective on the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN model are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10⁻⁶ by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
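
    A minimal sketch of this kind of MLP pipeline, using scikit-learn on synthetic stand-ins for the 12 visual features and the 180/20 train/test split described above (the published model and data are not reproduced here, so the printed accuracy is meaningless by design):

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 12))        # 4 dimension + 3 color + 5 texture features
      y = rng.integers(0, 2, size=200)      # 0 = bread wheat, 1 = durum wheat

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=180, random_state=0)
      mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
      mlp.fit(X_tr, y_tr)
      print("test accuracy:", mlp.score(X_te, y_te))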

  20. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with that of traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed here to determine the effectiveness of the proposed TRPDA algorithm.

  1. Identification and Description of Alternative Means of Accomplishing IMS Operational Features.

    ERIC Educational Resources Information Center

    Dave, Ashok

    The operational features of feasible alternative configurations for a computer-based instructional management system are identified. Potential alternative means and components of accomplishing these features are briefly described. Included are aspects of data collection, data input, data transmission, data reception, scanning and processing,…

  2. Quantity and Quality of Caregivers' Linguistic Input to 18-Month and 3-Year-Old Children Who Are Hard of Hearing.

    PubMed

    Ambrose, Sophie E; Walker, Elizabeth A; Unflat-Berry, Lauren M; Oleson, Jacob J; Moeller, Mary Pat

    2015-01-01

    The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-min semistructured, conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four of the quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit compared with the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language. Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences when compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language learning environments. This is particularly important given the relationships that were identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a style that is conversational eliciting as opposed to directive.

  3. The shape of facial features and the spacing among them generate similar inversion effects: a reply to Rossion (2008).

    PubMed

    Yovel, Galit

    2009-11-01

    It is often argued that picture-plane face inversion impairs discrimination of the spacing among face features to a greater extent than the identity of the facial features. However, several recent studies have reported similar inversion effects for both types of face manipulations. In a recent review, Rossion (2008) claimed that similar inversion effects for spacing and features are due to methodological and conceptual shortcomings and that data still support the idea that inversion impairs the discrimination of features less than that of the spacing among them. Here I will claim that when facial features differ primarily in shape, the effect of inversion on features is not smaller than the one on spacing. It is when color/contrast information is added to facial features that the inversion effect on features decreases. This obvious observation accounts for the discrepancy in the literature and suggests that the large inversion effect that was found for features that differ in shape is not a methodological artifact. These findings together with other data that are discussed are consistent with the idea that the shape of facial features and the spacing among them are integrated rather than dissociated in the holistic representation of faces.

  4. Transport of a Power Plant Tracer Plume over Grand Canyon National Park.

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Bornstein, Robert; Lindsey, Charles G.

    1999-08-01

    Meteorological and air-quality data, as well as surface tracer concentration values, were collected during 1990 to assess the impacts of Navajo Generating Station (NGS) emissions on Grand Canyon National Park (GCNP) air quality. These data have been used in the present investigation to distinguish between direct and indirect transport routes taken by the NGS plume to produce measured high-tracer-concentration events at GCNP. The meteorological data were used as input into a three-dimensional mass-consistent wind model, whose output was used as input into a horizontal forward-trajectory model. Calculated polluted air locations were compared with observed surface-tracer concentration values. Results show that complex-terrain features affect local wind-flow patterns during winter in the Grand Canyon area. Local channeling, decoupled canyon winds, and slope and valley flows dominate in the region when synoptic systems are weak. Direct NGS plume transport to GCNP occurs with northeasterly plume-height winds, while indirect transport to the park is caused by wind direction shifts associated with passing synoptic systems. Calculated polluted airmass positions along the modeled streak lines match measured surface-tracer observations in both space and time.

  5. Connectivity strategies for higher-order neural networks applied to pattern recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1990-01-01

    Different strategies for non-fully connected HONNs (higher-order neural networks) are discussed, showing that by using such strategies an input field of 128 x 128 pixels can be attained while still achieving in-plane rotation and translation-invariant recognition. These techniques allow HONNs to be used with the larger input scenes required for practical pattern-recognition applications. The number of interconnections that must be stored has been reduced by a factor of approximately 200,000 in a T/C case and about 2000 in a Space Shuttle/F-18 case by using regional connectivity. Third-order networks have been simulated using several connection strategies. The method found to work best is regional connectivity. The main advantages of this strategy are the following: (1) it considers features of various scales within the image and thus gets a better sample of what the image looks like; (2) it is invariant to shape-preserving geometric transformations, such as translation and rotation; (3) the connections are predetermined so that no extra computations are necessary during run time; and (4) it does not require any extra storage for recording which connections were formed.
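
    A back-of-envelope sketch of why connectivity strategies matter: a fully connected third-order network stores one weight per pixel triple, which is astronomical for a 128 x 128 input, while restricting triples to a coarser sampling (a stand-in for the paper's regional scheme, not its exact rule) keeps storage tractable.

      from math import comb

      full = comb(128 * 128, 3)                      # one weight per pixel triple
      regional = comb((128 // 8) * (128 // 8), 3)    # hypothetical coarse subsampling
      print(f"fully connected third-order weights: {full:.2e}")
      print(f"subsampled third-order weights:      {regional:.2e}")
      print(f"reduction factor: {full / regional:.0f}x")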

  6. Investigation into the influence of laser energy input on selective laser melted thin-walled parts by response surface method

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Zhang, Jian; Pang, Zhicong; Wu, Weihui

    2018-04-01

    Selective laser melting (SLM) provides a feasible way to manufacture complex thin-walled parts directly; however, the energy input during the SLM process, which derives from the laser power, scanning speed, layer thickness, scanning space and other parameters, has a great influence on the thin wall's quality. The aim of this work is to relate the thin wall's parameters (responses), namely track width, surface roughness and hardness, to the process parameters considered in this research (laser power, scanning speed and layer thickness) and to find the optimal manufacturing conditions. Design of experiments (DoE) was used by implementing a central composite design to achieve better manufacturing quality. Mathematical models derived from the statistical analysis were used to establish the relationships between the process parameters and the responses. Also, the effects of the process parameters on each response were determined. Then, a numerical optimization was performed to find the optimal process set at which the quality features are at their desired values. Based on this study, the relationship between the process parameters and the SLMed thin-walled structure was revealed, and the corresponding optimal process parameters can thus be used to manufacture thin-walled parts with high quality.
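
    The response-surface step amounts to fitting a low-order polynomial that maps process parameters to a response. A minimal sketch with scikit-learn on synthetic placeholder data (the parameter ranges and the toy response are assumptions, not the paper's measurements):

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(3)
      # 20 runs over laser power (W), scan speed (mm/s), layer thickness (um)
      X = rng.uniform([100, 200, 20], [200, 1000, 50], size=(20, 3))
      y = 0.5 * X[:, 0] / X[:, 1] + 0.001 * X[:, 2] + rng.normal(0, 0.01, 20)  # toy response

      model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      model.fit(X, y)                       # quadratic response surface
      print("R^2 on training data:", model.score(X, y))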

  7. Final report : PATTON Alliance gazetteer evaluation project.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleakly, Denise Rae

    2007-08-01

    In 2005 the National Ground Intelligence Center (NGIC) proposed that the PATTON Alliance provide assistance in evaluating and obtaining the Integrated Gazetteer Database (IGDB), developed for the Naval Space Warfare Command Research group (SPAWAR) under Advance Research and Development Activity (ARDA) funds by MITRE Inc., fielded to the text-based search tool GeoLocator, currently in use by NGIC. We met with the developers of GeoLocator and identified their requirements for a better gazetteer. We then validated those requirements by reviewing the technical literature, meeting with other members of the intelligence community (IC), and talking with both the United States Geological Survey (USGS) and the National Geospatial Intelligence Agency (NGA), the authoritative sources for official geographic name information. We thus identified 12 high-level requirements from users and the broader intelligence community. The IGDB satisfies many of these requirements. We identified gaps and proposed ways of closing these gaps. Three important needs have not been addressed but are critical future needs for the broader intelligence community. These needs include standardization of gazetteer data, a web feature service for gazetteer information that is maintained by NGA and USGS but accessible to users, and a common forum that brings together IC stakeholders and federal agency representatives to provide input to these activities over the next several years. Establishing a robust gazetteer web feature service that is available to all IC users may go a long way toward resolving the gazetteer needs within the IC. Without a common forum to provide input and feedback, community adoption may take significantly longer than anticipated, with resulting risks to the war fighter.

  8. Connecting the Pioneers, Current Leaders and the Nature and History of Space Weather with K-12 Classrooms and the General Public

    NASA Astrophysics Data System (ADS)

    Ng, C.; Thompson, B. J.; Cline, T.; Lewis, E.; Barbier, B.; Odenwald, S.; Spadaccini, J.; James, N.; Stephenson, B.; Davis, H. B.; Major, E. R.; Space Weather Living History

    2011-12-01

    The Space Weather Living History program will explore and share the breakthrough new science and captivating stories of space environments and space weather by interviewing space physics pioneers and leaders active from the International Geophysical Year (IGY) to the present. Our multi-mission project will capture, document and preserve the living history of space weather utilizing original historical materials (primary sources). The resulting products will allow us to tell the stories of those involved in interactive new media to address important STEM needs, inspire the next generation of explorers, and feature women as role models. The project is divided into several stages, and the first stage, which began in mid-2011, focuses on resource gathering. The goal is to capture not just anecdotes, but the careful analogies and insights of researchers and historians associated with the programs and events. The Space Weather Living History Program has a Scientific Advisory Board, and with the Board's input our team will determine the chronology, key researchers, events, missions and discoveries for interviews. Education activities will be designed to utilize autobiographies, newspapers, interviews, research reports, journal articles, conference proceedings, dissertations, websites, diaries, letters, and artworks. With the help of a multimedia firm, we will use some of these materials to develop an interactive timeline on the web, and as a downloadable application in a kiosk and on tablet computers. In summary, our project augments the existing historical records with education technologies, connecting the pioneers, current leaders and the nature and history of space weather with K-12 classrooms and the general public, and covering all areas of study in Heliophysics. The project is supported by NASA award NNX11AJ61G.

  9. Shift-, rotation-, and scale-invariant shape recognition system using an optical Hough transform

    NASA Astrophysics Data System (ADS)

    Schmid, Volker R.; Bader, Gerhard; Lueder, Ernst H.

    1998-02-01

    We present a hybrid shape recognition system with an optical Hough transform processor. The features of the Hough space offer a separate cancellation of distortions caused by translations and rotations. Scale invariance is also provided by suitable normalization. The proposed system extends the capabilities of Hough transform based detection from only straight lines to areas bounded by edges. A very compact optical design is achieved by a microlens array processor that accepts incoherent light as direct optical input and realizes the computationally expensive connections massively in parallel. Our newly developed algorithm extracts rotation- and translation-invariant normalized patterns of bright spots on a 2D grid. A neural network classifier maps the 2D features via a nonlinear hidden layer onto the classification output vector. We propose initialization of the connection weights according to regions of activity specifically assigned to each neuron in the hidden layer using a competitive network. The presented system is designed for industrial inspection applications. Presently we have demonstrated detection of six different machined parts in real time. Our method yields very promising detection results of more than 96% correctly classified parts.
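
    The optical processor parallelizes the same accumulation a digital Hough transform performs. The sketch below shows that accumulation for straight lines: each edge point votes for every (rho, theta) line through it, and collinear points pile up in one accumulator cell. It is a plain software illustration, not a model of the optical system.

      import numpy as np

      def hough_lines(edge_points, img_diag, n_theta=180, n_rho=200):
          thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
          rhos = np.linspace(-img_diag, img_diag, n_rho)
          acc = np.zeros((n_rho, n_theta), dtype=int)
          for x, y in edge_points:
              rho = x * np.cos(thetas) + y * np.sin(thetas)   # one rho per theta
              idx = np.digitize(rho, rhos) - 1
              acc[idx, np.arange(n_theta)] += 1               # cast one vote per theta
          return acc, rhos, thetas

      # Points on the line y = x vote coherently at theta = 135 deg (rho ~ 0).
      pts = [(i, i) for i in range(32)]
      acc, rhos, thetas = hough_lines(pts, img_diag=64)
      r, t = np.unravel_index(acc.argmax(), acc.shape)
      print(f"peak: rho={rhos[r]:.1f}, theta={np.degrees(thetas[t]):.0f} deg, votes={acc[r, t]}")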

  10. Integrated Budget Office Toolbox

    NASA Technical Reports Server (NTRS)

    Rushing, Douglas A.; Blakeley, Chris; Chapman, Gerry; Robertson, Bill; Horton, Allison; Besser, Thomas; McCarthy, Debbie

    2010-01-01

    The Integrated Budget Office Toolbox (IBOT) combines budgeting, resource allocation, organizational funding, and reporting features in an automated, integrated tool that provides data from a single source for Johnson Space Center (JSC) personnel. Using a common interface, concurrent users can utilize the data without compromising its integrity. IBOT tracks planning changes and updates throughout the year using both phasing and POP-related (program-operating-plan-related) budget information for the current year, and up to six years out. By separating lump-sum funds received from HQ (Headquarters) into labor, travel, procurement, Center G&A (general & administrative), and service-pool categories, IBOT creates a script that significantly reduces manual input time. IBOT also manages the movement of travel and procurement funds down to the organizational level and, using its integrated funds management feature, helps better track funding at lower levels. Third-party software is used to create integrated reports in IBOT that can be generated for plans, actuals, funds received, and other combinations of data that are currently maintained in the centralized format. Based on Microsoft SQL, IBOT incorporates generic budget processes, is transportable, and is economical to deploy and support.

  11. Incremental online learning in high dimensions.

    PubMed

    Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan

    2005-12-01

    Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
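
    LWPR itself is considerably more elaborate, but its core ingredient can be sketched compactly: locally weighted linear regression, where each query point gets its own linear fit with training samples weighted by a Gaussian kernel around the query (one input dimension here for brevity; function and parameter names are illustrative).

      import numpy as np

      def lwr_predict(queries, X, y, bandwidth=0.3):
          Xb = np.c_[np.ones(len(X)), X]                  # design matrix with bias column
          preds = []
          for xq in queries:
              w = np.sqrt(np.exp(-0.5 * ((X - xq) / bandwidth) ** 2))
              beta, *_ = np.linalg.lstsq(Xb * w[:, None], y * w, rcond=None)
              preds.append(beta[0] + beta[1] * xq)        # evaluate the local line at xq
          return np.array(preds)

      X = np.linspace(0, 2 * np.pi, 100)
      y = np.sin(X) + np.random.default_rng(0).normal(0, 0.1, 100)
      print(lwr_predict([np.pi / 2, np.pi], X, y))        # approximately [1.0, 0.0]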

  12. Human-level control through deep reinforcement learning.

    PubMed

    Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A; Veness, Joel; Bellemare, Marc G; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis

    2015-02-26

    The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
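
    At the heart of the deep Q-network is the temporal-difference target it is trained on. The sketch below shows that update in its tabular form (a DQN replaces the table with a deep network fit to the same target over replayed transitions; sizes and parameters here are arbitrary):

      import numpy as np

      n_states, n_actions = 16, 4
      Q = np.zeros((n_states, n_actions))
      alpha, gamma = 0.1, 0.99                  # learning rate, discount factor

      def q_update(s, a, r, s_next, done):
          target = r if done else r + gamma * Q[s_next].max()
          Q[s, a] += alpha * (target - Q[s, a])   # move Q(s, a) toward the TD target

      q_update(s=0, a=1, r=1.0, s_next=5, done=False)
      print(Q[0, 1])                            # 0.1 after one update from zeros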

  13. Human-level control through deep reinforcement learning

    NASA Astrophysics Data System (ADS)

    Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis

    2015-02-01

    The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

  14. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  15. The environment of inpatient healthcare delivery and its influence on the outcome of care.

    PubMed

    O'Connor, Margaret; O'Brien, Anthony; Bloomer, Melissa; Morphett, Julia; Peters, Louise; Hall, Helen; Parry, Arlene; Recoche, Katrina; Lee, Susan; Munro, Ian

    2012-01-01

    This paper addresses issues arising in the literature regarding the environmental design of inpatient healthcare settings and their impact on care. Environmental design in healthcare settings is an important feature of the holistic delivery of healthcare. The environmental influence on the delivery of care is manifested by such things as lighting, proximity to bedside, technology, family involvement, and space. The need to respond rapidly in places such as emergency and intensive care can override space needs for family support. In some settings with aging buildings, the available space is no longer appropriate to the needs; for example, the need for privacy in emergency departments. Many aspects of care have changed over the last three decades, and the environment of care appears not to have been adapted to contemporary healthcare requirements nor to have involved consumers in ascertaining environmental requirements. The issues found in the literature are addressed under five themes: the design of physical space, family needs, privacy considerations, the impact of technology, and patient safety. There is a need for greater input into the design of healthcare spaces from those who use them, to incorporate dignified and expedient care delivery in the care of the person and to meet the needs of family. Preferred Citation: O'Connor, M., O'Brien, A., Bloomer, M., Morphett, J., Peters, L., Hall, H., … Munro, I. (2012). The environment of inpatient healthcare delivery and its influence on the outcome of care. Health Environments Research & Design Journal, 6(1), 105-117.

  16. Gravity Waves and Tidal Measurement Capabilities from a Space-borne Lidar across the Mesopause.

    NASA Astrophysics Data System (ADS)

    Dawkins, E. C. M.; Gardner, C. S.; Kaifler, B.; Marsh, D. R.; Janches, D.

    2017-12-01

    A new proposed NASA mission, ACaDAMe (Atmospheric Coupling and Dynamics Across the Mesopause region), consists of a space-borne sodium lidar mounted upon the International Space Station. Combining the advantages of a lidar with the near-global coverage provided by the ISS (orbital inclination: 51.6°, orbital period: 92.7 min), the ACaDAMe mission has enormous potential to quantify the waves that provide the major momentum and energy forcing of the Ionosphere-Thermosphere-Mesosphere system from below. Specifically, this mission seeks to quantify the dominant wave momentum and energy inputs across the mesopause, and to identify the near-global distribution of gravity waves and tides that impact the Thermosphere/Ionosphere and are the terrestrial drivers of Space Weather. Leveraging existing instrument heritage and expertise, this nadir-pointing narrowband lidar would be tuned to two frequencies (at the peak of the D2a line, and at the minimum between the D2a and D2b peaks), with the capability to retrieve vertically resolved [Na] and temperature, T, for both nighttime and daytime conditions. Here we outline the proposed mission, present an error characterization for [Na] and T, and describe the capabilities for estimating gravity waves and tidal features, which will play a crucial role in advancing our understanding of small-scale dynamical processes and coupling across this important atmospheric region.

  17. Comparative evaluation of support vector machine classification for computer aided detection of breast masses in mammography

    NASA Astrophysics Data System (ADS)

    Lesniak, J. M.; Hupse, R.; Blanc, R.; Karssemeijer, N.; Székely, G.

    2012-08-01

    False positive (FP) marks represent an obstacle for effective use of computer-aided detection (CADe) of breast masses in mammography. Typically, the problem can be approached either by developing more discriminative features or by employing different classifier designs. In this paper, the usage of support vector machine (SVM) classification for FP reduction in CADe is investigated, presenting a systematic quantitative evaluation against neural networks, k-nearest neighbor classification, linear discriminant analysis and random forests. A large database of 2516 film mammography examinations and 73 input features was used to train the classifiers and evaluate their performance on correctly diagnosed exams as well as false negatives. Further, classifier robustness was investigated using varying training data and feature sets as input. The evaluation was based on the mean exam sensitivity in 0.05-1 FPs on normals on the free-response receiver operating characteristic (FROC) curve, incorporated into a tenfold cross validation framework. It was found that SVM classification using a Gaussian kernel offered significantly increased detection performance (P = 0.0002) compared to the reference methods. Varying training data and input features, SVMs showed improved exploitation of large feature sets. It is concluded that with the SVM-based CADe a significant reduction of FPs is possible, outperforming other state-of-the-art approaches for breast mass CADe.

  18. Crater Identification Algorithm for the Lost in Low Lunar Orbit Scenario

    NASA Technical Reports Server (NTRS)

    Hanak, Chad; Crain, Timothy

    2010-01-01

    Recent emphasis by NASA on returning astronauts to the Moon has placed attention on the subject of lunar surface feature tracking. Although many algorithms have been proposed for lunar surface feature tracking navigation, much less attention has been paid to the issue of navigational state initialization from lunar craters in a lost in low lunar orbit (LLO) scenario. That is, a scenario in which lunar surface feature tracking must begin, but current navigation state knowledge is either unavailable or too poor to initiate a tracking algorithm. The situation is analogous to the lost in space scenario for star trackers. A new crater identification algorithm is developed herein that allows for navigation state initialization from as few as one image of the lunar surface with no a priori state knowledge. The algorithm takes as inputs the locations and diameters of craters that have been detected in an image, and uses the information to match the craters to entries in the USGS lunar crater catalog via non-dimensional crater triangle parameters. Due to the large number of uncataloged craters that exist on the lunar surface, a probability-based check was developed to reject false identifications. The algorithm was tested on craters detected in four revolutions of Apollo 16 LLO images, and shown to perform well.
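
    The non-dimensional crater-triangle idea can be sketched directly: the interior angles of a crater triple and its diameter ratios are unchanged by the unknown translation, rotation, and scale of the image, so they can index the catalog. The descriptor below is an illustrative construction, not the paper's exact parameterization.

      import numpy as np

      def triangle_descriptor(p, d):
          """p: (3, 2) crater centers in image coordinates; d: (3,) crater diameters."""
          sides = np.array([np.linalg.norm(p[1] - p[2]),
                            np.linalg.norm(p[2] - p[0]),
                            np.linalg.norm(p[0] - p[1])])
          a, b, c = sides
          angles = np.arccos([
              (b**2 + c**2 - a**2) / (2 * b * c),   # law of cosines, angle at p[0]
              (a**2 + c**2 - b**2) / (2 * a * c),
              (a**2 + b**2 - c**2) / (2 * a * b)])
          ratios = np.sort(d) / d.max()             # scale-free diameter ratios
          return np.sort(angles), ratios

      ang, rat = triangle_descriptor(np.array([[0., 0.], [4., 0.], [0., 3.]]),
                                     np.array([1.0, 2.0, 1.5]))
      print(np.degrees(ang), rat)   # angles sum to 180 regardless of pose or scale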

  19. An activity recognition model using inertial sensor nodes in a wireless sensor network for frozen shoulder rehabilitation exercises.

    PubMed

    Lin, Hsueh-Chun; Chiang, Shu-Yin; Lee, Kai; Kan, Yao-Chiang

    2015-01-19

    This paper proposes a model for recognizing motions performed during rehabilitation exercises for frozen shoulder conditions. The model consists of wearable wireless sensor network (WSN) inertial sensor nodes, which were developed for this study, and enables the ubiquitous measurement of bodily motions. The model employs the back propagation neural network (BPNN) algorithm to compute motion data that are formed in the WSN packets; herein, six types of rehabilitation exercises were recognized. The packets sent by each node are converted into six components of acceleration and angular velocity according to three axes. Motor features such as basic acceleration, angular velocity, and derivative tilt angle were input into the training procedure of the BPNN algorithm. In measurements of thirteen volunteers, the accelerations and included angles of nodes were adopted from the possible features to demonstrate the procedure. Five exercises involving simple swinging and stretching movements were recognized with an accuracy of 85%-95%; however, the accuracy with which exercises entailing spiral rotations were recognized was approximately 60%. Thus, a characteristic space and an enveloped spectrum improving the derivative features were suggested to enable the identification of customized parameters. Finally, a real-time monitoring interface was developed for practical implementation. The proposed model can be applied in ubiquitous healthcare self-management to recognize rehabilitation exercises.

  20. Properties of heuristic search strategies

    NASA Technical Reports Server (NTRS)

    Vanderbrug, G. J.

    1973-01-01

    A directed graph is used to model the search space of a state space representation with single input operators, an AND/OR graph is used for problem reduction representations, and a theorem proving graph is used for state space representations with multiple input operators. These three graph models and heuristic strategies for searching them are surveyed. The completeness, admissibility, and optimality properties of search strategies which use the evaluation function f = (1 - omega)g + omega h are presented and interpreted using a representation of the search process in the plane. The use of multiple output operators to imply dependent successors, and thus obtain a formalism which includes all three types of representations, is discussed.
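
    The evaluation function f = (1 - omega)g + omega h interpolates between uniform-cost search (omega = 0), A*-equivalent ordering (omega = 0.5, since halving g + h preserves its ordering), and pure greedy best-first search (omega = 1). A minimal best-first sketch over a toy grid, with illustrative names throughout:

      import heapq

      def weighted_search(start, goal, neighbors, h, w=0.5):
          frontier = [((1 - w) * 0 + w * h(start), 0, start, [start])]
          best_g = {}
          while frontier:
              f, g, node, path = heapq.heappop(frontier)
              if node == goal:
                  return path
              if best_g.get(node, float("inf")) <= g:
                  continue                            # already expanded more cheaply
              best_g[node] = g
              for nxt, cost in neighbors(node):
                  g2 = g + cost
                  heapq.heappush(frontier, ((1 - w) * g2 + w * h(nxt), g2, nxt, path + [nxt]))
          return None

      # Toy grid: move right/up from (0, 0) to (3, 3); Manhattan-distance heuristic.
      nbrs = lambda p: [((p[0] + 1, p[1]), 1), ((p[0], p[1] + 1), 1)]
      h = lambda p: abs(3 - p[0]) + abs(3 - p[1])
      print(weighted_search((0, 0), (3, 3), nbrs, h, w=0.5))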

  1. Modal survey of the space shuttle solid rocket motor using multiple input methods

    NASA Technical Reports Server (NTRS)

    Brillhart, Ralph; Hunt, David L.; Jensen, Brent M.; Mason, Donald R.

    1987-01-01

    The ability to accurately characterize propellant in a finite element model is a concern of engineers tasked with studying the dynamic response of the Space Shuttle Solid Rocket Motor (SRM). The uncertainties arising from propellant characterization through specimen testing led to the decision to perform a modal survey and model correlation of a single segment of the Shuttle SRM. Multiple input methods were used to excite and define case/propellant modes of both an inert segment and, later, a live propellant segment. These tests were successful at defining highly damped, flexible modes, several pairs of which occurred with frequency spacing of less than two percent.

  2. Automated Feature Identification and Classification Using Automated Feature Weighted Self Organizing Map (FWSOM)

    NASA Astrophysics Data System (ADS)

    Starkey, Andrew; Usman Ahmad, Aliyu; Hamdoun, Hassan

    2017-10-01

    This paper investigates the application of a novel classification method, the Feature Weighted Self Organizing Map (FWSOM), which analyses the topology of a converged standard Self Organizing Map (SOM) to automatically guide the selection of important inputs during training, improving the classification of data with redundant inputs. The method is examined against two traditional approaches, neural networks and Support Vector Machines (SVM), for the classification of EEG data as presented in previous work. In particular, the novel method identifies the features that are important for classification automatically, and these important features can then be used to improve the diagnostic ability of any of the above methods. The paper presents the results and shows how the automated identification successfully recovered the important features in the dataset, improving the classification results for all methods apart from linear discriminatory methods, which cannot separate the underlying nonlinear relationship in the data. In addition to achieving higher classification accuracy, the FWSOM has given insights into which features are important in the classification of each class (left- and right-hand movements), and these are corroborated by already published work in this area.

  3. Modeling of biaxial gimbal-less MEMS scanning mirrors

    NASA Astrophysics Data System (ADS)

    von Wantoch, Thomas; Gu-Stoppel, Shanshan; Senger, Frank; Mallas, Christian; Hofmann, Ulrich; Meurer, Thomas; Benecke, Wolfgang

    2016-03-01

    One- and two-dimensional MEMS scanning mirrors for resonant or quasi-stationary beam deflection are primarily known as tiny micromirror devices with aperture sizes up to a few millimeters, and they usually address low-power applications in high-volume markets, e.g. laser beam scanning pico-projectors or gesture recognition systems. In contrast, recently reported vacuum-packaged MEMS scanners feature mirror diameters up to 20 mm and integrated high-reflectivity dielectric coatings. These mirrors enable MEMS-based scanning for applications that require large apertures due to optical constraints, like 3D sensing or microscopy, as well as for high-power laser applications like laser phosphor displays, automotive lighting and displays, 3D printing and general laser material processing. This work presents modelling, control design and experimental characterization of gimbal-less MEMS mirrors with large aperture size. As an example, a resonant biaxial Quadpod scanner with 7 mm mirror diameter and four integrated PZT (lead zirconate titanate) actuators is analyzed. The finite element method (FEM) model developed and computed in COMSOL Multiphysics is used for calculating the eigenmodes of the mirror as well as for extracting a high-order (n < 10000) state space representation of the mirror dynamics with actuation voltages as system inputs and scanner displacement as system output. By applying model order reduction techniques in MATLAB, a compact state space system approximation of order n = 6 is computed. Based on this reduced-order model, feedforward control inputs for different, properly chosen scanner displacement trajectories are derived and tested using the original FEM model as well as the micromirror.
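
    The authors perform the reduction in MATLAB; an analogous sketch with the python-control package (assuming its balred routine for balanced truncation) builds a stable random state-space stand-in for the FEM-derived model and reduces it to order 6:

      import numpy as np
      import control as ct

      rng = np.random.default_rng(5)
      n = 40                                    # stand-in for the large FEM-derived order
      A = rng.normal(size=(n, n))
      A = A - (np.abs(np.linalg.eigvals(A).real).max() + 1) * np.eye(n)  # shift to stability
      B = rng.normal(size=(n, 1))               # actuation voltage input
      C = rng.normal(size=(1, n))               # scanner displacement output
      sys_full = ct.ss(A, B, C, 0)

      sys_red = ct.balred(sys_full, orders=6)   # balanced truncation to order 6
      print(sys_red.nstates)                    # 6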

  4. Compact optical processor for Hough and frequency domain features

    NASA Astrophysics Data System (ADS)

    Ott, Peter

    1996-11-01

    Shape recognition is necessary in a broad band of applications such as traffic sign or work piece recognition. It requires not only neighborhood processing of the input image pixels but global interconnection of them. The Hough transform (HT) performs such a global operation and it is well suited to the preprocessing stage of a shape recognition system. Translation-invariant features can be easily calculated from the Hough domain. We have implemented on the computer a neural network shape recognition system which contains a HT, a feature extraction, and a classification layer. The advantage of this approach is that the total system can be optimized with well-known learning techniques and that it can exploit the parallelism of the algorithms. However, the HT is a time-consuming operation. Parallel, optical processing is therefore advantageous. Several systems have been proposed, based on space multiplexing with arrays of holograms and CGHs, on time multiplexing with acousto-optic processors, or on image rotation with incoherent and coherent astigmatic optical processors. We took up the last-mentioned approach because 2D array detectors are read out line by line, so a 2D detector can achieve the same speed and is easier to implement. Coherent processing allows the implementation of filters in the frequency domain. Features based on wedge/ring, Gabor, or wavelet filters have been proven to show good discrimination capabilities for texture and shape recognition. The astigmatic lens system which is derived from the mathematical formulation of the HT is long and contains a non-standard, astigmatic element. By methods of lens transformations for coherent applications we map the original design to a shorter lens with a smaller number of well separated standard elements and with the same coherent system response. The final lens design still contains the frequency plane for filtering, and ray-tracing shows diffraction-limited performance. Image rotation can be done optically by a rotating prism. We realize it on a fast FLC-SLM of our lab as input device. The filters can be implemented on the same type of SLM with 128 by 128 square pixels, resulting in a total length of the lens of less than 50 cm.

  5. Direct model reference adaptive control of a flexible robotic manipulator

    NASA Technical Reports Server (NTRS)

    Meldrum, D. R.

    1985-01-01

    Quick, precise control of a flexible manipulator in a space environment is essential for future Space Station repair and satellite servicing. Numerous control algorithms have proven successful in controlling rigid manipulators with colocated sensors and actuators; however, few have been tested on a flexible manipulator with noncolocated sensors and actuators. In this thesis, a model reference adaptive control (MRAC) scheme based on command generator tracker theory is designed for a flexible manipulator. Quicker, more precise tracking results are expected over nonadaptive control laws for this MRAC approach. Equations of motion in modal coordinates are derived for a single-link, flexible manipulator with an actuator at the pinned end and a sensor at the free end. An MRAC is designed with the objective of controlling the torquing actuator so that the tip position follows a trajectory that is prescribed by the reference model. An appealing feature of this direct MRAC law is that it allows the reference model to have fewer states than the plant itself. Direct adaptive control also adjusts the controller parameters directly with knowledge of only the plant output and input signals.

  6. Three-Phase 3D Reconstruction of a LiCoO2 Cathode via FIB-SEM Tomography

    DOE PAGES

    Liu, Zhao; Chen-Wiegart, Yu-chen K.; Wang, Jun; ...

    2016-01-14

    Three-phase three-dimensional (3D) microstructural reconstructions of lithium-ion battery electrodes are critical input for 3D simulations of electrode lithiation/delithiation, which provide a detailed understanding of battery operation. In this report, 3D images of a LiCoO2 electrode are achieved using focused ion beam-scanning electron microscopy (FIB-SEM), with clear contrast among the three phases: LiCoO2 particles, carbonaceous phases (carbon and binder) and the electrolyte space. The good contrast was achieved by utilizing an improved FIB-SEM sample preparation method that combined infiltration of the electrolyte space with a low-viscosity silicone resin and triple ion-beam polishing. Morphological parameters quantified include phase volume fraction, surface area, feature size distribution, connectivity, and tortuosity. Electrolyte tortuosity was determined using two different geometric calculations that were in good agreement. In conclusion, the electrolyte tortuosity distribution versus position within the electrode was found to be highly inhomogeneous; this will lead to inhomogeneous electrode lithiation/delithiation at high C-rates that could potentially cause battery degradation.

  7. The Unification Space implemented as a localist neural net: predictions and error-tolerance in a constraint-based parser.

    PubMed

    Vosse, Theo; Kempen, Gerard

    2009-12-01

    We introduce a novel computer implementation of the Unification-Space parser (Vosse and Kempen in Cognition 75:105-143, 2000) in the form of a localist neural network whose dynamics is based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen and Harbusch in Verb constructions in German and Dutch. Benjamins, Amsterdam, 2003), a lexicalist formalism with feature unification as binding operation. While the network is processing input word strings incrementally, the evolving shape of parse trees is represented in the form of changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least qualitatively and rudimentarily, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault-tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.

  8. Signal decomposition for surrogate modeling of a constrained ultrasonic design space

    NASA Astrophysics Data System (ADS)

    Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.

    2018-04-01

    The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model used to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model using a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
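
    The dimensionality reduction works because a Gaussian chirplet atom is described by a handful of parameters (center time, duration, start frequency, chirp rate) rather than by raw samples. A toy sketch, with invented ultrasonic numbers, recovers the chirp rate of a synthetic echo by a coarse matched-filter search:

      import numpy as np

      def chirplet(t, tc, dt, f0, c):
          """Gaussian-windowed linear chirp centered at tc."""
          return np.exp(-0.5 * ((t - tc) / dt) ** 2) * np.cos(
              2 * np.pi * (f0 * (t - tc) + 0.5 * c * (t - tc) ** 2))

      t = np.linspace(0, 1e-5, 1000)                  # ultrasonic time base (s)
      signal = chirplet(t, 5e-6, 1e-6, 5e6, 1e11)     # synthetic echo, rate 1e11 Hz/s

      # Coarse grid search over chirp rate; real decompositions refine all parameters.
      rates = np.linspace(0, 2e11, 41)
      corr = [abs(np.dot(signal, chirplet(t, 5e-6, 1e-6, 5e6, c))) for c in rates]
      print(f"best chirp rate ~ {rates[int(np.argmax(corr))]:.2e} Hz/s")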

  9. Prediction of hourly PM2.5 using a space-time support vector regression model

    NASA Astrophysics Data System (ADS)

    Yang, Wentao; Deng, Min; Xu, Feng; Wang, Hang

    2018-05-01

    Real-time air quality prediction has been an active field of research in atmospheric environmental science. The existing methods of machine learning are widely used to predict pollutant concentrations because of their enhanced ability to handle complex non-linear relationships. However, because pollutant concentration data, as typical geospatial data, also exhibit spatial heterogeneity and spatial dependence, they may violate the assumptions of independent and identically distributed random variables in most of the machine learning methods. As a result, a space-time support vector regression model is proposed to predict hourly PM2.5 concentrations. First, to address spatial heterogeneity, spatial clustering is executed to divide the study area into several homogeneous or quasi-homogeneous subareas. To handle spatial dependence, a Gauss vector weight function is then developed to determine spatial autocorrelation variables as part of the input features. Finally, a local support vector regression model with spatial autocorrelation variables is established for each subarea. Experimental data on PM2.5 concentrations in Beijing are used to verify whether the results of the proposed model are superior to those of other methods.

  10. Attention model of binocular rivalry

    PubMed Central

    Rankin, James; Rinzel, John; Carrasco, Marisa; Heeger, David J.

    2017-01-01

    When the corresponding retinal locations in the two eyes are presented with incompatible images, a stable percept gives way to perceptual alternations in which the two images compete for perceptual dominance. As perceptual experience evolves dynamically under constant external inputs, binocular rivalry has been used for studying intrinsic cortical computations and for understanding how the brain regulates competing inputs. Converging behavioral and EEG results have shown that binocular rivalry and attention are intertwined: binocular rivalry ceases when attention is diverted away from the rivalry stimuli. In addition, the competing image in one eye suppresses the target in the other eye through a pattern of gain changes similar to those induced by attention. These results require a revision of the current computational theories of binocular rivalry, in which the role of attention is ignored. Here, we provide a computational model of binocular rivalry. In the model, competition between two images in rivalry is driven by both attentional modulation and mutual inhibition, which have distinct selectivity (feature vs. eye of origin) and dynamics (relatively slow vs. relatively fast). The proposed model explains a wide range of phenomena reported in rivalry, including the three hallmarks: (i) binocular rivalry requires attention; (ii) various perceptual states emerge when the two images are swapped between the eyes multiple times per second; (iii) the dominance duration as a function of input strength follows Levelt’s propositions. With a bifurcation analysis, we identified the parameter space in which the model’s behavior was consistent with experimental results. PMID:28696323
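
    The mutual-inhibition ingredient of such models can be sketched as a two-population rate model with slow adaptation, which alternates dominance under constant input; the published model layers a distinct, slower attentional modulation on top of this. Parameters below are chosen only to produce alternations and are not the paper's fitted values.

      import numpy as np

      def simulate(T=20.0, dt=1e-3, I=1.0, beta=3.0, g=3.0, tau=0.01, tau_a=1.0):
          r = np.array([0.6, 0.4])          # firing rates of the two eye populations
          a = np.zeros(2)                   # slow adaptation variables
          dominant = []
          for _ in range(int(T / dt)):
              drive = I - beta * r[::-1] - g * a        # input minus cross-inhibition
              r += dt / tau * (-r + np.clip(drive, 0, None))
              a += dt / tau_a * (-a + r)                # adaptation tracks the rate
              dominant.append(int(r[1] > r[0]))
          return np.array(dominant)

      d = simulate()
      print(f"perceptual switches in 20 s: {np.count_nonzero(np.diff(d))}")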

  11. The Jet Propulsion Laboratory shared control architecture and implementation

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Hayati, Samad

    1990-01-01

    A hardware and software environment for shared control of telerobot task execution has been implemented. Modes of task execution range from fully teleoperated to fully autonomous, as well as shared, where hand controller inputs from the human operator are mixed with autonomous system inputs in real time. The objective of the shared control environment is to aid the telerobot operator during task execution by merging real-time operator control from hand controllers with autonomous control to simplify task execution for the operator. The operator is the principal command source and can assign as much autonomy for a task as desired. The shared control hardware environment consists of two PUMA 560 robots, two 6-axis force-reflecting hand controllers, Universal Motor Controllers for each of the robots and hand controllers, a SUN4 computer, and a VME chassis containing 68020 processors and input/output boards. The operator interface for shared control, the User Macro Interface (UMI), is a menu-driven interface used to design a task and assign the levels of teleoperated and autonomous control. The operator also sets up the system monitor, which checks safety limits during task execution. Cartesian-space degrees of freedom for teleoperated and/or autonomous control inputs are selected within UMI, as well as the weightings for the teleoperated and autonomous inputs. These are then used during task execution to determine the mix of teleoperation and autonomous inputs. Some of the autonomous control primitives available to the user are Joint-Guarded-Move, Cartesian-Guarded-Move, Move-To-Touch, Pin-Insertion/Removal, Door/Crank-Turn, Bolt-Turn, and Slide. The operator can execute a task using pure teleoperation or mix control execution from the autonomous primitives with teleoperated inputs. Presently the shared control environment supports single-arm task execution; work is underway to extend the environment to dual-arm control. Teleoperation during shared control is Cartesian-space control only, and no force reflection is provided. Force-reflecting teleoperation and joint-space operator inputs are planned extensions to the environment.
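
    The weighted mixing described above reduces, per Cartesian axis, to a weighted sum of the operator's hand-controller input and the autonomous primitive's input. A minimal sketch (weight names and values are illustrative, not the JPL implementation):

      import numpy as np

      def shared_command(u_teleop, u_auto, w_teleop, w_auto):
          """Mix operator and autonomous Cartesian-space inputs, axis by axis."""
          return w_teleop * np.asarray(u_teleop) + w_auto * np.asarray(u_auto)

      # Operator steers x/y; a guarded-move primitive controls the z approach.
      u = shared_command(u_teleop=[0.02, -0.01, 0.0],
                         u_auto=[0.0, 0.0, -0.005],
                         w_teleop=np.array([1.0, 1.0, 0.0]),
                         w_auto=np.array([0.0, 0.0, 1.0]))
      print(u)   # [ 0.02  -0.01  -0.005]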

  12. RAVEN User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua Joseph

    2015-10-01

RAVEN is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermal-hydraulic code RELAP-7, currently under development at the Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating agnostically with any system code. This agnosticism includes providing Application Programming Interfaces (APIs). These APIs allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible via input files or via Python interfaces. RAVEN is capable of investigating the system response and the input space using Monte Carlo, grid, or Latin hypercube sampling schemes, but its strength lies in system feature discovery, such as limit surfaces separating regions of the input space leading to system failure, using dynamic supervised learning techniques. The development of RAVEN started in 2012 when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need for a modern risk evaluation framework became stronger. RAVEN's principal assignment is to provide the software and algorithms necessary to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program. RISMC is one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just the identification of the frequency of an event potentially leading to a system failure, but the closeness (or not) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g., for an important process such as peak pressure in a pipe) is exceeded under certain conditions. The initial development of RAVEN focused on providing dynamic risk assessment capability to RELAP-7, currently under development at the INL and likely the future replacement of the RELAP5-3D code. Most of the capabilities implemented with RELAP-7 as the principal focus are easily deployable for other system codes. For this reason, several side activities are currently ongoing to couple RAVEN with software such as RELAP5-3D. The aim of this document is to explain the input requirements, focusing on the input structure.

  13. RAVEN User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua Joseph

    2016-02-01

RAVEN is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermal-hydraulic code RELAP-7, currently under development at the Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating agnostically with any system code. This agnosticism includes providing Application Programming Interfaces (APIs). These APIs allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible via input files or via Python interfaces. RAVEN is capable of investigating the system response and the input space using Monte Carlo, grid, or Latin hypercube sampling schemes, but its strength lies in system feature discovery, such as limit surfaces separating regions of the input space leading to system failure, using dynamic supervised learning techniques. The development of RAVEN started in 2012 when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need for a modern risk evaluation framework became stronger. RAVEN's principal assignment is to provide the software and algorithms necessary to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program. RISMC is one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just the identification of the frequency of an event potentially leading to a system failure, but the closeness (or not) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g., for an important process such as peak pressure in a pipe) is exceeded under certain conditions. The initial development of RAVEN focused on providing dynamic risk assessment capability to RELAP-7, currently under development at the INL and likely the future replacement of the RELAP5-3D code. Most of the capabilities implemented with RELAP-7 as the principal focus are easily deployable for other system codes. For this reason, several side activities are currently ongoing to couple RAVEN with software such as RELAP5-3D. The aim of this document is to explain the input requirements, focusing on the input structure.

  14. RAVEN User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua Joseph

    2017-03-01

RAVEN is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermal-hydraulic code RELAP-7, currently under development at the Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating agnostically with any system code. This agnosticism includes providing Application Programming Interfaces (APIs). These APIs allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible via input files or via Python interfaces. RAVEN is capable of investigating the system response and the input space using Monte Carlo, grid, or Latin hypercube sampling schemes, but its strength lies in system feature discovery, such as limit surfaces separating regions of the input space leading to system failure, using dynamic supervised learning techniques. The development of RAVEN started in 2012 when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need for a modern risk evaluation framework became stronger. RAVEN's principal assignment is to provide the software and algorithms necessary to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program. RISMC is one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just the identification of the frequency of an event potentially leading to a system failure, but the closeness (or not) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g., for an important process such as peak pressure in a pipe) is exceeded under certain conditions. The initial development of RAVEN focused on providing dynamic risk assessment capability to RELAP-7, currently under development at the INL and likely the future replacement of the RELAP5-3D code. Most of the capabilities implemented with RELAP-7 as the principal focus are easily deployable for other system codes. For this reason, several side activities are currently ongoing to couple RAVEN with software such as RELAP5-3D. The aim of this document is to explain the input requirements, focusing on the input structure.

  15. Evaluation of Spectral and Prosodic Features of Speech Affected by Orthodontic Appliances Using the Gmm Classifier

    NASA Astrophysics Data System (ADS)

Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela

    2014-01-01

The paper describes our experiment using Gaussian mixture models (GMM) for classification of speech uttered by a person wearing orthodontic appliances. For the GMM classification, the input feature vectors comprise the basic and complementary spectral properties as well as the supra-segmental parameters. The dependence of classification correctness on the number of parameters in the input feature vector and on the computational complexity is also evaluated. In addition, the influence of the initial parameter settings for the GMM training process was analyzed. The obtained recognition results are compared visually in the form of graphs as well as numerically in the form of tables and confusion matrices for test sentences uttered using three configurations of orthodontic appliances.
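
    A minimal sketch of the per-class GMM classification scheme the abstract describes: one mixture model is fit per appliance configuration, and feature vectors are assigned to the class with the highest log-likelihood. The feature extraction itself is omitted; the random vectors below merely stand in for spectral/prosodic features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmm_classifier(X_by_class, n_components=8, seed=0):
    """Fit one GMM per class on its training feature vectors."""
    return {label: GaussianMixture(n_components=n_components,
                                   covariance_type="diag",
                                   random_state=seed).fit(X)
            for label, X in X_by_class.items()}

def classify(models, X):
    """Assign each feature vector to the class with the highest log-likelihood."""
    labels = list(models)
    ll = np.column_stack([models[l].score_samples(X) for l in labels])
    return [labels[i] for i in ll.argmax(axis=1)]

# Toy usage with random stand-ins for spectral + prosodic feature vectors.
rng = np.random.default_rng(0)
train = {"appliance_A": rng.normal(0.0, 1.0, (200, 16)),
         "appliance_B": rng.normal(0.5, 1.0, (200, 16))}
models = train_gmm_classifier(train)
print(classify(models, rng.normal(0.0, 1.0, (3, 16))))
```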

  16. A novel comparator featured with input data characteristic

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaobo; Ye, Desheng; Xu, Xiangmin; Zheng, Shuai

    2016-03-01

Two types of low-power asynchronous comparators exploiting the statistical characteristics of the input data are proposed in this article. The asynchronous ripple comparator stops comparing at the first unequal bit but still propagates the result to the least significant bit. The pre-stop asynchronous comparator can stop comparing completely and obtain the result immediately. The proposed and contrasting comparators were implemented in an SMIC 0.18 μm process with different bit widths. Simulation shows that the proposed pre-stop asynchronous comparator features the lowest power consumption, the shortest average propagation delay and the highest area efficiency among the comparators. Data paths of a low-density parity-check decoder using the proposed pre-stop asynchronous comparators are the most power-efficient compared with data paths using synthesised, clock-gating and bitwise competition logic comparators.

  17. Clustering by reordering of similarity and Laplacian matrices: Application to galaxy clusters

    NASA Astrophysics Data System (ADS)

    Mahmoud, E.; Shoukry, A.; Takey, A.

    2018-04-01

Similarity metrics, kernels and similarity-based algorithms have gained much attention due to their increasing applications in information retrieval, data mining, pattern recognition and machine learning. Similarity graphs are often adopted as the underlying representation of similarity matrices and are at the origin of known clustering algorithms such as spectral clustering. Similarity matrices offer the advantage of working in object-object (two-dimensional) space, where visualization of cluster similarities is available, instead of object-feature (multi-dimensional) space. In this paper, sparse ɛ-similarity graphs are constructed and decomposed into strong components using appropriate methods such as the Dulmage-Mendelsohn permutation (DMperm) and/or Reverse Cuthill-McKee (RCM) algorithms. The obtained strong components correspond to groups (clusters) in the input (feature) space. The parameter ɛi is estimated locally at each data point i from a corresponding narrow range of the number of nearest neighbors. Although more advanced clustering techniques are available, our method has the advantages of simplicity, better complexity and direct visualization of cluster similarities in a two-dimensional space. Also, no prior information about the number of clusters is needed. We conducted our experiments on two- and three-dimensional, small and large synthetic datasets as well as on a real astronomical dataset. The results are verified graphically and analyzed using gap statistics over a range of neighbors to verify the robustness of the algorithm and the stability of the results. Combining the proposed algorithm with gap statistics provides a promising tool for solving clustering problems. An astronomical application is conducted, confirming the existence of 45 galaxy clusters around the X-ray positions of galaxy clusters in the redshift range [0.1..0.8]. We re-estimate the photometric redshifts of the identified galaxy clusters and obtain acceptable values compared to published spectroscopic redshifts, with a 0.029 standard deviation of their differences.
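
    A compact sketch of the pipeline under stated assumptions: a sparse ɛ-similarity graph with a locally estimated ɛi (here simply the distance to the k-th nearest neighbor), reordered with SciPy's Reverse Cuthill-McKee routine, and clusters read off the connected components. The paper's DMperm variant and gap-statistic validation are omitted.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee, connected_components
from scipy.spatial.distance import cdist

def eps_similarity_clusters(X, n_neighbors=5):
    """Build a sparse epsilon-similarity graph with a local epsilon_i
    (distance to the k-th nearest neighbor of point i), reorder it with
    Reverse Cuthill-McKee, and read clusters off the connected components."""
    D = cdist(X, X)
    # Local epsilon: distance to the n-th nearest neighbor (column 0 is self).
    eps = np.sort(D, axis=1)[:, n_neighbors]
    # Symmetric adjacency: an edge only if both endpoints accept the distance.
    A = csr_matrix((D <= eps[:, None]) & (D <= eps[None, :]) & (D > 0))
    perm = reverse_cuthill_mckee(A, symmetric_mode=True)  # banded reordering
    n_clusters, labels = connected_components(A, directed=False)
    return n_clusters, labels, perm

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 8])
k, labels, perm = eps_similarity_clusters(X)
print("clusters found:", k)
```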

  18. Reverse engineering the face space: Discovering the critical features for face identification.

    PubMed

    Abudarham, Naphtali; Yovel, Galit

    2016-01-01

How do we identify people? What are the critical facial features that define an identity and determine whether two faces belong to the same person or to different people? To answer these questions, we applied the face space framework, according to which faces are represented as points in a multidimensional feature space, such that face space distances are correlated with perceptual similarities between faces. In particular, we developed a novel method that allowed us to reveal the critical dimensions (i.e., critical features) of the face space. To that end, we constructed a concrete face space, which included 20 facial features of natural face images, and asked human observers to evaluate feature values (e.g., how thick the lips are). Next, we systematically and quantitatively changed facial features and measured the perceptual effects of these manipulations. We found that critical features were those for which participants have high perceptual sensitivity (PS) for detecting differences across identities (e.g., which of two faces has thicker lips). Furthermore, these high-PS features vary minimally across different views of the same identity, suggesting that high-PS features support face recognition across different images of the same face. The methods described here establish an infrastructure for discovering the critical features of other face categories not studied here (e.g., Asian or familiar faces) as well as other aspects of face processing, such as attractiveness or trait inferences.

  19. Flatness-based control in successive loops for stabilization of heart's electrical activity

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Melkikh, Alexey

    2016-12-01

The article proposes a new flatness-based control method implemented in successive loops, which allows for stabilization of the heart's electrical activity. The heart's pacemaking function is modeled as a set of coupled oscillators which can potentially exhibit chaotic behavior. It is shown that this model satisfies differential flatness properties. Next, the control and stabilization of this model are performed with the use of flatness-based control implemented in cascading loops. By applying a per-row decomposition of the state-space model of the coupled oscillators, a set of nonlinear differential equations is obtained. Differential flatness properties are shown to hold for the subsystems associated with each of the aforementioned differential equations, and a local flatness-based controller is designed for each subsystem. For the i-th subsystem, state variable xi is chosen to be the flat output and state variable xi+1 is taken to be a virtual control input. The value of the virtual control input which eliminates the output tracking error for the i-th subsystem then becomes the reference setpoint for the (i+1)-th subsystem. In this manner, the control of the entire state-space model is performed by successive flatness-based control loops. Upon arriving at the n-th row of the state-space model, one computes the control input that can actually be exerted on the biosystem. This real control input to the coupled oscillators' system recursively contains all the virtual control inputs associated with the previous n-1 rows of the state-space model. This control approach achieves asymptotic elimination of the chaotic oscillation effects and stabilization of the heart's pulsation rhythm. The stability of the proposed control scheme is proven with the use of Lyapunov analysis.
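
    The successive-loops idea can be illustrated on a generic chain of integrators rather than on the coupled-oscillator heart model itself: each loop treats the next state variable as a virtual control whose setpoint cancels the current loop's tracking error, and the innermost loop produces the real input. The gains, reference signal, and finite-difference derivative estimates below are illustrative choices.

```python
import numpy as np

# Successive-loops sketch on a chain of integrators x_i' = x_{i+1}, x_n' = u.
# Loop i drives x_i to its setpoint by commanding a virtual control v_i;
# v_i then becomes the setpoint of loop i+1, and the last v is the real input.
n, k = 3, 8.0                      # state dimension, per-loop gain
dt, T = 1e-3, 10.0
steps = int(T / dt)
x = np.zeros(n)
ref = lambda t: np.sin(t)          # flat-output reference for x_1

prev_sp = np.zeros(n)              # previous setpoints, for derivative estimates
log = []
for s in range(steps):
    t = s * dt
    sp = ref(t)                    # setpoint entering loop 1
    for i in range(n):
        sp_dot = (sp - prev_sp[i]) / dt if s > 0 else 0.0
        prev_sp[i] = sp
        v = sp_dot - k * (x[i] - sp)   # virtual control of loop i
        sp = v                         # becomes the setpoint of loop i+1
    u = sp                             # innermost virtual control = real input
    x[:-1] += dt * x[1:]               # chain-of-integrators dynamics
    x[-1] += dt * u
    log.append(x[0])

print("final tracking error: %.4f" % abs(log[-1] - ref(T)))
```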

20. Formulation of a mathematical model using parameter estimation techniques from flight test data for the Bell 427 helicopter and the F/A-18 aircraft for aeroservoelasticity research

    NASA Astrophysics Data System (ADS)

    Nadeau-Beaulieu, Michel

In this thesis, three mathematical models are built from flight test data for different aircraft design applications: a ground dynamics model for the Bell 427 helicopter, a prediction model for the rotor and engine parameters of the same helicopter type, and a simulation model for the aeroelastic deflections of the F/A-18. In the ground dynamics application, the model structure is derived from physics: the normal force between the helicopter and the ground is modelled as a vertical spring, and the frictional force is modelled with static and dynamic friction coefficients. The ground dynamics model coefficients are optimized to ensure that the model matches the landing data within the FAA (Federal Aviation Administration) tolerance bands for a level D flight simulator. In the rotor and engine application, the rotor torques (main and tail), the engine torque and the main rotor speed are estimated using a state-space model. The model inputs are nonlinear terms derived from the pilot control inputs and the helicopter states. The model parameters are identified using the subspace method and are further optimised with the Levenberg-Marquardt minimisation algorithm. The model built with the subspace method provides an excellent estimate of the outputs within the FAA tolerance bands. The F/A-18 aeroelastic state-space model is built from flight test data. The research concerning this model is divided into two parts. First, the deflection of a given structural surface on the aircraft following a differential aileron control input is represented by a multiple-input single-output linear model whose inputs are the aileron positions and the structural surface deflections. Second, a single state-space model is used to represent the deflection of the aircraft wings and trailing-edge flaps following any control input. In this case the model is made nonlinear by multiplying model inputs into higher-order terms and using these terms as the inputs of the state-space equations. In both cases, the identification method is the subspace method. Most fit coefficients between the estimated and the measured signals are above 73%, and most correlation coefficients are higher than 90%.

  1. An accelerated training method for back propagation networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O. (Inventor)

    1993-01-01

The principal objective is to provide a training procedure for a feed-forward, back propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty lies in the method of extracting, from the set of input data, a set of features which can serve to represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
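
    A minimal sketch of the described preprocessing, assuming the singular vectors come from an SVD of the centered input matrix: projecting onto the leading right singular vectors maximizes the standard deviations of the projections, and the compressed vectors then serve as the network's inputs.

```python
import numpy as np

def svd_features(X, n_keep):
    """Project input vectors onto the leading singular vectors of the
    (centered) input matrix; these directions maximize the standard
    deviations of the projected inputs."""
    Xc = X - X.mean(axis=0)
    # Rows are samples; the right singular vectors span the input space.
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_keep].T, Vt[:n_keep]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 64))            # raw network training inputs
Z, basis = svd_features(X, n_keep=8)      # simplified representation
print(Z.shape, "std per component:", np.round(Z.std(axis=0)[:3], 2))
```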

  2. Activity Recognition in Egocentric video using SVM, kNN and Combined SVMkNN Classifiers

    NASA Astrophysics Data System (ADS)

    Sanal Kumar, K. P.; Bhavani, R., Dr.

    2017-08-01

Egocentric vision is a unique perspective in computer vision which is human-centric. The recognition of egocentric actions is a challenging task which helps in assisting elderly people, disabled patients and so on. In this work, life-logging activity videos are taken as input. The activities are labeled at two levels: a top level and a second level. Recognition is performed using features such as Histogram of Oriented Gradients (HOG), Motion Boundary Histogram (MBH) and Trajectory. The features are fused into a single feature vector, which is then reduced using Principal Component Analysis (PCA). The reduced features are provided as input to the classifiers: Support Vector Machine (SVM), k-nearest neighbor (kNN) and a combined SVM and kNN classifier (combined SVMkNN). These classifiers are evaluated, and the combined SVMkNN provided better results than other classifiers in the literature.
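
    A sketch of this pipeline under stated assumptions: random arrays stand in for the HOG, MBH and trajectory descriptors, and the SVM and kNN outputs are combined by averaging class probabilities, which is one plausible reading of "combined SVMkNN".

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative stand-ins for the HOG, MBH and trajectory descriptors per video.
rng = np.random.default_rng(0)
n = 300
hog, mbh, traj = rng.normal(size=(n, 96)), rng.normal(size=(n, 64)), rng.normal(size=(n, 30))
y = rng.integers(0, 4, n)                     # activity labels

X = np.hstack([hog, mbh, traj])               # feature fusion into one vector
X = PCA(n_components=40).fit_transform(X)     # dimensionality reduction

svm = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X, y)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# One way to combine SVM and kNN: average their class-probability estimates.
proba = (svm.predict_proba(X) + knn.predict_proba(X)) / 2
pred = proba.argmax(axis=1)
print("training accuracy of combined SVMkNN: %.2f" % (pred == y).mean())
```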

  3. Challenges and Issues of Radiation Damage Tools for Space Missions

    NASA Astrophysics Data System (ADS)

    Tripathi, Ram; Wilson, John

    2006-04-01

NASA has a new vision for space exploration in the 21st century encompassing a broad range of human and robotic missions, including missions to the Moon, Mars and beyond. Exposure to the hazards of severe space radiation in long-duration deep space missions is 'the show stopper.' Thus, protection from the hazards of severe space radiation is of paramount importance to the new vision. Accurate risk assessments critically depend on the accuracy of the input information about the interaction of ions with materials, electronics and tissues. A huge amount of essential experimental information is needed for all the ions in space, across the periodic table, for a wide range of energies spanning many orders of magnitude (up to the trillion electron volt range), for the radiation protection engineering of space missions; this information is simply not available (due to the high costs) and probably never will be. In addition, the accuracy of the input information and database is very critical and of paramount importance for space exposure assessments, particularly in view of the agency's vision for deep space exploration. The vital role and importance of nuclear physics for space missions, and the related challenges and issues, will be discussed, and a few examples will be presented for space missions.

  4. Analytical modeling of operating characteristics of premixing-prevaporizing fuel-air mixing passages. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.

    1982-01-01

    A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.

  5. An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems

    DTIC Science & Technology

    1991-12-01

neural network and the feedforward neural network studied is the single layer perceptron artificial neural network. The recurrent artificial neural network input...features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input...

  6. Muscle synergies in neuroscience and robotics: from input-space to task-space perspectives.

    PubMed

    Alessandro, Cristiano; Delis, Ioannis; Nori, Francesco; Panzeri, Stefano; Berret, Bastien

    2013-01-01

In this paper we review the works related to muscle synergies that have been carried out in neuroscience and control engineering. In particular, we refer to the hypothesis that the central nervous system (CNS) generates desired muscle contractions by combining a small number of predefined modules, called muscle synergies. We provide an overview of the methods that have been employed to test the validity of this scheme, and we show how the concept of muscle synergy has been generalized for the control of artificial agents. The comparison between these two lines of research, in particular their different goals and approaches, is instrumental in explaining the computational implications of the hypothesized modular organization. Moreover, it clarifies the importance of assessing the functional role of muscle synergies: although these basic modules are defined at the level of muscle activations (input-space), they should result in the effective accomplishment of the desired task. This requirement is not always explicitly considered in experimental neuroscience, as muscle synergies are often estimated solely by analyzing recorded muscle activities. We suggest that synergy extraction methods should explicitly take task execution variables into account, thus moving from a perspective purely based on input-space to one grounded in task-space as well.

  7. Learning feature representations with a cost-relevant sparse autoencoder.

    PubMed

    Längkvist, Martin; Loutfi, Amy

    2015-02-01

There is an increasing interest in the machine learning community in automatically learning feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task-relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets, and the test error rate is compared to that of a standard sparse autoencoder and other methods, such as the denoising autoencoder and the contractive autoencoder.
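
    A minimal sketch of the idea, assuming the weighting enters as a per-input-dimension factor on the squared reconstruction error and sparsity is imposed with an L1 penalty on the hidden activations; the paper's exact cost function may differ.

```python
import torch
import torch.nn as nn

class WeightedSparseAE(nn.Module):
    """Sparse autoencoder whose reconstruction error is re-weighted so that
    noisy or task-irrelevant input dimensions contribute less to learning."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        return self.dec(h), h

def loss_fn(x, x_hat, h, weights, sparsity=1e-3):
    # Per-dimension weights down-weight noisy inputs; the L1 penalty on the
    # hidden activations encourages sparse feature representations.
    recon = (weights * (x - x_hat) ** 2).mean()
    return recon + sparsity * h.abs().mean()

model = WeightedSparseAE(n_in=784, n_hidden=128)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                 # a batch of (illustrative) image vectors
weights = torch.ones(784)               # e.g. lower values on noisy pixels
for _ in range(100):
    opt.zero_grad()
    x_hat, h = model(x)
    loss = loss_fn(x, x_hat, h, weights)
    loss.backward()
    opt.step()
print(float(loss))
```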

  8. The ESA Space Environment Information System (SPENVIS)

    NASA Astrophysics Data System (ADS)

    Heynderickx, D.; Quaghebeur, B.; Evans, H. D. R.

    2002-01-01

The ESA SPace ENVironment Information System (SPENVIS) provides standardized access to models of the hazardous space environment through a user-friendly WWW interface. The interface includes parameter input with extensive defaulting, definition of user environments, streamlined production of results (both in graphical and textual form), background information, and on-line help. It is available on-line at http://www.spenvis.oma.be/spenvis/. SPENVIS is designed to help spacecraft engineers perform rapid analyses of environmental problems and, with extensive documentation and tutorial information, allows engineers with relatively little familiarity with the models to produce reliable results. It has been developed in response to the increasing pressure for rapid-response tools for system engineering, especially in low-cost commercial and educational programmes. It is very useful in conjunction with radiation effects and electrostatic charging testing in the context of hardness assurance. SPENVIS is based on internationally recognized standard models and methods in many domains. It uses an ESA-developed orbit generator to produce orbital point files necessary for many different types of problem. It has various reporting and graphical utilities, and extensive help facilities. The SPENVIS radiation module features models of the proton and electron radiation belts, as well as solar energetic particle and cosmic ray models. The particle spectra serve as input to models of ionising dose (SHIELDOSE), Non-Ionising Energy Loss (NIEL), and Single Event Upsets (CREME). Material shielding is taken into account for all these models, either as a set of user-defined shielding thicknesses, or in combination with a sectoring analysis that produces a shielding distribution from a geometric description of the satellite system. A sequence of models, from orbit generator to folding dose curves with a shielding distribution, can be run as one process, which minimizes user interaction and facilitates multiple runs with different orbital or shielding configurations. SPENVIS features a number of models and tools for evaluating spacecraft charging. The DERA DICTAT tool for evaluation of internal charging calculates the electron current that passes through a conductive shield and becomes deposited inside a dielectric, and predicts whether an electrostatic discharge will occur. SPENVIS has implemented the DERA EQUIPOT non-geometrical tool for assessing material susceptibility to charging in typical orbital environments, including polar and GEO environments. SPENVIS also includes SOLARC, for assessment of the current collection and the floating potential of solar arrays in LEO. Finally, the system features access to data from surface charging events on CRRES and the Russian Gorizont spacecraft, in the form of spectrograms and double Maxwellian fit parameters. SPENVIS also contains an active, integrated version of the ECSS Space Environment Standard, and access to in-flight data. Apart from radiation and plasma environments, SPENVIS includes meteoroid and debris models, atmospheric models (including atomic oxygen), and magnetic field models implemented by means of the UNILIB library for magnetic coordinate evaluation, magnetic field line tracing and drift shell tracing. The UNILIB library is freely accessible from the Web (http://www.magnet.oma.be/unilib/) for downloading in the form of a Fortran object library for different platforms (DecAlpha, SunOS, HPUX and PC/MS-Windows).

  9. Feature extraction from high resolution satellite imagery as an input to the development and rapid update of a METRANS geographic information system (GIS).

    DOT National Transportation Integrated Search

    2011-06-01

This report describes an accuracy assessment of extracted features derived from three subsets of Quickbird pan-sharpened high resolution satellite imagery for the area of the Port of Los Angeles, CA. Visual Learning Systems Feature Analyst and D...

  10. Rotation invariant features for wear particle classification

    NASA Astrophysics Data System (ADS)

    Arof, Hamzah; Deravi, Farzin

    1997-09-01

This paper investigates the ability of a set of rotation invariant features to classify images of wear particles found in the used lubricating oil of machinery. The rotation invariant attribute of the features derives from the property that the magnitudes of Fourier transform coefficients do not change with a spatial shift of the input elements. By analyzing individual circular neighborhoods centered at every pixel in an image, local and global texture characteristics of an image can be described. A number of input sequences are formed from the intensities of pixels on concentric rings of various radii measured from the center of each neighborhood. Fourier transforming the sequences generates coefficients whose magnitudes are invariant to rotation. Rotation invariant features extracted from these coefficients were utilized to classify wear particle images obtained from a number of different particles captured at different orientations. In an experiment involving images of 6 classes, the circular neighborhood features obtained a 91% recognition rate, which compares favorably to the 76% rate achieved by features of a 6 by 6 co-occurrence matrix.
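
    A sketch of the ring-sampling construction, assuming nearest-pixel sampling on each circle: rotating the image circularly shifts each ring sequence, so the FFT magnitudes of the sequences are (up to resampling error) rotation invariant.

```python
import numpy as np

def ring_fft_features(img, center, radii, n_samples=64):
    """Rotation-invariant descriptors: sample pixel intensities on concentric
    rings around `center`; an image rotation becomes a circular shift of each
    ring sequence, leaving the FFT magnitudes unchanged."""
    h, w = img.shape
    angles = 2 * np.pi * np.arange(n_samples) / n_samples
    feats = []
    for r in radii:
        ys = np.clip(np.round(center[0] + r * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip(np.round(center[1] + r * np.cos(angles)).astype(int), 0, w - 1)
        ring = img[ys, xs]
        feats.append(np.abs(np.fft.rfft(ring)))  # shift-invariant magnitudes
    return np.concatenate(feats)

img = np.random.rand(128, 128)          # stand-in for a wear particle image
f = ring_fft_features(img, center=(64, 64), radii=(5, 10, 15, 20))
print(f.shape)
```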

  11. Evaluation of touch-sensitive screen tablet terminal button size and spacing accounting for effect of fingertip contact angle.

    PubMed

    Nishimura, T; Doi, K; Fujimoto, H

    2015-08-01

Touch-sensitive screen terminals enabling intuitive operation are used as input interfaces in a wide range of fields. Tablet terminals are among the most common devices with a touch-sensitive screen. They are highly portable, enabling use under various conditions. On the other hand, they require a GUI designed to prevent usability from degrading under these varied conditions. For example, the angle of fingertip contact with the display changes according to finger posture during operation and how the case is held. When a human fingertip makes contact with an object, the contact area between the fingertip and the contact object increases or decreases as the contact angle changes. A touch-sensitive screen detects positions using the change in capacitance of the area touched by the fingertip; hence, differences in the contact area between the touch-sensitive screen and the fingertip resulting from different forefinger angles during operation could affect operability. However, this effect has never been studied. We therefore conducted an experiment to investigate the relationship between button size/spacing and operability, while taking the effect of fingertip contact angle into account. As a result, we were able to specify the button size and spacing conditions that enable accurate and fast operation regardless of the forefinger contact angle.

  12. Fuzzy Control/Space Station automation

    NASA Technical Reports Server (NTRS)

    Gersh, Mark

    1990-01-01

    Viewgraphs on fuzzy control/space station automation are presented. Topics covered include: Space Station Freedom (SSF); SSF evolution; factors pointing to automation & robotics (A&R); astronaut office inputs concerning A&R; flight system automation and ground operations applications; transition definition program; and advanced automation software tools.

  13. A new method of building footprints detection using airborne laser scanning data and multispectral image

    NASA Astrophysics Data System (ADS)

    Luo, Yiping; Jiang, Ting; Gao, Shengli; Wang, Xin

    2010-10-01

This paper presents a new approach for detecting building footprints from a combination of a registered aerial image with multispectral bands and airborne laser scanning data, synchronously obtained by a Leica Geosystems ALS40 and an Applanix DACS-301 on the same platform. A two-step method for building detection is presented, consisting of selecting 'building' candidate points and then classifying the candidate points. A digital surface model (DSM) derived from last-pulse laser scanning data was first filtered, and the laser points were classified into the classes 'ground' and 'building or tree' based on a mathematical morphological filter. Then, the 'ground' points were resampled into a digital elevation model (DEM), and a normalized DSM (nDSM) was generated from the DEM and DSM. The candidate points were selected from the 'building or tree' points by height value and an area threshold in the nDSM. The candidate points were further classified into building points and tree points using the support vector machine (SVM) classification method. Two classification tests were carried out, using features only from the laser scanning data and using associated features from the two input data sources. The features included height, height finite difference, RGB band values, and so on. The RGB values of points were acquired by matching the laser scanning data and the image using the collinearity equations. The features of the training points were presented as input data for the SVM classification method, and cross validation was used to select the best classification parameters. The decision function could be constructed from the classification parameters, and the class of each candidate point was determined by the decision function. The results showed that the associated features from the two input data sources were superior to features only from the laser scanning data. An accuracy of more than 90% was achieved for buildings with the first kind of features.

  14. Functional magnetic resonance imaging activation detection: fuzzy cluster analysis in wavelet and multiwavelet domains.

    PubMed

    Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali

    2005-09-01

    To present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with high dimension of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and their poor signal-to-noise ratio (SNR) that limits accurate classification of fMRI time series according to their activation contents. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare different methods, several points of the receiver operating characteristics (ROC) curves, using simulated data, are estimated and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to the conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to CC feature space. Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features. (c) 2005 Wiley-Liss, Inc.
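
    A minimal sketch of building a wavelet-domain feature vector for one voxel time series with PyWavelets; keeping the largest-magnitude coefficients is a simple stand-in for the paper's randomization-based selection of activation-related coefficients.

```python
import numpy as np
import pywt

def wavelet_features(ts, wavelet="db4", level=3, keep=10):
    """Multiscale feature vector for one fMRI time series: concatenate the
    approximation/detail coefficients of a discrete wavelet decomposition
    and keep the largest-magnitude coefficients."""
    coeffs = np.concatenate(pywt.wavedec(ts, wavelet, level=level))
    idx = np.argsort(np.abs(coeffs))[::-1][:keep]
    return coeffs[np.sort(idx)]

# Simulated voxel time course: a weak boxcar 'activation' plus noise.
t = np.arange(128)
ts = 0.5 * ((t // 16) % 2) + np.random.randn(128)
print(wavelet_features(ts))
```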

  15. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    PubMed Central

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew

    2008-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755

  16. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM also has a weakness: it is difficult to determine the optimal parameter values. SVM calculates the best linear separator in the input feature space according to the training data. To classify data which are non-linearly separable, SVM uses kernel tricks to transform the data into a linearly separable form in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as the linear kernel, polynomial, radial basis function (RBF) and sigmoid. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as the search algorithm for the optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selecting kernel parameters. The best accuracy has been upgraded from linear kernel: 85.12%, polynomial: 81.76%, RBF: 77.22%, sigmoid: 78.70%. However, for bigger data sizes, this method is not practical because it takes a lot of time.
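
    A sketch of the approach under stated assumptions: a small genetic algorithm (tournament selection, blend crossover, Gaussian mutation) searches log-scaled (C, gamma) values for an RBF-kernel SVM, with cross-validated accuracy as the fitness. The dataset and GA settings below are illustrative, not those of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=14, random_state=0)

def fitness(ind):
    """Cross-validated accuracy of an RBF SVM with genes (log10 C, log10 gamma)."""
    C, gamma = 10.0 ** ind[0], 10.0 ** ind[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Population of parameter pairs: log10(C) in [-2, 3], log10(gamma) in [-5, 1].
pop = rng.uniform([-2, -5], [3, 1], size=(20, 2))
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[scores.argmax()].copy()]               # elitism
    while len(new_pop) < len(pop):
        i, j = rng.choice(len(pop), 2, replace=False)
        k, l = rng.choice(len(pop), 2, replace=False)
        p1 = pop[i] if scores[i] > scores[j] else pop[j]  # tournament selection
        p2 = pop[k] if scores[k] > scores[l] else pop[l]
        child = 0.5 * (p1 + p2) + rng.normal(0, 0.3, 2)   # crossover + mutation
        new_pop.append(child)
    pop = np.array(new_pop)

scores = np.array([fitness(ind) for ind in pop])
print("best log10(C), log10(gamma):", np.round(pop[scores.argmax()], 2))
```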

  17. Contribution of sublinear and supralinear dendritic integration to neuronal computations

    PubMed Central

    Tran-Van-Minh, Alexandra; Cazé, Romain D.; Abrahamsson, Therése; Cathala, Laurence; Gutkin, Boris S.; DiGregorio, David A.

    2015-01-01

Nonlinear dendritic integration is thought to increase the computational ability of neurons. Most studies focus on how supralinear summation of excitatory synaptic responses arising from clustered inputs within single dendrites results in the enhancement of neuronal firing, enabling simple computations such as feature detection. Recent reports have shown that sublinear summation is also a prominent dendritic operation, extending the range of subthreshold input-output (sI/O) transformations conferred by dendrites. Like supralinear operations, sublinear dendritic operations also increase the repertoire of neuronal computations, but feature extraction requires different synaptic connectivity strategies for each of these operations. In this article we review the experimental and theoretical findings describing the biophysical determinants of the three primary classes of dendritic operations: linear, sublinear, and supralinear. We then review a Boolean algebra-based analysis of simplified neuron models, which provides insight into how dendritic operations influence neuronal computations. We highlight how neuronal computations are critically dependent on the interplay of dendritic properties (morphology and voltage-gated channel expression), spiking threshold and the distribution of synaptic inputs carrying particular sensory features. Finally, we describe how global (scattered) and local (clustered) integration strategies permit the implementation of similar classes of computations, one example being the object feature binding problem. PMID:25852470

  18. Neural net target-tracking system using structured laser patterns

    NASA Astrophysics Data System (ADS)

    Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun

    1996-06-01

In this paper, we describe a robot endeffector tracking system using sensory information from recently announced structured pattern laser diodes, which can generate images with several different types of structured pattern. The neural network approach is employed to recognize the robot endeffector under three types of motion: translation, scaling and rotation. Features for the neural network to detect the position of the endeffector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match unknown input features, recognizing the position of the robot endeffector. Since a minimal number of samples are used for different orientations of the robot endeffector in the system, an artificial neural network with generalization capability can be utilized for unknown input features. A feedforward neural network trained with back propagation learning is used to detect the position of the robot endeffector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control movements of the robot endeffector. Combining the two neural networks for recognizing the robot endeffector and estimating the motion with the preprocessing stage, the whole system keeps track of the robot endeffector effectively.

  19. Preprocessing for Eddy Dissipation Rate and TKE Profile Generation

    NASA Technical Reports Server (NTRS)

    Zak, J. Allen; Rodgers, William G., Jr.; McKissick, Burnell T. (Technical Monitor)

    2001-01-01

The Aircraft Vortex Spacing System (AVOSS), a set of algorithms to determine aircraft spacing according to wake vortex behavior prediction, requires turbulence profiles to appropriately determine arrival and departure aircraft spacing. The ambient atmospheric turbulence profile must always be produced, even if the result is an arbitrary (canned) profile. The original turbulence profile code was generated by North Carolina State University and was used in a non-real-time environment in the past, where all the input parameters could be carefully selected and screened prior to input. Since this code must now run in real time using actual field measurements as input, it became imperative to begin a data checking and screening process as part of the real-time implementation. The process described herein is a step towards ensuring that the best possible turbulence profile is always provided to AVOSS. Data fill-ins, constant profiles and arbitrary profiles are used only as a last resort, but are essential to ensure uninterrupted operation of AVOSS.

  20. Event-related potentials reveal the relations between feature representations at different levels of abstraction.

    PubMed

    Hannah, Samuel D; Shedden, Judith M; Brooks, Lee R; Grundy, John G

    2016-11-01

In this paper, we use behavioural methods and event-related potentials (ERPs) to explore the relations between informational and instantiated features, as well as the relation between feature abstraction and rule type. Participants are trained to categorize two species of fictitious animals and then to identify perceptually novel exemplars. Critically, two groups are given a perfectly predictive counting rule that, according to Hannah and Brooks (2009, Featuring familiarity: How a familiar feature instantiation influences categorization, Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale, 63, 263-275, http://doi.org/10.1037/a0017919), should orient them to using abstract informational features when categorizing the novel transfer items. A third group is taught a feature-list rule, which should orient them to using detailed instantiated features. One counting-rule group was taught the rule before any exposure to the actual stimuli, and the other immediately after training, having learned the instantiations first. The feature-list group was also taught the rule after training. The ERP results suggest that at test the two counting-rule groups processed items differently, despite their identical rule. This not only supports the distinction that informational and instantiated features are qualitatively different feature representations, but also implies that rules can readily operate over concrete inputs, in contradiction to traditional approaches that assume that rules necessarily act on abstract inputs.

  1. A general prediction model for the detection of ADHD and Autism using structural and functional MRI.

    PubMed

    Sen, Bhaskar; Borle, Neil C; Greiner, Russell; Brown, Matthew R G

    2018-01-01

This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features is input to a linear SVM classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known hold-out accuracies on these datasets when using only imaging data, exceeding previously published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.

  2. Temporal and spatial variability in thalweg profiles of a gravel-bed river

    USGS Publications Warehouse

    Madej, Mary Ann

    1999-01-01

    This study used successive longitudinal thalweg profiles in gravel-bed rivers to monitor changes in bed topography following floods and associated large sediment inputs. Variations in channel bed elevations, distributions of residual water depths, percentage of channel length occupied by riffles, and a spatial autocorrelation coefficient (Moran's I) were used to quantify changes in morphological diversity and spatial structure in Redwood Creek basin, northwestern California. Bed topography in Redwood Creek and its major tributaries consists primarily of a series of pools and riffles. The size, frequency and spatial distribution of the pools and riffles have changed significantly during the past 20 years. Following large floods and high sediment input in Redwood Creek and its tributaries in 1975, variation in channel bed elevations was low and the percentage of the channel length occupied by riffles was high. Over the next 20 years, variation in bed elevations increased while the length of channel occupied by riffles decreased. An index [(standard deviation of residual water depth/bankfull depth) × 100] was developed to compare variations in bed elevation over a range of stream sizes, with a higher index being indicative of greater morphological diversity. Spatial autocorrelation in the bed elevation data was apparent at both fine and coarse scales in many of the thalweg profiles and the observed spatial pattern of bed elevations was found to be related to the dominant channel material and the time since disturbance. River reaches in which forced pools dominated, and in which large woody debris and bed particles could not be easily mobilized, exhibited a random distribution of bed elevations. In contrast, in reaches where alternate bars dominated, and both wood and gravel were readily transported, regularly spaced bed topography developed at a spacing that increased with time since disturbance. This pattern of regularly spaced bed features was reversed following a 12-year flood when bed elevations became more randomly arranged.
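
    The two quantitative tools the abstract names are easy to state in code. Below is a sketch of the variability index [(standard deviation of residual water depth / bankfull depth) × 100] and a lag-1 Moran's I for a one-dimensional profile; the adjacent-point weighting is an assumption, since the paper's spatial weights are not specified here.

```python
import numpy as np

def variability_index(residual_depths, bankfull_depth):
    """Morphological diversity index: (std of residual water depth /
    bankfull depth) x 100; higher values indicate greater diversity."""
    return 100.0 * np.std(residual_depths) / bankfull_depth

def morans_i(z):
    """Moran's I for a 1-D profile with adjacent-point (lag-1) weights."""
    z = np.asarray(z, dtype=float) - np.mean(z)
    num = 2.0 * np.sum(z[:-1] * z[1:])      # both symmetric neighbor pairs
    den = np.sum(z ** 2)
    n, w_sum = len(z), 2.0 * (len(z) - 1)   # total weight of all pairs
    return (n / w_sum) * (num / den)

depths = np.abs(np.random.randn(200)) * 0.4   # illustrative residual depths (m)
print("index:", round(variability_index(depths, bankfull_depth=2.0), 1))
print("Moran's I:", round(morans_i(depths), 3))
```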

  3. 78 FR 32241 - U.S. Air Force Seeks Industry Input for National Security Space Launch Assessment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-29

    ... Security Space Launch Assessment AGENCY: Office of the Deputy Under Secretary of the Air Force for Space... that the United States Air Force, Office of the Deputy Under Secretary of the Air Force for Space.... Robert Long, 703-693-4978, Office of the Deputy Under Secretary of the Air Force for Space, 1670 Air...

  4. Sample-space-based feature extraction and class preserving projection for gene expression data.

    PubMed

    Wang, Wenjun

    2013-01-01

In order to overcome the problems of high computational complexity and serious matrix singularity in feature extraction using Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) on high-dimensional data, sample-space-based feature extraction is presented, which transforms the computation procedure of feature extraction from gene space to sample space by representing the optimal transformation vector as a weighted sum of samples. The technique is used in the implementation of PCA, LDA, and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and the experimental results on gene expression data demonstrate the effectiveness of the method.
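
    A minimal sketch of the sample-space trick for PCA, assuming the standard Gram-matrix formulation: with n samples and p >> n genes, the n × n matrix XcXcᵀ is eigendecomposed instead of the p × p covariance, and each principal axis is recovered as a weighted sum of the samples.

```python
import numpy as np

def sample_space_pca(X, n_components):
    """PCA computed in sample space: for n samples of p-dimensional gene
    expression data with n << p, eigendecompose the n x n Gram matrix
    instead of the p x p covariance; each principal axis is a weighted
    sum of the (centered) samples."""
    Xc = X - X.mean(axis=0)
    K = Xc @ Xc.T                              # n x n instead of p x p
    vals, vecs = np.linalg.eigh(K)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    axes = Xc.T @ vecs / np.sqrt(vals)         # principal axes in gene space
    return Xc @ axes                           # sample projections

X = np.random.rand(40, 5000)                   # 40 samples, 5000 genes
print(sample_space_pca(X, 3).shape)
```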

  5. Space Mission Operations Concept

    NASA Technical Reports Server (NTRS)

    Squibb, Gael F.

    1996-01-01

This paper will discuss the concept of developing a space mission operations concept; the benefits of starting this system engineering task early; the necessary inputs to the process; and the products that are generated.

  6. Effects of various experimental parameters on errors in triangulation solution of elongated object in space. [barium ion cloud

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1975-01-01

    The effects of various experimental parameters on the displacement errors in the triangulation solution of an elongated object in space due to pointing uncertainties in the lines of sight have been determined. These parameters were the number and location of observation stations, the object's location in latitude and longitude, and the spacing of the input data points on the azimuth-elevation image traces. The displacement errors due to uncertainties in the coordinates of a moving station have been determined as functions of the number and location of the stations. The effects of incorporating the input data from additional cameras at one of the stations were also investigated.

  7. Grey-box state-space identification of nonlinear mechanical vibrations

    NASA Astrophysics Data System (ADS)

    Noël, J. P.; Schoukens, J.

    2018-05-01

    The present paper deals with the identification of nonlinear mechanical vibrations. A grey-box, or semi-physical, nonlinear state-space representation is introduced, expressing the nonlinear basis functions using a limited number of measured output variables. This representation assumes that the observed nonlinearities are localised in physical space, which is a generic case in mechanics. A two-step identification procedure is derived for the grey-box model parameters, integrating nonlinear subspace initialisation and weighted least-squares optimisation. The complete procedure is applied to an electrical circuit mimicking the behaviour of a single-input, single-output (SISO) nonlinear mechanical system and to a single-input, multiple-output (SIMO) geometrically nonlinear beam structure.

  8. Thermal and orbital analysis of Earth monitoring Sun-synchronous space experiments

    NASA Technical Reports Server (NTRS)

    Killough, Brian D.

    1990-01-01

The fundamentals of an Earth monitoring Sun-synchronous orbit are presented. A Sun-synchronous Orbit Analysis Program (SOAP) was developed to calculate orbital parameters for an entire year. The output from this program provides the required input data for the TRASYS thermal radiation computer code, which in turn computes the infrared, solar and Earth albedo heat fluxes incident on a space experiment. Direct incident heat fluxes can be used as input to a generalized thermal analyzer program to size radiators and predict instrument operating temperatures. The SOAP computer code and its application to the thermal analysis methodology presented should prove useful to the thermal engineer during the design phases of Earth monitoring Sun-synchronous space experiments.
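
    One representative calculation such a Sun-synchronous orbit analysis must perform is choosing the inclination that makes the J2 nodal precession match the Sun's apparent mean motion (~360° per year). The sketch below implements that standard textbook relation for a circular orbit; it is not SOAP's actual code.

```python
import math

# Sun-synchronous inclination for a circular orbit: the J2 nodal precession
# rate  dOmega/dt = -1.5 * J2 * n * (Re/a)^2 * cos(i)  must equal ~360 deg/yr.
MU = 398600.4418          # km^3/s^2, Earth gravitational parameter
RE = 6378.137             # km, Earth equatorial radius
J2 = 1.08263e-3
OMEGA_SS = 2 * math.pi / (365.2422 * 86400)   # rad/s, required precession

def sun_sync_inclination(alt_km):
    a = RE + alt_km
    n = math.sqrt(MU / a ** 3)                # mean motion, rad/s
    cos_i = -OMEGA_SS / (1.5 * J2 * n * (RE / a) ** 2)
    return math.degrees(math.acos(cos_i))

for alt in (600, 800, 1000):
    print(f"{alt} km -> inclination {sun_sync_inclination(alt):.2f} deg")
```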

  9. Is that a belt or a snake? object attentional selection affects the early stages of visual sensory processing

    PubMed Central

    2012-01-01

Background: There is at present growing empirical evidence, deriving from different lines of ERP research, that, unlike previously observed, the earliest sensory visual response, known as the C1 component or P/N80, generated within the striate cortex, might be modulated by selective attention to visual stimulus features. Up to now, evidence of this modulation has been related to spatial location and simple features such as spatial frequency, luminance, and texture. Additionally, neurophysiological conditions such as emotion, vigilance, the reflexive or voluntary nature of input attentional selection, and workload have also been related to C1 modulations, although workload, at least, has received controversial indications. No information is available, at present, for object-based attentional selection. Methods: In this study object- and space-based attention mechanisms were conjointly investigated by presenting complex, familiar shapes of artefacts and animals, intermixed with distracters, in different tasks requiring the selection of a relevant target category within a relevant spatial location, while ignoring the other shape categories within this location and, overall, all the categories at an irrelevant location. EEG was recorded from 30 scalp electrode sites in 21 right-handed participants. Results and Conclusions: ERP findings showed that visual processing was modulated by both shape and location relevance per se, beginning separately at the latency of the early phase of a precocious negativity (60-80 ms) at mesial scalp sites consistent with the C1 component, and of a positivity at more lateral sites. The data also showed that the attentional modulation progressed conjointly at the latency of the subsequent P1 (100-120 ms) and N1 (120-180 ms), as well as later-latency components. These findings support the views that (1) V1 may be precociously modulated by direct top-down influences, and participates in object, besides simple feature, attentional selection; (2) object spatial and non-spatial feature selection might begin with an early, parallel detection of a target object in the visual field, followed by the progressive focusing of spatial attention onto the location of an actual target for its identification, somewhat in line with neural mechanisms reported in the literature as "object-based space selection", or with those proposed for visual search. PMID:22300540

  10. Portable data collection device

    DOEpatents

    French, P.D.

    1996-06-11

    The present invention provides a portable data collection device that has a variety of sensors that are interchangeable with a variety of input ports in the device. The various sensors include a data identification feature that provides information to the device regarding the type of physical data produced by each sensor and therefore the type of sensor itself. The data identification feature enables the device to locate the input port where the sensor is connected and to self-adjust when a sensor is removed or replaced. The device is able to collect physical data, whether or not the data are a function of time. 7 figs.

  11. Portable data collection device

    DOEpatents

    French, Patrick D.

    1996-01-01

    The present invention provides a portable data collection device that has a variety of sensors that are interchangeable with a variety of input ports in the device. The various sensors include a data identification feature that provides information to the device regarding the type of physical data produced by each sensor and therefore the type of sensor itself. The data identification feature enables the device to locate the input port where the sensor is connected and to self-adjust when a sensor is removed or replaced. The device is able to collect physical data, whether or not the data are a function of time.

  12. Local Feature Selection for Data Classification.

    PubMed

    Armanfard, Narges; Reilly, James P; Komeili, Majid

    2016-06-01

    Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
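
    The paper's linear-programming formulation is not given in the abstract, so the following is only a stand-in illustrating the locality idea with a deliberately simple substitute: regions found by k-means, each assigned its own greedily selected feature subset scored by cross-validated k-NN accuracy. The clustering, the greedy search, and all names below are illustrative assumptions, not the LFS algorithm itself.

    ```python
    # Stand-in sketch for localized feature selection: each region of the
    # sample space gets its own feature subset (greedy forward selection here,
    # in place of the paper's linear program).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.model_selection import KFold, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def local_feature_sets(X, y, n_regions=3, max_feats=3):
        regions = KMeans(n_clusters=n_regions, n_init=10,
                         random_state=0).fit_predict(X)
        per_region = {}
        for r in range(n_regions):
            Xr, yr = X[regions == r], y[regions == r]
            chosen = []
            while len(chosen) < max_feats:
                scores = {}
                for f in range(X.shape[1]):
                    if f in chosen:
                        continue
                    feats = chosen + [f]
                    scores[f] = cross_val_score(KNeighborsClassifier(3),
                                                Xr[:, feats], yr,
                                                cv=KFold(3)).mean()
                chosen.append(max(scores, key=scores.get))
            per_region[r] = chosen
        return regions, per_region

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    regions, feats = local_feature_sets(X, y)
    print(feats)  # a distinct feature subset per region of the sample space
    ```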

  13. Using neural networks to assist in OPAD data analysis

    NASA Technical Reports Server (NTRS)

    Whitaker, Kevin W.

    1993-01-01

    The space shuttle main engine (SSME) became the subject of plume emission spectroscopy in 1986 when researchers from NASA-Marshall Space Flight Center (MSFC), Arnold Engineering Development Center (AEDC), and Rocketdyne went to the SSME test stands at the NASA-Stennis Space Center and at Rocketdyne's Santa Susana Field Laboratory to optically observe the plume. Since then, plume spectral acquisitions have recorded many nominal tests and the qualitative spectral features of the SSME plume are now well established. Significant discoveries made with both wide-band and narrow-band plume emission spectroscopy systems led MSFC to promote the Optical Plume Anomaly Detection (OPAD) program with a goal of instrumenting all SSME test stands with customized spectrometer systems. A prototype OPAD system is now installed on the SSME Technology Test Bed (TTB) at MSFC. The OPAD system instrumentation consists of a broad-band, optical multiple-channel analyzer (OMA) and a narrow-band device called a polychrometer. The OMA is a high-resolution (1.5-2.0 Angstroms) 'super-spectrometer' covering the near-ultraviolet to near-infrared waveband (2800-7400 Angstroms), providing two scans per second. The polychrometer consists of sixteen narrow-band radiometers: fourteen monitoring discrete wavelengths of health and condition monitoring elements and two dedicated to monitoring background emissions. All sixteen channels are capable of providing 500 samples per second. To date, the prototype OPAD system has been used during 43 SSME firings on the TTB, collecting well over 250 megabytes of plume spectral data. One goal of OPAD data analysis is to determine the elemental composition of the plume, iteratively with the help of a computer code, SPECTRA4, developed at AEDC. Experience has shown that iteration with SPECTRA4 is an incredibly labor-intensive task and not one to be performed by hand. What is really needed is the 'inverse' of SPECTRA4, but the mathematical model for this inverse mapping is tenuous at best. However, the robustness of SPECTRA4 run in the 'forward' direction means that accurate input/output mappings can be obtained. If the mappings were inverted (i.e., input becomes output and output becomes input) then an 'inverse' of SPECTRA4 would be at hand, but the 'model' would be specific to the data utilized and would in no way be general. Building a generalized model based upon known input/output mappings while ignoring the details of the governing physical model is possible through the use of a neural network. The research investigation described involves the development of a neural network to provide a generalized 'inverse' of SPECTRA4. The objectives of the research were to design an appropriate neural network architecture, train the network, and then evaluate its performance.
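
    A minimal sketch of the approach described above: generate input/output pairs from the forward code, swap them, and train a network as a generalized "inverse". The function spectra4_forward here is a hypothetical stand-in (a parameterized Gaussian "spectrum"), not the real AEDC code, and the network architecture is an assumption.

    ```python
    # Hedged sketch: learn an "inverse" of a forward simulation code by
    # swapping its (input, output) pairs and fitting a neural network.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def spectra4_forward(params):
        """Stand-in for SPECTRA4: maps plume parameters to a spectrum."""
        grid = np.linspace(0.0, 1.0, 64)
        return params[0] * np.exp(-((grid - params[1]) ** 2) / (2 * params[2] ** 2))

    rng = np.random.default_rng(1)
    P = rng.uniform([0.5, 0.2, 0.05], [2.0, 0.8, 0.2], size=(2000, 3))
    S = np.array([spectra4_forward(p) for p in P])

    inverse = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=0)
    inverse.fit(S, P)  # spectra become the inputs, parameters the outputs
    print(inverse.predict(S[:1]), P[0])  # recovered vs. true parameters
    ```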

  14. Forebrain pathway for auditory space processing in the barn owl.

    PubMed

    Cohen, Y E; Miller, G L; Knudsen, E I

    1998-02-01

    The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we took a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Ying; Li, Hong; Bridges, Denzel

    We report that the continuing miniaturization of microelectronics is pushing advanced manufacturing into nanomanufacturing. Nanojoining is a bottom-up assembly technique that enables functional nanodevice fabrication with dissimilar nanoscopic building blocks and/or molecular components. Various conventional joining techniques have been modified and re-invented for joining nanomaterials. Our review surveys recent progress in nanojoining methods, as compared to conventional joining processes. Examples of nanojoining are given and classified by the dimensionality of the joining materials. At each classification, nanojoining is reviewed and discussed according to materials specialties, low dimensional processing features, energy input mechanisms and potential applications. The preparation of new intermetallic materials by reactive nanoscale multilayer foils based on self-propagating high-temperature synthesis is highlighted. This review will provide insight into nanojoining fundamentals and innovative applications in power electronics packaging, plasmonic devices, nanosoldering for printable electronics, 3D printing and space manufacturing.

  16. Air Evaporation closed cycle water recovery technology - Advanced energy saving designs

    NASA Technical Reports Server (NTRS)

    Morasko, Gwyndolyn; Putnam, David F.; Bagdigian, Robert

    1986-01-01

    The Air Evaporation water recovery system is a viable candidate for Space Station application. A four-man Air Evaporation open cycle system has been successfully demonstrated for waste water recovery in manned chamber tests. The design improvements described in this paper greatly enhance the system operation and energy efficiency of the air evaporation process. A state-of-the-art wick feed design which results in reduced logistics requirements is presented. In addition, several design concepts that incorporate regenerative features to minimize the energy input to the system are discussed. These include a recuperative heat exchanger, a heat pump for energy transfer to the air heater, and solar collectors for evaporative heat. The addition of the energy recovery devices will result in an energy reduction of more than 80 percent over the systems used in earlier manned chamber tests.

  17. Auroral photometry from the atmosphere Explorer satellite

    NASA Technical Reports Server (NTRS)

    Rees, M. H.; Abreu, V. J.

    1984-01-01

    Attention is given to the ability of remote sensing from space to yield quantitative auroral and ionospheric parameters, in view of the auroral measurements made during two passes of the Atmosphere Explorer C satellite over the Poker Flat Optical Observatory and the Chatanika Radar Facility. The emission rate of the N2(+) 4278 A band computed from intensity measurements of energetic auroral electrons has tracked the same spectral feature measured remotely from the satellite over two decades of intensity, providing a stringent test for the measurement of atmospheric scattering effects. It also verifies the absolute intensity with respect to ground-based photometric measurements. In situ satellite measurements of ion densities and ground-based electron density profile radar measurements provide a consistent picture of the ionospheric response to auroral input, while also predicting the observed optical emission rate.

  18. Automated Glacier Surface Velocity using Multi-Image/Multi-Chip (MIMC) Feature Tracking

    NASA Astrophysics Data System (ADS)

    Ahn, Y.; Howat, I. M.

    2009-12-01

    Remote sensing from space has enabled effective monitoring of remote and inhospitable polar regions. Glacier velocity, and its variation in time, is one of the most important parameters needed to understand glacier dynamics, glacier mass balance and contribution to sea level rise. Regular measurements of ice velocity are possible from large and accessible satellite data set archives, such as ASTER and LANDSAT-7. Among satellite imagery, optical imagery (i.e. passive, visible to near-infrared band sensors) provides abundant data with optimal spatial resolution and repeat interval for tracking glacier motion at high temporal resolution. Due to the massive amounts of data, computation of ice velocity from feature tracking requires (1) a user-friendly interface, (2) minimal local/user parameter inputs and (3) results that need minimal editing. We focus on robust feature tracking applicable to all currently available optical satellite imagery, that is, ASTER, SPOT, LANDSAT, etc. We introduce the MIMC (multiple images/multiple chip sizes) matching approach, which does not involve any user-defined local/empirical parameters except an approximate average glacier speed. We also introduce a method for extracting velocity from LANDSAT-7 SLC-off data, which has 22 percent of scene data missing in slanted strips due to failure of the scan line corrector. We apply our approach to major outlet glaciers in west/east Greenland and assess our MIMC feature tracking technique by comparison with conventional correlation matching and other methods (e.g. InSAR).
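
    The record does not detail the MIMC matcher itself, so the sketch below shows only the basic building block such feature tracking rests on: normalized cross-correlation of a reference chip against a search window, yielding a displacement in pixels. Velocity then follows as displacement times pixel size divided by the image-pair time separation. The stacking of multiple images and chip sizes is not reproduced here.

    ```python
    # Hedged sketch: normalized cross-correlation (NCC) chip matching, the
    # core operation underlying optical-image glacier velocity tracking.
    import numpy as np

    def ncc_displacement(ref_chip, search_win):
        """Return (dy, dx) offset of the best NCC match inside the window."""
        ch, cw = ref_chip.shape
        sh, sw = search_win.shape
        ref = (ref_chip - ref_chip.mean()) / (ref_chip.std() + 1e-12)
        best, best_score = (0, 0), -np.inf
        for i in range(sh - ch + 1):
            for j in range(sw - cw + 1):
                sub = search_win[i:i + ch, j:j + cw]
                sub = (sub - sub.mean()) / (sub.std() + 1e-12)
                score = (ref * sub).mean()          # correlation coefficient
                if score > best_score:
                    best_score, best = score, (i, j)
        return best

    rng = np.random.default_rng(0)
    scene = rng.random((64, 64))
    chip = scene[20:36, 20:36]                      # feature in image 1
    window = np.roll(scene, (3, 5), axis=(0, 1))[12:52, 12:52]  # shifted image 2
    print(ncc_displacement(chip, window))           # recovered pixel offset
    ```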

  19. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM's behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT's ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
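
    A minimal sketch of the MOAT idea as commonly described: random one-at-a-time trajectories through the unit hypercube, with each parameter's elementary effects summarized by mu* (mean absolute effect) and sigma (spread, flagging nonlinearity or interactions). The model below is a toy function, since CAM obviously cannot be invoked this way, and the step size and trajectory count are assumptions. The total cost is n_traj * (n_params + 1) model runs, i.e. linear in N.

    ```python
    # Hedged sketch of Morris one-at-a-time (MOAT) screening.
    import numpy as np

    def moat_screen(model, n_params, n_traj=20, delta=0.5, seed=0):
        rng = np.random.default_rng(seed)
        effects = [[] for _ in range(n_params)]
        for _ in range(n_traj):
            x = rng.uniform(0, 1 - delta, size=n_params)  # unit-hypercube inputs
            f0 = model(x)
            for i in rng.permutation(n_params):           # one step per parameter
                x_new = x.copy()
                x_new[i] += delta
                f1 = model(x_new)
                effects[i].append((f1 - f0) / delta)      # elementary effect
                x, f0 = x_new, f1
        mu_star = [np.mean(np.abs(e)) for e in effects]
        sigma = [np.std(e) for e in effects]
        return mu_star, sigma  # large sigma flags nonlinearity/interactions

    # toy model with a nonlinear interaction between parameters 0 and 1
    mu_star, sigma = moat_screen(lambda x: x[0] * x[1] + 0.1 * x[2], 4)
    print(mu_star, sigma)
    ```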

  20. The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex

    PubMed Central

    Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A.; Grill-Spector, Kalanit; Rossion, Bruno

    2016-01-01

    Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. SIGNIFICANCE STATEMENT Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. PMID:27511014

  1. Generalized compliant motion primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G. (Inventor)

    1994-01-01

    This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, all of which produce telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return spring motion input is provided by a restoration spring subsystem. The novel features of this invention include use of a single general motion primitive at a remote site to permit the shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.

  2. Automated Knowledge Discovery From Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael; DeCoste, Dennis; Mazzoni, Dominic; Scharenbroich, Lucas; Enke, Brian; Merline, William

    2007-01-01

    A computational method, SimLearn, has been devised to facilitate efficient knowledge discovery from simulators. Simulators are complex computer programs used in science and engineering to model diverse phenomena such as fluid flow, gravitational interactions, coupled mechanical systems, and nuclear, chemical, and biological processes. SimLearn uses active-learning techniques to efficiently address the "landscape characterization problem." In particular, SimLearn tries to determine which regions in "input space" lead to a given output from the simulator, where "input space" refers to an abstraction of all the variables going into the simulator, e.g., initial conditions, parameters, and interaction equations. Landscape characterization can be viewed as an attempt to invert the forward mapping of the simulator and recover the inputs that produce a particular output. Given that a single simulation run can take days or weeks to complete even on a large computing cluster, SimLearn attempts to reduce costs by reducing the number of simulations needed to effect discoveries. Unlike conventional data-mining methods that are applied to static predefined datasets, SimLearn involves an iterative process in which a most informative dataset is constructed dynamically by using the simulator as an oracle. On each iteration, the algorithm models the knowledge it has gained through previous simulation trials and then chooses which simulation trials to run next. Running these trials through the simulator produces new data in the form of input-output pairs. The overall process is embodied in an algorithm that combines support vector machines (SVMs) with active learning. SVMs use learning from examples (the examples are the input-output pairs generated by running the simulator) and a principle called maximum margin to derive predictors that generalize well to new inputs. In SimLearn, the SVM plays the role of modeling the knowledge that has been gained through previous simulation trials. Active learning is used to determine which new input points would be most informative if their output were known. The selected input points are run through the simulator to generate new information that can be used to refine the SVM. The process is then repeated. SimLearn carefully balances exploration (semi-randomly searching around the input space) versus exploitation (using the current state of knowledge to conduct a tightly focused search). During each iteration, SimLearn uses not one, but an ensemble of SVMs. Each SVM in the ensemble is characterized by different hyper-parameters that control various aspects of the learned predictor - for example, whether the predictor is constrained to be very smooth (nearby points in input space lead to similar output predictions) or whether the predictor is allowed to be "bumpy." The various SVMs will have different preferences about which input points they would like to run through the simulator next. SimLearn includes a formal mechanism for balancing the ensemble SVM preferences so that a single choice can be made for the next set of trials.
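
    A hedged sketch of the query-by-committee flavor of this loop: an ensemble of SVMs with different hyper-parameters is fit to the labeled trials, and disagreement over a pool of candidate inputs selects the next simulator runs. run_simulator is a stand-in "landscape", and the exploration/exploitation balance is simplified relative to SimLearn's actual mechanism.

    ```python
    # Hedged sketch: active learning with an ensemble of SVMs, using the
    # simulator as an oracle for the most-disputed input points.
    import numpy as np
    from sklearn.svm import SVC

    def run_simulator(X):                     # stand-in for an expensive code
        return (np.sin(3 * X[:, 0]) + X[:, 1] > 1.0).astype(int)

    rng = np.random.default_rng(0)
    X_lab = rng.uniform(-2, 2, size=(20, 2))  # initial simulation trials
    y_lab = run_simulator(X_lab)

    ensemble = [SVC(C=c, gamma=g) for c in (0.1, 1, 10) for g in ("scale", 0.5)]
    for round_ in range(5):
        for m in ensemble:
            m.fit(X_lab, y_lab)
        X_cand = rng.uniform(-2, 2, size=(500, 2))      # exploration pool
        votes = np.stack([m.predict(X_cand) for m in ensemble])
        disagreement = votes.var(axis=0)                # committee conflict
        pick = X_cand[np.argsort(disagreement)[-10:]]   # most informative points
        X_lab = np.vstack([X_lab, pick])
        y_lab = np.concatenate([y_lab, run_simulator(pick)])
    ```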

  3. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    PubMed

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a Similarity-Dissimilarity plot which can project a high dimensional space to a two dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The similarity-dissimilarity plot can reveal information about the amount of overlap of features of different classes. Separable data points of different classes, which can be classified correctly using an appropriate classifier, will also be visible on the plot. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the classifier will confuse the misclassified data points. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot. Some real life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
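
    The exact axes of the proposed plot are not specified in the abstract; the following hedged sketch realizes one natural reading of the idea: each sample is placed by its mean within-class distance versus its mean between-class distance, so class overlap and outliers become visible in two dimensions regardless of the original dimensionality. The paper's actual construction may differ.

    ```python
    # Hedged sketch: a similarity-dissimilarity style projection of a
    # high-dimensional labeled data set onto two interpretable axes.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.spatial.distance import cdist

    def similarity_dissimilarity(X, y):
        D = cdist(X, X)
        same = np.array([D[i, y == y[i]].sum() / max((y == y[i]).sum() - 1, 1)
                         for i in range(len(y))])   # mean distance, self excluded
        diff = np.array([D[i, y != y[i]].mean() for i in range(len(y))])
        return same, diff

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(1.5, 1, (50, 5))])
    y = np.repeat([0, 1], 50)
    s, d = similarity_dissimilarity(X, y)
    plt.scatter(s, d, c=y)
    plt.xlabel("similarity (mean within-class distance)")
    plt.ylabel("dissimilarity (mean between-class distance)")
    plt.show()   # points with low dissimilarity relative to similarity overlap
    ```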

  4. Evaluation of the ACEC Benchmark Suite for Real-Time Applications

    DTIC Science & Technology

    1990-07-23

    The ACEC 1.0 benchmark suite was analyzed with respect to its measuring of Ada real-time features such as tasking, memory management, input/output, scheduling ... and delay statement, Chapter 13 features, pragmas, interrupt handling, subprogram overhead, numeric computations, etc. For most of the features that ... meant for programming real-time systems. The ACEC benchmarks have been analyzed extensively with respect to their measuring of Ada real-time features

  5. Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for disturbance distribution selection and parameterization

    NASA Astrophysics Data System (ADS)

    Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark

    2018-04-01

    Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; such models express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations, combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making, it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper's focus is on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative than dip vector sampling for planar features, and the use of a Bayesian approach to disturbance distribution parameterization is suggested. The influence of incorrect disturbance distributions is discussed, and propositions are made and evaluated on synthetic and realistic cases to address the cited issues. The distribution of the errors of the observed data (i.e., scedasticity) is shown to affect the quality of prior distributions for MCUE. Results demonstrate that the proposed workflows improve the reliability of uncertainty estimation and diminish the occurrence of artifacts.
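
    A minimal sketch of the MCUE perturbation step for planar orientation data, with a simplifying assumption: pole vectors are jittered with Gaussian noise and renormalized, whereas the paper advocates proper spherical disturbance distributions (e.g. von Mises-Fisher) with Bayesian parameterization. Each perturbed data set would feed one run of the implicit modeling engine.

    ```python
    # Hedged sketch: Monte Carlo perturbation of measured pole vectors
    # (normals to foliation planes) for uncertainty propagation.
    import numpy as np

    def perturb_poles(poles, ang_std_deg=5.0, n_draws=100, seed=0):
        """poles: (n, 3) unit normals; returns (n_draws, n, 3) perturbed sets."""
        rng = np.random.default_rng(seed)
        sigma = np.radians(ang_std_deg)
        out = []
        for _ in range(n_draws):
            noisy = poles + sigma * rng.standard_normal(poles.shape)
            noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
            out.append(noisy)
        return np.array(out)

    pole = np.array([[0.0, 0.0, 1.0]])   # pole of a horizontal plane
    sets = perturb_poles(pole)
    print(sets.shape)  # 100 alternative data sets -> 100 plausible 3-D models
    ```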

  6. Space sickness predictors suggest fluid shift involvement and possible countermeasures

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Moseley, E. C.; Charles, J. B.

    1992-01-01

    Preflight data from 64 first-time Shuttle crew members were examined retrospectively to predict space sickness severity (NONE, MILD, MODERATE, or SEVERE) by discriminant analysis. From 9 input variables relating to fluid, electrolyte, and cardiovascular status, 8 variables were chosen by discriminant analysis that correctly predicted space sickness severity with 59 pct. success by one method of cross validation on the original sample and 67 pct. by another method. The 8 variables, in order of their importance for predicting space sickness severity, are sitting systolic blood pressure, serum uric acid, calculated blood volume, serum phosphate, urine osmolality, environmental temperature at the launch site, red cell count, and serum chloride. These results suggest the presence of predisposing physiologic factors to space sickness that implicate a fluid shift etiology. Addition of a 10th input variable, hours spent in the Weightless Environment Training Facility (WETF), improved the prediction of space sickness severity to 66 pct. success by the first method of cross validation on the original sample and to 71 pct. by the second method. The data suggest that WETF training may reduce space sickness severity.
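
    A hedged sketch of this kind of retrospective analysis using scikit-learn's linear discriminant analysis with leave-one-out cross-validation. The data below are synthetic placeholders, not the crew data, and the paper's exact discriminant procedure and two validation methods may differ.

    ```python
    # Hedged sketch: discriminant analysis predicting a four-level severity
    # class from preflight variables, cross-validated leave-one-out.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((64, 8))    # 8 predictors for 64 crew members
    y = rng.integers(0, 4, size=64)     # NONE / MILD / MODERATE / SEVERE

    lda = LinearDiscriminantAnalysis()
    acc = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()
    print(f"cross-validated prediction success: {acc:.0%}")
    ```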

  7. Space Radiation and Manned Mission: Interface Between Physics and Biology

    NASA Astrophysics Data System (ADS)

    Hei, Tom

    2012-07-01

    The natural radiation environment in space consists of a mixed field of high energy protons, heavy ions, electrons and alpha particles. Travel to the International Space Station and any planned establishment of satellite colonies elsewhere in the solar system imply radiation exposure to the crew and are a major concern to space agencies. With shielding, the radiation exposure level in manned space missions is likely to be chronic, low dose irradiation. Traditionally, our knowledge of the biological effects of cosmic radiation in deep space is almost exclusively derived from ground-based accelerator experiments with heavy ions in animal or in vitro models. Radiobiological effects of low doses of ionizing radiation are subject to modulation by various parameters including bystander effects, adaptive response, genomic instability and the genetic susceptibility of the exposed individuals. Radiation dosimetry and modeling will provide confirmatory input in areas where data are difficult to acquire experimentally. However, modeling is only as good as the quality of its input data. This lecture will discuss the interdependent nature of physics and biology in assessing the radiobiological response to space radiation.

  8. Neural correlates of processing facial identity based on features versus their spacing.

    PubMed

    Maurer, D; O'Craven, K M; Le Grand, R; Mondloch, C J; Springer, M V; Lewis, T L; Grady, C L

    2007-04-08

    Adults' expertise in recognizing facial identity involves encoding subtle differences among faces in the shape of individual facial features (featural processing) and in the spacing among features (a type of configural processing called sensitivity to second-order relations). We used fMRI to investigate the neural mechanisms that differentiate these two types of processing. Participants made same/different judgments about pairs of faces that differed only in the shape of the eyes and mouth, with minimal differences in spacing (featural blocks), or pairs of faces that had identical features but differed in the positions of those features (spacing blocks). From a localizer scan with faces, objects, and houses, we identified regions with comparatively more activity for faces, including the fusiform face area (FFA) in the right fusiform gyrus, other extrastriate regions, and prefrontal cortices. Contrasts between the featural and spacing conditions revealed distributed patterns of activity differentiating the two conditions. A region of the right fusiform gyrus (near but not overlapping the localized FFA) showed greater activity during the spacing task, along with multiple areas of right frontal cortex, whereas left prefrontal activity increased for featural processing. These patterns of activity were not related to differences in performance between the two tasks. The results indicate that the processing of facial features is distinct from the processing of second-order relations in faces, and that these functions are mediated by separate and lateralized networks involving the right fusiform gyrus, although the FFA as defined from a localizer scan is not differentially involved.

  9. Application and optimization of input parameter spaces in mass flow modelling: a case study with r.randomwalk and r.ranger

    NASA Astrophysics Data System (ADS)

    Krenn, Julia; Zangerl, Christian; Mergili, Martin

    2017-04-01

    r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces represent a possibility to withdraw from discrete input values which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III) which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the workflow, we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
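
    A hedged sketch of the nested subrange search described above: each parameter's total range is split into subranges, every subrange combination is run, and the resulting impact-indicator pattern is scored by AUROC against an observed deposit. run_mass_flow is a placeholder for the r.randomwalk back-calculation, and the FoC part of the evaluation is omitted.

    ```python
    # Hedged sketch: scoring parameter subrange combinations by AUROC.
    import itertools
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def subranges(lo, hi, n):
        edges = np.linspace(lo, hi, n + 1)
        return list(zip(edges[:-1], edges[1:]))

    def run_mass_flow(mu_range, md_range, observed):
        """Stand-in: per-pixel impact indicator for one parameter subspace."""
        rng = np.random.default_rng(int(1000 * mu_range[0] + md_range[0]))
        return np.clip(observed * 0.7 + 0.3 * rng.random(observed.shape), 0, 1)

    observed = (np.random.default_rng(0).random(500) > 0.8).astype(int)
    results = []
    for mu_r, md_r in itertools.product(subranges(0.05, 0.3, 3),    # basal friction
                                        subranges(100, 1000, 3)):   # mass-to-drag
        iii = run_mass_flow(mu_r, md_r, observed)
        results.append((roc_auc_score(observed, iii), mu_r, md_r))
    print(max(results))  # best-performing subrange combination
    ```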

  10. An algorithm to generate input data from meteorological and space shuttle observations to validate a CH4-CO model

    NASA Technical Reports Server (NTRS)

    Peters, L. K.; Yamanis, J.

    1981-01-01

    Objective procedures to analyze data from meteorological and space shuttle observations to validate a three dimensional model were investigated. The transport and chemistry of carbon monoxide and methane in the troposphere were studied. Four aspects were examined: (1) detailed evaluation of the variational calculus procedure, with the equation of continuity as a strong constraint, for adjustment of global tropospheric wind fields; (2) reduction of the National Meteorological Center (NMC) data tapes for data input to the OSTA-1/MAPS Experiment; (3) interpolation of the NMC Data for input to the CH4-CO model; and (4) temporal and spatial interpolation procedures of the CO measurements from the OSTA-1/MAPS Experiment to generate usable contours of the data.

  11. Stability testing and analysis of a PMAD dc test bed for the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Button, Robert M.; Brush, Andrew S.

    1992-01-01

    The Power Management and Distribution (PMAD) dc Test Bed at the NASA Lewis Research Center is introduced. Its usefulness to the Space Station Freedom Electrical Power (EPS) development and design are discussed in context of verifying system stability. Stability criteria developed by Middlebrook and Cuk are discussed as they apply to constant power dc to dc converters exhibiting negative input impedance at low frequencies. The utility-type Secondary Subsystem is presented and each component is described. The instrumentation used to measure input and output impedance under load is defined. Test results obtained from input and output impedance measurements of test bed components are presented. It is shown that the PMAD dc Test Bed Secondary Subsystem meets the Middlebrook stability criterion for certain loading conditions.

  12. Stability Testing and Analysis of a PMAD DC Test Bed for the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Button, Robert M.; Brush, Andrew S.

    1992-01-01

    The Power Management and Distribution (PMAD) DC Test Bed at the NASA Lewis Research Center is introduced. Its usefulness to the Space Station Freedom Electrical Power (EPS) development and design are discussed in context of verifying system stability. Stability criteria developed by Middlebrook and Cuk are discussed as they apply to constant power DC to DC converters exhibiting negative input impedance at low frequencies. The utility-type Secondary Subsystem is presented and each component is described. The instrumentation used to measure input and output impedance under load is defined. Test results obtained from input and output impedance measurements of test bed components are presented. It is shown that the PMAD DC Test Bed Secondary Subsystem meets the Middlebrook stability criterion for certain loading conditions.
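
    A hedged sketch of the kind of check the Middlebrook criterion implies, for the records above: a regulated constant-power load presents a negative incremental input impedance whose magnitude is V^2/P at low frequency, and the design is comfortably stable when the source output impedance stays well below that magnitude. The impedance model and all values below are illustrative assumptions, not PMAD test-bed measurements.

    ```python
    # Hedged sketch: impedance-ratio (Middlebrook-style) stability margin.
    import numpy as np

    V_bus, P_load = 120.0, 1500.0          # bus voltage (V) and load power (W)
    Z_in_mag = V_bus**2 / P_load           # |Z_in| of the constant-power load, ohms

    f = np.logspace(0, 4, 200)             # 1 Hz to 10 kHz
    L, R = 50e-6, 0.05                     # assumed source output inductance and resistance
    Z_source = R + 1j * 2 * np.pi * f * L  # simple R-L output impedance model

    margin_db = 20 * np.log10(Z_in_mag / np.abs(Z_source))
    print(f"worst-case impedance margin: {margin_db.min():.1f} dB")
    # a healthy positive margin at all frequencies indicates the criterion is met
    ```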

  13. Method and apparatus for automatic control of a humanoid robot

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object-level, end-effector-level, and/or joint-space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object-level, end-effector-level, and/or joint-space-level control of the robot, and allows for a functional-based GUI to simplify implementation of a myriad of operating modes.
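
    A minimal sketch of an impedance-based control law of the general kind referenced above: a virtual spring-damper plus a bias force at the end-effector or object level, mapped to joint torques through the Jacobian transpose. Gains, dimensions and the desired-force input are illustrative assumptions, not values from the patent.

    ```python
    # Hedged sketch: impedance control mapped to joint space via J^T.
    import numpy as np

    def impedance_torques(J, x, xd, v, vd, f_des, K=200.0, D=20.0):
        """J: 6xN Jacobian; x, v: actual pose/velocity; xd, vd: desired."""
        wrench = K * (xd - x) + D * (vd - v) + f_des  # virtual spring-damper + bias force
        return J.T @ wrench                           # joint-space torques

    J = np.random.default_rng(0).standard_normal((6, 7))  # e.g. a 7-joint arm
    tau = impedance_torques(J, np.zeros(6), 0.01 * np.ones(6),
                            np.zeros(6), np.zeros(6), np.zeros(6))
    print(tau)  # torques commanding compliant motion toward the desired pose
    ```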

  14. ACOSS Three (Active Control of Space Structures). Phase I.

    DTIC Science & Technology

    1980-05-01

    their assorted pitfalls, programs such as NASTRAN, SPAR, ASTRO, etc., are nevertheless the primary tools for generating dynamical models of ... proofs and additional details, see Ref [*]. Consider the system described in state-space form by: Dynamics: x' = Fx + Gu; Sensors: y = Hx ... x' = (F + GCH)x (1) ... input u and output y: x' = Fx + Gu (6), y = Hx + Du (7). The input-output transfer function is given by y = (H(sI - F)^(-1)G + D)u (8), or y(s)/u(s) = N(s)/A(s) ...

  15. Adaptive non-predictor control of lower triangular uncertain nonlinear systems with an unknown time-varying delay in the input

    NASA Astrophysics Data System (ADS)

    Koo, Min-Sung; Choi, Ho-Lim

    2018-01-01

    In this paper, we consider a control problem for a class of uncertain nonlinear systems in which there exists an unknown time-varying delay in the input and lower triangular nonlinearities. Usually, in the existing results, input delays have been coupled with feedforward (or upper triangular) nonlinearities; in other words, the combination of lower triangular nonlinearities and input delay has been rare. Motivated by the existing controller for input-delayed chain of integrators with nonlinearity, we show that the control of input-delayed nonlinear systems with two particular types of lower triangular nonlinearities can be done. As a control solution, we propose a newly designed feedback controller whose main features are its dynamic gain and non-predictor approach. Three examples are given for illustration.

  16. Fuzzy Neuron: Method and Hardware Realization

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.

    2014-01-01

    This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
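
    A hedged sketch of the described estimator for the single-input case: triangular fuzzy membership functions span the input axis, a linear combiner weights the memberships, and the coefficients adapt online with a least-mean-squares rule. The hardware realization and the exact learning algorithm of the innovation are not reproduced; the membership shape and learning rate here are assumptions.

    ```python
    # Hedged sketch: fuzzy-membership estimator with online LMS learning.
    import numpy as np

    def memberships(x, centers, width):
        """Triangular membership of scalar x in each fuzzy set."""
        return np.maximum(0.0, 1.0 - np.abs(x - centers) / width)

    centers = np.linspace(-1, 1, 7)   # fuzzy sets spanning the input space
    width = centers[1] - centers[0]
    w = np.zeros_like(centers)        # linear-combiner coefficients
    eta = 0.2                         # LMS learning rate

    rng = np.random.default_rng(0)
    for _ in range(5000):             # online learning from observed I/O pairs
        x = rng.uniform(-1, 1)
        target = np.sin(np.pi * x)    # the unknown system being estimated
        phi = memberships(x, centers, width)
        y = w @ phi
        w += eta * (target - y) * phi # LMS coefficient update

    print(w @ memberships(0.5, centers, width), np.sin(np.pi * 0.5))
    ```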

  17. User's manual for master: Modeling of aerodynamic surfaces by 3-dimensional explicit representation. [input to three dimensional computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gibson, S. G.

    1983-01-01

    A system of computer programs was developed to model general three dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface/surface intersection curves. Input and output data formats are described; detailed suggestions are given for user input. Instructions for execution are given, and examples are shown.

  18. Case Assignment in Typically Developing English-Speaking Children: A Paired Priming Study

    ERIC Educational Resources Information Center

    Wisman Weil, Lisa Marie

    2013-01-01

    This study utilized a paired priming paradigm to examine the influence of input features on case assignment in typically developing English-speaking children. The Input Ambiguity Hypothesis (Pelham, 2011) was experimentally tested to help explain why children produce subject pronoun case errors. Analyses of third singular "-s" marking on…

  19. Input-Based Grammar Pedagogy: A Comparison of Two Possibilities

    ERIC Educational Resources Information Center

    Marsden, Emma

    2005-01-01

    This article presents arguments for using listening and reading activities as an option for techniques in grammar pedagogy. It describes two possible approaches: Processing Instruction (PI) and Enriched Input (EI), and examples of their key features are included in the appendices. The article goes on to report on a classroom based quasi-experiment…

  20. Electrosensory Midbrain Neurons Display Feature Invariant Responses to Natural Communication Stimuli.

    PubMed

    Aumentado-Armstrong, Tristan; Metzen, Michael G; Sproule, Michael K J; Chacron, Maurice J

    2015-10-01

    Neurons that respond selectively but in an invariant manner to a given feature of natural stimuli have been observed across species and systems. Such responses emerge in higher brain areas, thereby suggesting that they occur by integrating afferent input. However, the mechanisms by which such integration occurs are poorly understood. Here we show that midbrain electrosensory neurons can respond selectively and in an invariant manner to heterogeneity in behaviorally relevant stimulus waveforms. Such invariant responses were not seen in hindbrain electrosensory neurons providing afferent input to these midbrain neurons, suggesting that response invariance results from nonlinear integration of such input. To test this hypothesis, we built a model based on the Hodgkin-Huxley formalism that received realistic afferent input. We found that multiple combinations of parameter values could give rise to invariant responses matching those seen experimentally. Our model thus shows that there are multiple solutions towards achieving invariant responses and reveals how subthreshold membrane conductances help promote robust and invariant firing in response to heterogeneous stimulus waveforms associated with behaviorally relevant stimuli. We discuss the implications of our findings for the electrosensory and other systems.
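
    A hedged sketch of the modeling strategy described above: a classic Hodgkin-Huxley point neuron (standard textbook parameters, not the paper's fitted ones) driven by a noisy stand-in for afferent input, with different maximal-conductance combinations screened for similar spike output, illustrating that multiple parameter sets can yield comparable responses.

    ```python
    # Hedged sketch: Hodgkin-Huxley neuron under noisy drive; screen
    # conductance combinations for near-identical firing.
    import numpy as np

    def hh_spike_count(I_afferent, g_na=120.0, g_k=36.0, g_l=0.3, dt=0.01):
        e_na, e_k, e_l, c = 50.0, -77.0, -54.387, 1.0
        v, m, h, n = -65.0, 0.05, 0.6, 0.32
        spikes, above = 0, False
        for I in I_afferent:
            am = 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
            bm = 4.0 * np.exp(-(v + 65) / 18)
            ah = 0.07 * np.exp(-(v + 65) / 20)
            bh = 1.0 / (1 + np.exp(-(v + 35) / 10))
            an = 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
            bn = 0.125 * np.exp(-(v + 65) / 80)
            m += dt * (am * (1 - m) - bm * m)
            h += dt * (ah * (1 - h) - bh * h)
            n += dt * (an * (1 - n) - bn * n)
            i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                     + g_l * (v - e_l))
            v += dt * (I - i_ion) / c
            if v > 0 and not above:      # count upward threshold crossings
                spikes += 1
            above = v > 0
        return spikes

    rng = np.random.default_rng(0)
    I = 10.0 + 3.0 * rng.standard_normal(100000)  # noisy drive, 1 s at dt = 0.01 ms
    for g_na, g_k in [(120, 36), (110, 33), (130, 40)]:
        print(g_na, g_k, hh_spike_count(I, g_na, g_k))
    ```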

  1. Computational intelligence models to predict porosity of tablets using minimum features

    PubMed Central

    Khalid, Mohammad Hassan; Kazemi, Pezhman; Perez-Gandarillas, Lucia; Michrafy, Abderrahim; Szlęk, Jakub; Jachowicz, Renata; Mendyk, Aleksander

    2017-01-01

    The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding the mentioned aspects will allow tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created by roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data were composed of proportion of microcrystalline cellulose (MCC) (in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] =1%) and symbolic regression (NRMSE =4%) as the best-performing methods, also exhibiting reliable predictive behavior when presented with a challenging external validation data set (best achieved symbolic regression: NRMSE =3%). Symbolic regression demonstrates the transition from the black box modeling paradigm to more transparent predictive models. Predictive performance and feature selection behavior of CI models hint at the most important variables within this factor space. PMID:28138223

  2. Computational intelligence models to predict porosity of tablets using minimum features.

    PubMed

    Khalid, Mohammad Hassan; Kazemi, Pezhman; Perez-Gandarillas, Lucia; Michrafy, Abderrahim; Szlęk, Jakub; Jachowicz, Renata; Mendyk, Aleksander

    2017-01-01

    The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding the mentioned aspects will allow tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created by roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data were composed of proportion of microcrystalline cellulose (MCC) (in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] =1%) and symbolic regression (NRMSE =4%) as the best-performing methods, also exhibiting reliable predictive behavior when presented with a challenging external validation data set (best achieved symbolic regression: NRMSE =3%). Symbolic regression demonstrates the transition from the black box modeling paradigm to more transparent predictive models. Predictive performance and feature selection behavior of CI models hint at the most important variables within this factor space.

  3. Statistical Methods in AI: Rare Event Learning Using Associative Rules and Higher-Order Statistics

    NASA Astrophysics Data System (ADS)

    Iyer, V.; Shetty, S.; Iyengar, S. S.

    2015-07-01

    Rare event learning has seen little active research until recently, due to the unavailability of algorithms which deal with big samples. This research addresses spatio-temporal streams from multi-resolution sensors to find actionable items from the perspective of real-time algorithms. The computing framework is independent of the number of input samples, the application domain, and whether the streams are labelled or label-less. A sampling overlap algorithm such as Brooks-Iyengar is used for dealing with noisy sensor streams. We extend the existing noise pre-processing algorithms using Data-Cleaning trees. Pre-processing using an ensemble of trees with bagging and multi-target regression showed robustness to random noise and missing data. As spatio-temporal streams are highly statistically correlated, we prove, using Hoeffding bounds, that a temporal window based sampling from sensor data streams converges after n samples; this can be used for fast prediction of new samples in real time. The Data-Cleaning tree model uses a nonparametric node splitting technique which can be learned iteratively and which scales linearly in memory consumption for any size of input stream. The improved task based ensemble extraction is compared with non-linear computation models using various SVM kernels for speed and accuracy. We show, using empirical datasets, that the explicit rule learning computation is linear in time and dependent only on the number of leafs present in the tree ensemble. The use of unpruned trees (t) in our proposed ensemble always yields a minimum number (m) of leafs, keeping pre-processing computation to n × t log m, compared to N² for the Gram matrix. We also show that the task based feature induction yields higher Quality of Data (QoD) in the feature space compared to kernel methods using the Gram matrix.
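
    For the Hoeffding-bound step, the standard inequality for variables bounded in [0, 1] gives the window size directly; the sketch below computes it, with the caveat that correlation in real streams weakens the i.i.d. guarantee the bound assumes.

    ```python
    # Hedged sketch: Hoeffding sample-size bound. For a mean over n i.i.d.
    # samples bounded in [0, 1], n >= ln(2/delta) / (2 * eps**2) guarantees
    # the estimate is within eps of the true mean with probability 1 - delta.
    import math

    def hoeffding_samples(eps, delta):
        return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

    print(hoeffding_samples(0.05, 0.01))  # ~1060 samples for 5% error, 99% confidence
    ```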

  4. A comparison of supervised classification methods for the prediction of substrate type using multibeam acoustic and legacy grain-size data.

    PubMed

    Stephens, David; Diesing, Markus

    2014-01-01

    Detailed seabed substrate maps are increasingly in demand for effective planning and management of marine ecosystems and resources. It has become common to use remotely sensed multibeam echosounder data in the form of bathymetry and acoustic backscatter in conjunction with ground-truth sampling data to inform the mapping of seabed substrates. Whilst, until recently, such data sets have typically been classified by expert interpretation, it is now obvious that more objective, faster and repeatable methods of seabed classification are required. This study compares the performances of a range of supervised classification techniques for predicting substrate type from multibeam echosounder data. The study area is located in the North Sea, off the north-east coast of England. A total of 258 ground-truth samples were classified into four substrate classes. Multibeam bathymetry and backscatter data, and a range of secondary features derived from these datasets, were used in this study. Six supervised classification techniques were tested: Classification Trees, Support Vector Machines, k-Nearest Neighbour, Neural Networks, Random Forest and Naive Bayes. Each classifier was trained multiple times using different input features, including (i) the two primary features of bathymetry and backscatter, (ii) a subset of the features chosen by a feature selection process and (iii) all of the input features. The predictive performances of the models were validated using a separate test set of ground-truth samples. The statistical significance of model performances relative to a simple baseline model (Nearest Neighbour predictions on bathymetry and backscatter) was tested to assess the benefits of using more sophisticated approaches. The best performing models were tree-based methods and Naive Bayes, which achieved accuracies of around 0.8 and kappa coefficients of up to 0.5 on the test set. The models that used all input features did not generally perform well, highlighting the need for some means of feature selection.
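
    A hedged sketch of the comparison protocol with synthetic stand-ins for the multibeam features; the real study used 258 ground-truth samples, a held-out test set, and six classifiers, of which four are shown here, scored by accuracy and the kappa coefficient as in the paper.

    ```python
    # Hedged sketch: comparing supervised classifiers on a four-class problem.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=258, n_features=8, n_informative=5,
                               n_classes=4, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)

    models = {"RF": RandomForestClassifier(random_state=0), "NB": GaussianNB(),
              "kNN": KNeighborsClassifier(), "SVM": SVC()}
    for name, m in models.items():
        pred = m.fit(X_tr, y_tr).predict(X_te)
        print(name, accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))
    ```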

  5. DOE-2 sample run book: Version 2.1E

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkelmann, F.C.; Birdsall, B.E.; Buhl, W.F.

    1993-11-01

    The DOE-2 Sample Run Book shows inputs and outputs for a variety of building and system types. The samples start with a simple structure and continue to a high-rise office building, a medical building, three small office buildings, a bar/lounge, a single-family residence, a small office building with daylighting, a single-family residence with an attached sunspace, a "parameterized" building using input macros, and a metric input/output example. All of the samples use Chicago TRY weather. The main purpose of the Sample Run Book is instructional. It shows the relationship of LOADS-SYSTEMS-PLANT-ECONOMICS inputs, displays various input styles, and illustrates many of the basic and advanced features of the program. Many of the sample runs are preceded by a sketch of the building showing its general appearance and the zoning used in the input. In some cases we also show a 3-D rendering of the building as produced by the program DrawBDL. Descriptive material has been added as comments in the input itself. We find that a number of users have loaded these samples onto their editing systems and use them as "templates" for creating new inputs. Another way of using them would be to store various portions as files that can be read into the input using the ##include command, which is part of the Input Macro feature introduced in version DOE-2.1D. Note that the energy rate structures here are the same as in the DOE-2.1D samples, but have been rewritten using the new DOE-2.1E commands and keywords for ECONOMICS. The samples contained in this report are the same as those found on the DOE-2 release files. However, the output numbers that appear here may differ slightly from those obtained from the release files. The output on the release files can be used as a check set to compare results on your computer.

  6. Retinal Origin of Direction Selectivity in the Superior Colliculus

    PubMed Central

    Shi, Xuefeng; Barchini, Jad; Ledesma, Hector Acaron; Koren, David; Jin, Yanjiao; Liu, Xiaorong; Wei, Wei; Cang, Jianhua

    2017-01-01

    Detecting visual features in the environment such as motion direction is crucial for survival. The circuit mechanisms that give rise to direction selectivity in a major visual center, the superior colliculus (SC), are entirely unknown. Here, we optogenetically isolate the retinal inputs that individual direction-selective SC neurons receive and find that they are already selective as a result of precisely converging inputs from similarly-tuned retinal ganglion cells. The direction selective retinal input is linearly amplified by the intracollicular circuits without changing its preferred direction or level of selectivity. Finally, using 2-photon calcium imaging, we show that SC direction selectivity is dramatically reduced in transgenic mice that have decreased retinal selectivity. Together, our studies demonstrate a retinal origin of direction selectivity in the SC, and reveal a central visual deficit as a consequence of altered feature selectivity in the retina. PMID:28192394

  7. Combining convolutional neural networks and Hough Transform for classification of images containing lines

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy

    2017-03-01

    In this paper, we propose an expansion of convolutional neural network (CNN) input features based on the Hough Transform. We perform morphological contrasting of the source image followed by the Hough Transform, and then use the result as input for some of the convolutional filters. Thus, the CNN's computational complexity and number of units are not affected; morphological contrasting and the Hough Transform are the only additional computational expenses of the introduced input-feature expansion. The proposed approach is demonstrated on a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset of symbols taken from Russian passports. Our approach yielded a noticeable accuracy improvement at little computational cost, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, like pressure ridge analysis and classification.
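
    A minimal sketch of the general idea follows: compute a Hough accumulator from a contrasted image and stack it with the image as an extra CNN input channel. The morphological step, log scaling and resizing are assumptions for illustration, not the authors' exact pipeline.

      import numpy as np
      from skimage import data, filters, morphology, transform

      img = data.camera() / 255.0                                    # example grayscale image
      contrasted = morphology.white_tophat(img, morphology.disk(3))  # crude morphological contrasting
      edges = contrasted > filters.threshold_otsu(contrasted)
      hspace, angles, dists = transform.hough_line(edges)            # Hough accumulator
      hough_chan = transform.resize(np.log1p(hspace), img.shape)     # match image size
      x = np.stack([img, hough_chan], axis=0)                        # (channels, H, W) CNN input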

  8. Space Shuttle astrodynamical constants

    NASA Technical Reports Server (NTRS)

    Cockrell, B. F.; Williamson, B.

    1978-01-01

    Basic space shuttle astrodynamic constants are reported for use in mission planning and construction of ground and onboard software input loads. The data included here are provided to facilitate the use of consistent numerical values throughout the project.

  9. Input and Intake in Language Acquisition

    ERIC Educational Resources Information Center

    Gagliardi, Ann C.

    2012-01-01

    This dissertation presents an approach for a productive way forward in the study of language acquisition, sealing the rift between claims of an innate linguistic hypothesis space and powerful domain general statistical inference. This approach breaks language acquisition into its component parts, distinguishing the input in the environment from…

  10. ERGONOMICS ABSTRACTS 48983-49619.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    The literature of ergonomics, or biotechnology, is classified into 15 areas--methods, systems of men and machines, visual and auditory and other inputs and processes, input channels, body measurements, design of controls and integration with displays, layout of panels and consoles, design of work space, clothing and personal equipment, special…

  11. The Next Generation of Ground Operations Command and Control; Scripting in C# and Visual Basic

    NASA Technical Reports Server (NTRS)

    Ritter, George; Pedoto, Ramon

    2010-01-01

    Scripting languages have become a common method for implementing command and control solutions in space ground operations. The Systems Test and Operations Language (STOL), the Huntsville Operations Support Center (HOSC) Scripting Language Processor (SLP), and the Spacecraft Control Language (SCL) offer script-commands that wrap tedious operations tasks into single calls. Since script-commands are interpreted, they also offer a certain amount of hands-on control that is highly valued in space ground operations. Although compiled programs seem to be unsuited for interactive user control and are more complex to develop, Marshall Space Flight Center (MSFC) has developed a product called the Enhanced and Redesign Scripting (ERS) that makes use of the graphical and logical richness of a programming language while offering the hands-on and ease of control of a scripting language. ERS is currently used by the International Space Station (ISS) Payload Operations Integration Center (POIC) Cadre team members. ERS integrates spacecraft command mnemonics, telemetry measurements, and command and telemetry control procedures into a standard programming language, while making use of Microsoft's Visual Studio for developing Visual Basic (VB) or C# ground operations procedures. ERS also allows for script-style user control during procedure execution using a robust graphical user input and output feature. The availability of VB and C# programmers, and the richness of the languages and their development environment, has allowed ERS to lower our "script" development time and maintenance costs at the Marshall POIC.

  12. A Review of Criteria for Outdoor Classroom in Selected Tertiary Educational Institutions in Kuala Lumpur

    NASA Astrophysics Data System (ADS)

    Maheran, Y.; Fadzidah, A.; Nur Fadhilah, R.; Farha, S.

    2017-12-01

    A properly designed outdoor environment in higher institutions contributes to students' learning performance and produces better learning outcomes. Campus surroundings have the potential to provide an informal outdoor learning environment, especially where existing physical elements, like open spaces and natural features, may support the learning process. However, scholarly discourse on environmental aspects of tertiary education has given minimal attention to students' needs for outdoor exposure. Universities have always emphasized traditional instructional methods in classroom settings, without considering the importance of the outdoor classroom for students' learning needs. Moreover, inconvenient and uncomfortable outdoor surroundings in the campus environment offer minimal opportunity for students to study outside the classroom, and students eventually do not favour using the spaces because no learning facility is provided. Hence, the objective of this study is to identify the appropriate criteria of outdoor areas that could be converted into outdoor classrooms in tertiary institutions. This paper presents a review of scholars' work regarding the characteristics of outdoor classrooms that could be designed as part of contemporary effective learning spaces, for the development of students' learning performance. The information gathered from this study will become useful knowledge in promoting effective outdoor classrooms and creating successful outdoor learning spaces in campus landscape design. It is hoped that the findings of this study could provide guidelines on how outdoor classrooms should be designed to improve students' academic achievement.

  13. Multi-Feature Based Information Extraction of Urban Green Space Along Road

    NASA Astrophysics Data System (ADS)

    Zhao, H. H.; Guan, H. Y.

    2018-04-01

    Green space along roads in QuickBird imagery was studied in this paper based on multi-feature marks in the frequency domain. The magnitude spectrum of green space along roads was analysed, and recognition marks for the tonal feature, the contour feature and the road were built up from the distribution of frequency channels. Gabor filters in the frequency domain were used to detect the features based on the recognition marks. The detected features were combined into multi-feature marks, and watershed-based image segmentation was conducted to complete the extraction of green space along roads. The segmentation results were evaluated by F-measure, with precision P = 0.7605, recall R = 0.7639 and F = 0.7622.
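
    A minimal sketch of the filtering step, with assumed orientations and frequencies (not the paper's recognition marks): a small Gabor filter bank whose per-pixel responses can feed a subsequent watershed segmentation.

      import numpy as np
      from skimage import data
      from skimage.filters import gabor

      img = data.astronaut()[..., 1] / 255.0  # stand-in for a QuickBird band
      responses = []
      for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):  # orientations
          for freq in (0.1, 0.2, 0.4):                        # frequency channels
              real, _ = gabor(img, frequency=freq, theta=theta)
              responses.append(real)
      features = np.stack(responses, axis=-1)  # per-pixel multi-feature marks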

  14. A Semi-Structured MODFLOW-USG Model to Evaluate Local Water Sources to Wells for Decision Support.

    PubMed

    Feinstein, Daniel T; Fienen, Michael N; Reeves, Howard W; Langevin, Christian D

    2016-07-01

    In order to better represent the configuration of the stream network and simulate local groundwater-surface water interactions, a version of MODFLOW with refined spacing in the topmost layer was applied to a Lake Michigan Basin (LMB) regional groundwater-flow model developed by the U.S. Geological Survey. Regional MODFLOW models commonly use coarse grids over large areas; this coarse spacing precludes model application to local management issues (e.g., surface-water depletion by wells) without recourse to labor-intensive inset models. Implementation of an unstructured formulation within the MODFLOW framework (MODFLOW-USG) allows application of regional models to address local problems. A "semi-structured" approach (uniform lateral spacing within layers, different lateral spacing among layers) was tested using the LMB regional model. The parent 20-layer model with uniform 5000-foot (1524-m) lateral spacing was converted to 4 layers with 500-foot (152-m) spacing in the top glacial (Quaternary) layer, where surface water features are located, overlying coarser resolution layers representing deeper deposits. This semi-structured version of the LMB model reproduces regional flow conditions, whereas the finer resolution in the top layer improves the accuracy of the simulated response of surface water to shallow wells. One application of the semi-structured LMB model is to provide statistical measures of the correlation between modeled inputs and the simulated amount of water that wells derive from local surface water. The relations identified in this paper serve as the basis for metamodels to predict (with uncertainty) surface-water depletion in response to shallow pumping within and potentially beyond the modeled area; see Fienen et al. (2015a). Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  15. A semi-structured MODFLOW-USG model to evaluate local water sources to wells for decision support

    USGS Publications Warehouse

    Feinstein, Daniel T.; Fienen, Michael N.; Reeves, Howard W.; Langevin, Christian D.

    2016-01-01

    In order to better represent the configuration of the stream network and simulate local groundwater-surface water interactions, a version of MODFLOW with refined spacing in the topmost layer was applied to a Lake Michigan Basin (LMB) regional groundwater-flow model developed by the U.S. Geological Survey. Regional MODFLOW models commonly use coarse grids over large areas; this coarse spacing precludes model application to local management issues (e.g., surface-water depletion by wells) without recourse to labor-intensive inset models. Implementation of an unstructured formulation within the MODFLOW framework (MODFLOW-USG) allows application of regional models to address local problems. A “semi-structured” approach (uniform lateral spacing within layers, different lateral spacing among layers) was tested using the LMB regional model. The parent 20-layer model with uniform 5000-foot (1524-m) lateral spacing was converted to 4 layers with 500-foot (152-m) spacing in the top glacial (Quaternary) layer, where surface water features are located, overlying coarser resolution layers representing deeper deposits. This semi-structured version of the LMB model reproduces regional flow conditions, whereas the finer resolution in the top layer improves the accuracy of the simulated response of surface water to shallow wells. One application of the semi-structured LMB model is to provide statistical measures of the correlation between modeled inputs and the simulated amount of water that wells derive from local surface water. The relations identified in this paper serve as the basis for metamodels to predict (with uncertainty) surface-water depletion in response to shallow pumping within and potentially beyond the modeled area; see Fienen et al. (2015a).

  16. Preserving information in neural transmission.

    PubMed

    Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O

    2009-05-13

    Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.
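
    As an illustration of the feature-extraction step underlying such reduced models (not the authors' code), the spike-triggered average of a white-noise stimulus can be computed as follows; the toy spike generator is an assumption.

      import numpy as np

      rng = np.random.default_rng(1)
      dt, win = 0.001, 50              # 1 ms bins, 50-bin feature window
      stim = rng.normal(size=200_000)  # white-noise stimulus
      drive = np.convolve(stim, np.hanning(win), "same")
      spikes = rng.random(stim.size) < np.clip(5.0 * drive, 0, None) * dt  # toy spikes

      idx = np.nonzero(spikes)[0]
      idx = idx[idx >= win]
      sta = np.mean([stim[i - win:i] for i in idx], axis=0)  # average pre-spike stimulus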

  17. A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity

    PubMed Central

    Lomp, Oliver; Faubel, Christian; Schöner, Gregor

    2017-01-01

    Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145

  18. Enriching Triangle Mesh Animations with Physically Based Simulation.

    PubMed

    Li, Yijing; Xu, Hongyi; Barbic, Jernej

    2017-10-01

    We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.

  19. Field Research Facility Data Integration Framework Data Management Plan: Survey Lines Dataset

    DTIC Science & Technology

    2016-08-01

    ... CHL and its District partners. The beach morphology surveys on which this report focuses provide quantitative measures of the dynamic nature of beach topography and volume change. The surveys are conducted over a series of 26 shore-perpendicular profile lines spaced 50 ... apart. Table 1 of the report lists the FRF survey lines dataset input data and products; the input data include ASCII LARC survey text files.

  20. A Novel Approach for Efficient Pharmacophore-based Virtual Screening: Method and Applications

    PubMed Central

    Dror, Oranit; Schneidman-Duhovny, Dina; Inbar, Yuval; Nussinov, Ruth; Wolfson, Haim J.

    2009-01-01

    Virtual screening is emerging as a productive and cost-effective technology in rational drug design for the identification of novel lead compounds. An important model for virtual screening is the pharmacophore. A pharmacophore is the spatial configuration of essential features that enables a ligand molecule to interact with a specific target receptor. In the absence of a known receptor structure, a pharmacophore can be identified from a set of ligands that have been observed to interact with the target receptor. Here, we present a novel computational method for pharmacophore detection and virtual screening. The pharmacophore detection module is able to: (i) align multiple flexible ligands in a deterministic manner without exhaustive enumeration of the conformational space, (ii) detect subsets of input ligands that may bind to different binding sites or have different binding modes, (iii) address cases where the input ligands have different affinities by defining weighted pharmacophores based on the number of ligands that share them, and (iv) automatically select the most appropriate pharmacophore candidates for virtual screening. The algorithm is highly efficient, allowing a fast exploration of the chemical space by virtual screening of huge compound databases. The performance of PharmaGist was successfully evaluated on a commonly used dataset of the G-Protein Coupled Receptor alpha1A. Additionally, a large-scale evaluation using the DUD (directory of useful decoys) dataset was performed. DUD contains 2950 active ligands for 40 different receptors, with 36 decoy compounds for each active ligand. PharmaGist enrichment rates are comparable with those of other state-of-the-art tools for virtual screening. Availability: The software is available for download. A user-friendly web interface for pharmacophore detection is available at http://bioinfo3d.cs.tau.ac.il/PharmaGist. PMID:19803502
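
    For readers who want to experiment with this kind of representation, the sketch below extracts pharmacophoric features (donors, acceptors, aromatic rings, and so on) with 3D coordinates using RDKit, an open-source toolkit distinct from PharmaGist; the molecule and embedding settings are arbitrary examples.

      import os
      from rdkit import Chem, RDConfig
      from rdkit.Chem import AllChem, ChemicalFeatures

      mol = Chem.AddHs(Chem.MolFromSmiles("c1ccccc1C(=O)NC"))  # example ligand
      AllChem.EmbedMolecule(mol, randomSeed=7)                 # generate a 3D conformer

      fdef = os.path.join(RDConfig.RDDataDir, "BaseFeatures.fdef")
      factory = ChemicalFeatures.BuildFeatureFactory(fdef)
      for f in factory.GetFeaturesForMol(mol):
          pos = f.GetPos()
          print(f.GetFamily(), f.GetType(), pos.x, pos.y, pos.z)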

  1. Fold-change detection and scalar symmetry of sensory input fields.

    PubMed

    Shoval, Oren; Goentoro, Lea; Hart, Yuval; Mayo, Avi; Sontag, Eduardo; Alon, Uri

    2010-09-07

    Recent studies suggest that certain cellular sensory systems display fold-change detection (FCD): a response whose entire shape, including amplitude and duration, depends only on fold changes in input and not on absolute levels. Thus, a step change in input from, for example, level 1 to 2 gives precisely the same dynamical output as a step from level 2 to 4, because the steps have the same fold change. We ask what the benefit of FCD is and show that FCD is necessary and sufficient for sensory search to be independent of multiplying the input field by a scalar. Thus, the FCD search pattern depends only on the spatial profile of the input and not on its amplitude. Such scalar symmetry occurs in a wide range of sensory inputs, such as source strength multiplying diffusing/convecting chemical fields sensed in chemotaxis, ambient light multiplying the contrast field in vision, and protein concentrations multiplying the output in cellular signaling systems. Furthermore, we show that FCD entails two features found across sensory systems, exact adaptation and Weber's law, but that these two features are not sufficient for FCD. Finally, we present a wide class of mechanisms that have FCD, including certain nonlinear feedback and feed-forward loops. We find that bacterial chemotaxis displays feedback within the present class and hence, is expected to show FCD. This can explain experiments in which chemotaxis searches are insensitive to attractant source levels. This study, thus, suggests a connection between properties of biological sensory systems and scalar symmetry stemming from physical properties of their input fields.
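
    A standard model from the FCD literature (not code from the paper) makes the property concrete: in the incoherent feedforward loop dx/dt = a(u - x), dy/dt = b(u/x - y), scaling the input u by any constant scales x identically, so the output y depends only on fold changes.

      import numpy as np

      def simulate(u0, u1, a=1.0, b=5.0, dt=1e-3, T=10.0):
          x, y = u0, 1.0                      # start adapted to the pre-step input
          out = np.empty(int(T / dt))
          for i in range(out.size):
              u = u0 if i * dt < 2.0 else u1  # input step at t = 2
              x += dt * a * (u - x)
              y += dt * b * (u / x - y)
              out[i] = y
          return out

      print(np.allclose(simulate(1, 2), simulate(2, 4)))  # True: identical responses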

  2. Active Learning to Understand Infectious Disease Models and Improve Policy Making

    PubMed Central

    Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel

    2014-01-01

    Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings. PMID:24743387

  3. Active learning to understand infectious disease models and improve policy making.

    PubMed

    Willem, Lander; Stijven, Sean; Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel

    2014-04-01

    Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings.
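
    A hedged sketch of the iterative surrogate-modelling loop follows, with a Gaussian-process surrogate standing in for symbolic regression and a cheap analytic function standing in for the epidemic simulator; the sampling rule (query where the surrogate is least certain) is one common active-learning choice.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      def expensive_model(x):             # stand-in for a simulation run
          return np.sin(3 * x) + 0.5 * x

      rng = np.random.default_rng(0)
      X = rng.uniform(0, 3, size=(5, 1))  # small initial design
      y = expensive_model(X).ravel()
      grid = np.linspace(0, 3, 200).reshape(-1, 1)

      for _ in range(15):                 # model-guided experimentation
          gp = GaussianProcessRegressor().fit(X, y)
          mu, sd = gp.predict(grid, return_std=True)
          x_new = grid[np.argmax(sd)]     # sample where uncertainty is highest
          X = np.vstack([X, x_new])
          y = np.append(y, expensive_model(x_new))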

  4. A 1 MW, 100 kV, less than 100 kg space based dc-dc power converter

    NASA Technical Reports Server (NTRS)

    Cooper, J. R.; White, C. W.

    1991-01-01

    A 1 MW dc-dc power converter has been designed which has an input voltage of 5 kV +/-3 percent, an output voltage of 100 kV +/- 0.25 percent, and a run time of 1000 s at full power. The estimated system mass is 83.8 kg, giving a power density of 11.9 kW/kg. The system exceeded the weight goal of 10 kW/kg through the use of innovative components and system concepts. The system volume is approximately 0.1 cu m, and the overall system efficiency is estimated to be 87 percent. Some of the unique system features include a 50-kHz H-bridge inverter using MOS-controlled thyristors as the switching devices, a resonance transformer to step up the voltage, open-cycle cryogenic hydrogen gas cooling, and a nonrigid, inflatable housing which provides on-demand pressurization of the power converter local environment. This system scales very well to higher output powers. The weight of the 10-MW system with the same input and output voltage requirements and overall system configuration is estimated to be 575.3 kg. This gives a power density of 17.4 kW/kg, significantly higher than the 11.9 kW/kg estimated at 1 MW.

  5. Hierarchical neural network model of the visual system determining figure/ground relation

    NASA Astrophysics Data System (ADS)

    Kikuchi, Masayuki

    2017-07-01

    One of the most important functions of the visual perception in the brain is figure/ground interpretation from input images. Figural region in 2D image corresponding to object in 3D space are distinguished from background region extended behind the object. Previously the author proposed a neural network model of figure/ground separation constructed on the standpoint that local geometric features such as curvatures and outer angles at corners are extracted and propagated along input contour in a single layer network (Kikuchi & Akashi, 2001). However, such a processing principle has the defect that signal propagation requires manyiterations despite the fact that actual visual system determines figure/ground relation within the short period (Zhou et al., 2000). In order to attain speed-up for determining figure/ground, this study incorporates hierarchical architecture into the previous model. This study confirmed the effect of the hierarchization as for the computation time by simulation. As the number of layers increased, the required computation time reduced. However, such speed-up effect was saturatedas the layers increased to some extent. This study attempted to explain this saturation effect by the notion of average distance between vertices in the area of complex network, and succeeded to mimic the saturation effect by computer simulation.

  6. A 1 MW, 100 kV, less than 100 kg space based dc-dc power converter

    NASA Astrophysics Data System (ADS)

    Cooper, J. R.; White, C. W.

    A 1 MW dc-dc power converter has been designed which has an input voltage of 5 kV +/-3 percent, an output voltage of 100 kV +/- 0.25 percent, and a run time of 1000 s at full power. The estimated system mass is 83.8 kg, giving a power density of 11.9 kW/kg. The system exceeded the weight goal of 10 kW/kg through the use of innovative components and system concepts. The system volume is approximately 0.1 cu m, and the overall system efficiency is estimated to be 87 percent. Some of the unique system features include a 50-kHz H-bridge inverter using MOS-controlled thyristors as the switching devices, a resonance transformer to step up the voltage, open-cycle cryogenic hydrogen gas cooling, and a nonrigid, inflatable housing which provides on-demand pressurization of the power converter local environment. This system scales very well to higher output powers. The weight of the 10-MW system with the same input and output voltage requirements and overall system configuration is estimated to be 575.3 kg. This gives a power density of 17.4 kW/kg, significantly higher than the 11.9 kW/kg estimated at 1 MW.

  7. Push-Pull Receptive Field Organization and Synaptic Depression: Mechanisms for Reliably Encoding Naturalistic Stimuli in V1

    PubMed Central

    Kremkow, Jens; Perrinet, Laurent U.; Monier, Cyril; Alonso, Jose-Manuel; Aertsen, Ad; Frégnac, Yves; Masson, Guillaume S.

    2016-01-01

    Neurons in the primary visual cortex are known for responding vigorously but with high variability to classical stimuli such as drifting bars or gratings. By contrast, natural scenes are encoded more efficiently by sparse and temporally precise spiking responses. We used a conductance-based model of the visual system in higher mammals to investigate how two specific features of the thalamo-cortical pathway, namely push-pull receptive field organization and fast synaptic depression, can contribute to this contextual reshaping of V1 responses. By comparing cortical dynamics evoked respectively by natural vs. artificial stimuli in a comprehensive parametric space analysis, we demonstrate that the reliability and sparseness of the spiking responses during natural vision is not a mere consequence of the increased bandwidth in the sensory input spectrum. Rather, it results from the combined impacts of fast synaptic depression and push-pull inhibition, the latter acting for natural scenes as a form of “effective” feed-forward inhibition as demonstrated in other sensory systems. Thus, the combination of feedforward-like inhibition with fast thalamo-cortical synaptic depression by simple cells receiving a direct structured input from thalamus composes a generic computational mechanism for generating a sparse and reliable encoding of natural sensory events. PMID:27242445
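
    The depression mechanism can be sketched with the textbook Tsodyks-Markram form (a simplification, not the paper's full conductance-based model): each presynaptic spike consumes a fraction U of the available synaptic resources R, which then recover with time constant tau_rec.

      import numpy as np

      dt = 1e-3
      t = np.arange(0, 2.0, dt)
      spikes = (np.arange(t.size) % 50) == 0  # regular 20 Hz presynaptic train
      U, tau_rec = 0.5, 0.5                   # release fraction, recovery time (s)

      R, psc = 1.0, np.zeros(t.size)
      for i in range(t.size):
          if spikes[i]:
              psc[i] = U * R             # efficacy of this spike (depresses over the train)
              R -= U * R                 # resources consumed
          R += dt * (1.0 - R) / tau_rec  # recovery toward full availability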

  8. Using features of a Creole language to reconstruct population history and cultural evolution: tracing the English origins of Sranan.

    PubMed

    Sherriah, André C; Devonish, Hubert; Thomas, Ewart A C; Creanza, Nicole

    2018-04-05

    Creole languages are formed in conditions where speakers from distinct languages are brought together without a shared first language, typically under the domination of speakers from one of the languages and particularly in the context of the transatlantic slave trade and European colonialism. One such Creole in Suriname, Sranan, developed around the mid-seventeenth century, primarily out of contact between varieties of English from England, spoken by the dominant group, and multiple West African languages. The vast majority of the basic words in Sranan come from the language of the dominant group, English. Here, we compare linguistic features of modern-day Sranan with those of English as spoken in 313 localities across England. By way of testing proposed hypotheses for the origin of English words in Sranan, we find that 80% of the studied features of Sranan can be explained by similarity to regional dialect features at two distinct input locations within England, a cluster of locations near the port of Bristol and another cluster near Essex in eastern England. Our new hypothesis is supported by the geographical distribution of specific regional dialect features, such as post-vocalic rhoticity and word-initial 'h', and by phylogenetic analysis of these features, which shows evidence favouring input from at least two English dialects in the formation of Sranan. In addition to explicating the dialect features most prominent in the linguistic evolution of Sranan, our historical analyses also provide supporting evidence for two distinct hypotheses about the likely geographical origins of the English speakers whose language was an input to Sranan. The emergence as a likely input to Sranan of the speech forms of a cluster near Bristol is consistent with historical records indicating that most of the indentured servants going to the Americas between 1654 and 1666 were from Bristol and nearby counties, and that of the cluster near Essex is consistent with documents showing that many of the governors and important planters came from the southeast of England (including London) (Smith 1987, The Genesis of the Creole Languages of Surinam; Smith 2009, in The Handbook of Pidgin and Creole Studies, pp. 98-129). This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'. © 2018 The Author(s).

  9. Feature integration across space, time, and orientation

    PubMed Central

    Otto, Thomas U.; Öğmen, Haluk; Herzog, Michael H.

    2012-01-01

    The perception of a visual target can be strongly influenced by flanking stimuli. In static displays, performance on the target improves when the distance to the flanking elements increases, presumably because feature pooling and integration vanish with distance. Here, we studied feature integration with dynamic stimuli. We show that features of single elements presented within a continuous motion stream are integrated largely independently of spatial distance (and orientation). Hence, space-based models of feature integration cannot be extended to dynamic stimuli. We suggest that feature integration is guided by perceptual grouping operations that maintain the identity of perceptual objects over space and time. PMID:19968428

  10. Production Functions for Water Delivery Systems: Analysis and Estimation Using Dual Cost Function and Implicit Price Specifications

    NASA Astrophysics Data System (ADS)

    Teeples, Ronald; Glyer, David

    1987-05-01

    Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
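
    For reference, a generic single-output translog cost function in input prices p_i and output Q (a standard textbook form, not necessarily the exact multiproduct specification with implicit own-water input prices estimated here) is

      \ln C = \alpha_0 + \sum_i \alpha_i \ln p_i + \alpha_Q \ln Q
              + \tfrac{1}{2} \sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j
              + \sum_i \gamma_{iQ} \ln p_i \ln Q + \tfrac{1}{2} \gamma_{QQ} (\ln Q)^2

    and, by Shephard's lemma, the cost-minimizing input shares S_i = \partial \ln C / \partial \ln p_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \gamma_{iQ} \ln Q form the jointly estimated system of share equations mentioned above.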

  11. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, the spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when the respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
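
    The spectral quantification step can be sketched as follows (assumed sampling rate, epoch length and amplitudes): the steady-state response amplitude is read off the Fourier spectrum at each tagging frequency.

      import numpy as np

      fs, dur = 500.0, 100.0                       # sampling rate (Hz), epoch length (s)
      t = np.arange(0, dur, 1 / fs)
      rng = np.random.default_rng(2)
      eeg = (0.8 * np.sin(2 * np.pi * 3.14 * t)    # response tagged at 3.14 Hz
             + 0.5 * np.sin(2 * np.pi * 3.63 * t)  # response tagged at 3.63 Hz
             + rng.normal(size=t.size))            # broadband noise

      spec = np.abs(np.fft.rfft(eeg)) / (t.size / 2)
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      for f0 in (3.14, 3.63):
          print(f0, spec[np.argmin(np.abs(freqs - f0))])  # amplitude at each tag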

  12. Portable data collection device with self identifying probe

    DOEpatents

    French, P.D.

    1998-11-17

    The present invention provides a portable data collection device that has a variety of sensors that are interchangeable with a variety of input ports in the device. The various sensors include a data identification feature that provides information to the device regarding the type of physical data produced by each sensor and therefore the type of sensor itself. The data identification feature enables the device to locate the input port where the sensor is connected and self adjust when a sensor is removed or replaced. The device is able to collect physical data, whether or not a function of time. The sensor may also store a unique sensor identifier. 13 figs.

  13. Portable data collection device with self identifying probe

    DOEpatents

    French, Patrick D.

    1998-01-01

    The present invention provides a portable data collection device that has a variety of sensors that are interchangeable with a variety of input ports in the device. The various sensors include a data identification feature that provides information to the device regarding the type of physical data produced by each sensor and therefore the type of sensor itself. The data identification feature enables the device to locate the input port where the sensor is connected and self adjust when a sensor is removed or replaced. The device is able to collect physical data, whether or not a function of time. The sensor may also store a unique sensor identifier.

  14. Computer program for plotting and fairing wind-tunnel data

    NASA Technical Reports Server (NTRS)

    Morgan, H. L., Jr.

    1983-01-01

    A detailed description is presented of the Langley computer program PLOTWD, which plots and fairs experimental wind-tunnel data. The program was written for use primarily on the Langley CDC computer and CALCOMP plotters. The fundamental operating features of the program are that the input data are read and written to a random-access file for use during program execution, that the data for a selected run can be sorted and edited to delete duplicate points, and that the data can be plotted and faired using tension splines, least-squares polynomials, or least-squares cubic splines. The most noteworthy feature of the program is the simplicity of the user-supplied input requirements. Several subroutines are also included that can be used to draw grid lines, zero lines, axis scale values and labels, and legends. A detailed description of the program's operational features and of each subprogram is presented. The general application of the program is also discussed, together with the input and output for two typical plot types. A listing of the program code, a user guide, and a description of the output are presented in appendices. The program has been in use at Langley for several years and has proven to be both easy to use and versatile.
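
    The fairing step can be approximated today with SciPy in place of the original CDC/CALCOMP routines; the sketch below fits a least-squares cubic smoothing spline to noisy wind-tunnel-like data (variable names and smoothing level are assumptions).

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      alpha = np.linspace(-4, 16, 21)  # e.g. angle of attack (deg)
      rng = np.random.default_rng(3)
      cl = 0.1 * alpha + 0.4 + rng.normal(scale=0.02, size=alpha.size)  # noisy lift data

      fair = UnivariateSpline(alpha, cl, k=3, s=alpha.size * 0.02**2)   # cubic smoothing spline
      alpha_fine = np.linspace(alpha.min(), alpha.max(), 200)
      cl_faired = fair(alpha_fine)     # faired curve for plotting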

  15. Wood texture classification by fuzzy neural networks

    NASA Astrophysics Data System (ADS)

    Gonzaga, Adilson; de Franca, Celso A.; Frere, Annie F.

    1999-03-01

    The majority of scientific papers focusing on wood classification for pencil manufacturing take into account defects and visual appearance. Traditional methodologies are based on texture analysis by co-occurrence matrix, by image modeling, or by tonal measures over the plate surface. In this work, we propose to classify plates of wood without biological defects, like insect holes, nodes, and cracks, by analyzing their texture. In this methodology we divide the plate image into several rectangular windows, or local areas, and reduce the number of gray levels. From each local area, we compute the histogram of differences and extract texture features, giving them as input to a Local Neuro-Fuzzy Network. Those features are computed from the histogram of differences instead of the image pixels because of their better performance and illumination independence. Among several features, such as mean, contrast, second moment, entropy, and IDN, the last three showed the best results for network training. Each LNN output is taken as input to a Partial Neuro-Fuzzy Network (PNFN) classifying a pencil region on the plate. At last, the outputs from the PNFN are taken as input to a global fuzzy logic stage that performs the plate classification. Each pencil classification within the plate takes into account each quality index.
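
    The feature extraction can be sketched as follows (window size, offset and gray-level count are assumptions): a histogram of gray-level differences is computed over a local window, and second moment, entropy and an inverse-difference statistic are derived from it.

      import numpy as np

      def diff_hist_features(window, levels=32, offset=(0, 1)):
          q = (window / window.max() * (levels - 1)).astype(int)  # reduce gray levels
          dy, dx = offset
          a = q[:q.shape[0] - dy, :q.shape[1] - dx]
          b = q[dy:, dx:]
          h, _ = np.histogram(np.abs(a - b), bins=levels, range=(0, levels))
          p = h / h.sum()
          nz = p[p > 0]
          second_moment = np.sum(p ** 2)
          entropy = -np.sum(nz * np.log2(nz))
          inv_diff = np.sum(p / (1 + np.arange(levels)))  # IDN-like statistic
          return second_moment, entropy, inv_diff

      window = np.random.default_rng(4).integers(0, 256, size=(64, 64)).astype(float)
      print(diff_hist_features(window))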

  16. A Procedure for Extending Input Selection Algorithms to Low Quality Data in Modelling Problems with Application to the Automatic Grading of Uploaded Assignments

    PubMed Central

    Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis

    2014-01-01

    When selecting relevant inputs in modeling problems with low quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows the extending of different crisp feature selection algorithms to vague data. The partial knowledge about the ordinal of each feature is modelled by means of a possibility distribution, and a ranking is hereby applied to sort these distributions. It will be shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically providing the student with a qualification through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967

  17. Uniform-penalty inversion of multiexponential decay data. II. Data spacing, T(2) data, systematic data errors, and diagnostics.

    PubMed

    Borgia, G C; Brown, R J; Fantazzini, P

    2000-12-01

    The basic method of UPEN (uniform penalty inversion of multiexponential decay data) is given in an earlier publication (Borgia et al., J. Magn. Reson. 132, 65-77 (1998)), which also discusses the effects of noise, constraints, and smoothing on the resolution or apparent resolution of features of a computed distribution of relaxation times. UPEN applies negative feedback to a regularization penalty, allowing stronger smoothing for a broad feature than for a sharp line. This avoids unnecessarily broadening the sharp line and/or breaking the wide peak or tail into several peaks that the relaxation data do not demand to be separate. The experimental and artificial data presented earlier were T(1) data, and all had fixed data spacings, uniform in log-time. However, for T(2) data, usually spaced uniformly in linear time, or for data spaced in any manner, we have found that the data spacing does not enter explicitly into the computation. The present work shows the extension of UPEN to T(2) data, including the averaging of data in windows and the use of the corresponding weighting factors in the computation. Measures are implemented to control portions of computed distributions extending beyond the data range. The input smoothing parameters in UPEN are normally fixed, rather than data dependent. A major problem arises, especially at high signal-to-noise ratios, when UPEN is applied to data sets with systematic errors due to instrumental nonidealities or adjustment problems. For instance, a relaxation curve for a wide line can be narrowed by an artificial downward bending of the relaxation curve. Diagnostic parameters are generated to help identify data problems, and the diagnostics are applied in several examples, with particular attention to the meaningful resolution of two closely spaced peaks in a distribution of relaxation times. Where feasible, processing with UPEN in nearly real time should help identify data problems while further instrument adjustments can still be made. The need for the nonnegative constraint is greatly reduced in UPEN, and preliminary processing without this constraint helps identify data sets for which application of the nonnegative constraint is too expensive in terms of error of fit for the data set to represent sums of decaying positive exponentials plus random noise. Copyright 2000 Academic Press.
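
    The underlying inverse problem can be illustrated with a fixed-penalty nonnegative fit (plain Tikhonov-regularized NNLS, not UPEN itself, whose penalty varies along the distribution under negative feedback): recover a distribution of relaxation times from noisy multiexponential decay data.

      import numpy as np
      from scipy.optimize import nnls

      t = np.linspace(0.001, 3.0, 200)       # acquisition times (s)
      T2 = np.logspace(-3, 1, 100)           # trial relaxation times (s)
      K = np.exp(-t[:, None] / T2[None, :])  # multiexponential kernel

      true = np.exp(-0.5 * ((np.log10(T2) + 1) / 0.15) ** 2)  # one narrow peak
      rng = np.random.default_rng(5)
      data = K @ true + rng.normal(scale=0.01, size=t.size)

      lam = 0.1                                  # fixed smoothing penalty
      D = np.diff(np.eye(T2.size), n=2, axis=0)  # second-difference operator
      A = np.vstack([K, lam * D])
      b = np.concatenate([data, np.zeros(D.shape[0])])
      dist, _ = nnls(A, b)                       # nonnegative regularized solution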

  18. Representation learning: a unified deep learning framework for automatic prostate MR segmentation.

    PubMed

    Liao, Shu; Gao, Yaozong; Oto, Aytekin; Shen, Dinggang

    2013-01-01

    Image representation plays an important role in medical image analysis. The key to the success of different medical image analysis algorithms is heavily dependent on how we represent the input data, namely the features used to characterize the input image. In the literature, feature engineering remains an active research topic, and many novel hand-crafted features have been designed, such as Haar wavelets, histograms of oriented gradients, and local binary patterns. However, such features are not designed with the guidance of the underlying dataset at hand. To this end, we argue that the most effective features should be designed in a learning-based manner, namely representation learning, which can be adapted to the different patient datasets at hand. In this paper, we introduce a deep learning framework to achieve this goal. Specifically, a stacked independent subspace analysis (ISA) network is adopted to learn the most effective features in a hierarchical and unsupervised manner. The learnt features are adapted to the dataset at hand and encode high-level semantic anatomical information. The proposed method is evaluated on the application of automatic prostate MR segmentation. Experimental results show that significant segmentation accuracy improvement can be achieved by the proposed deep learning method compared to other state-of-the-art segmentation approaches.

  19. Method for maximizing the brightness of the bunches in a particle injector by converting a highly space-charged beam to a relativistic and emittance-dominated beam

    DOEpatents

    Hannon, Fay

    2016-08-02

    A method for maximizing the brightness of the bunches in a particle injector by converting a highly space-charge-dominated beam to a relativistic and emittance-dominated beam. The method includes 1) determining the bunch charge and the initial kinetic energy of the highly space-charge-dominated input beam; 2) applying the bunch charge and initial kinetic energy properties of the input beam to determine the number of accelerator cavities required to accelerate the bunches to relativistic speed; 3) providing the required number of accelerator cavities; 4) setting the gradient of the radio-frequency (RF) cavities; and 5) operating the phase of the accelerator cavities between -90 and zero degrees of the sinusoidal phase to simultaneously accelerate and bunch the charged particles to maximize brightness, until the beam is relativistic and emittance-dominated.

  20. Summary of astronaut inputs on automation and robotics for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Weeks, David J.

    1990-01-01

    Astronauts and payload specialists present specific recommendations in the form of an overview that relate to the use of automation and robotics on the Space Station Freedom. The inputs are based on on-orbit operations experience, time requirements for crews, and similar crew-specific knowledge that address the impacts of automation and robotics on productivity. Interview techniques and specific questionnaire results are listed, and the majority of the responses indicate that incorporating automation and robotics to some extent and with human backup can improve productivity. Specific support is found for the use of advanced automation and EVA robotics on the Space Station Freedom and for the use of advanced automation on ground-based stations. Ground-based control of in-flight robotics is required, and Space Station activities and crew tasks should be analyzed to assess the systems engineering approach for incorporating automation and robotics.

  1. GD SDR Automatic Gain Control Characterization Testing

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) will provide experimenters an opportunity to develop and demonstrate experimental waveforms in space. The GD SDR platform and initial waveform were characterized on the ground before launch and the data will be compared to the data that will be collected during on-orbit operations. A desired function of the SDR is to estimate the received signal to noise ratio (SNR), which would enable experimenters to better determine on-orbit link conditions. The GD SDR does not have an SNR estimator, but it does have an analog and a digital automatic gain control (AGC). The AGCs can be used to estimate the SDR input power, which can be converted into an SNR. Tests were conducted to characterize the AGC response to changes in SDR input power and temperature. The purpose of this paper is to describe the tests that were conducted, discuss the results showing how the AGCs relate to the SDR input power, and provide recommendations for AGC testing and characterization.

  2. GD SDR Automatic Gain Control Characterization Testing

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) will provide experimenters an opportunity to develop and demonstrate experimental waveforms in space. The GD SDR platform and initial waveform were characterized on the ground before launch and the data will be compared to the data that will be collected during on-orbit operations. A desired function of the SDR is to estimate the received signal to noise ratio (SNR), which would enable experimenters to better determine on-orbit link conditions. The GD SDR does not have an SNR estimator, but it does have an analog and a digital automatic gain control (AGC). The AGCs can be used to estimate the SDR input power, which can be converted into an SNR. Tests were conducted to characterize the AGC response to changes in SDR input power and temperature. The purpose of this paper is to describe the tests that were conducted, discuss the results showing how the AGCs relate to the SDR input power, and provide recommendations for AGC testing and characterization.
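
    The power-to-SNR conversion mentioned above amounts to subtracting a noise floor; the numbers below are purely illustrative assumptions, not GD SDR calibration data.

      import math

      k_dB = -228.6   # Boltzmann's constant, dBW/K/Hz
      T_sys = 290.0   # assumed system noise temperature, K
      B = 1e6         # assumed noise bandwidth, Hz
      noise_floor_dBW = k_dB + 10 * math.log10(T_sys) + 10 * math.log10(B)

      def snr_from_input_power(p_in_dBW):
          """SNR (dB) from an AGC-derived input power estimate."""
          return p_in_dBW - noise_floor_dBW

      print(snr_from_input_power(-120.0))  # about +24 dB with these assumptions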

  3. Monitoring of sludge dewatering equipment by image classification

    NASA Astrophysics Data System (ADS)

    Maquine de Souza, Sandro; Grandvalet, Yves; Denoeux, Thierry

    2004-11-01

    Belt filter presses represent an economical means to dewater the residual sludge generated in wastewater treatment plants. In order to assure maximal water removal, the raw sludge is mixed with a chemical conditioner prior to being fed into the belt filter press. When the conditioner is properly dosed, the sludge acquires a coarse texture, with space between flocs. This information was exploited for the development of a software sensor, in which digital images are the input signal and the output is a numeric value proportional to the dry content of the dewatered sludge. Three families of features were used to characterize the textures: Gabor filtering, wavelet decomposition and co-occurrence matrix computation. A database of images, ordered by their corresponding dry contents, was used to calibrate the model that computes the sensor output. The images were separated into groups that correspond to single experimental sessions. With the calibrated model, all images were correctly ranked within an experimental session. The results were very similar regardless of the family of features used. The output can be fed to a control system or, in the case of fixed experimental conditions, it can be used to directly estimate the dry content of the dewatered sludge.

  4. An Efficient Method to Detect Mutual Overlap of a Large Set of Unordered Images for Structure-From-Motion

    NASA Astrophysics Data System (ADS)

    Wang, X.; Zhan, Z. Q.; Heipke, C.

    2017-05-01

    Recently, low-cost 3D reconstruction based on images has become a popular focus of photogrammetry and computer vision research. Methods which can handle an arbitrary geometric setup of a large number of unordered and convergent images are of particular interest. However, determining the mutual overlap poses a considerable challenge. We propose a new method which was inspired by, and improves upon, methods employing random k-d forests for this task. Specifically, we first derive features from the images; a random k-d forest is then used to find the nearest neighbours in feature space. Subsequently, the degree of similarity between individual images, the image overlaps, and thus the images belonging to a common block are calculated as input to a structure-from-motion (sfm) pipeline. In our experiments we show the general applicability of the new method and compare it with other methods by analyzing time efficiency. Orientation and 3D reconstruction were successfully carried out by sfm using our overlap graphs. The results show a speed-up of a factor of 80 compared to conventional pairwise matching, and of 8 and 2 compared to the VocMatch approach using 1 and 4 CPUs, respectively.
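
    A hedged sketch of the core matching step (scikit-learn's single k-d tree standing in for a randomised k-d forest): find each image's nearest neighbours in a global feature space and keep mutual matches as candidate overlap edges.

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      rng = np.random.default_rng(6)
      feats = rng.normal(size=(1000, 128))  # one global descriptor per image (stand-in)

      nn = NearestNeighbors(n_neighbors=6, algorithm="kd_tree").fit(feats)
      dist, idx = nn.kneighbors(feats)      # each image's self + 5 nearest neighbours

      pairs = {(i, j) for i, row in enumerate(idx) for j in row[1:]}
      edges = {(i, j) for (i, j) in pairs if (j, i) in pairs and i < j}  # overlap graph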

  5. Global MHD Modeling of Auroral Conjugacy for Different IMF Conditions

    NASA Astrophysics Data System (ADS)

    Hesse, M.; Kuznetsova, M. M.; Liu, Y. H.; Birn, J.; Rastaetter, L.

    2016-12-01

    The question of whether auroral features are conjugate or not, and the search for the underlying scientific causes, is of high interest in magnetospheric and ionospheric physics. Consequently, this topic has attracted considerable attention in space-based observations of auroral features, and it has inspired a number of theoretical ideas and related modeling activities. Potential contributing factors to the presence or absence of auroral conjugacy include precipitation asymmetries in the case of the diffuse aurora, inter-hemispherical conductivity differences, magnetospheric asymmetries brought about by, e.g., dipole tilt, corotation, or IMF By, and, finally, asymmetries in field-aligned current generation, primarily in the nightside magnetosphere. In this presentation, we will analyze high-resolution, global MHD simulations of magnetospheric dynamics, with emphasis on auroral conjugacy. For the purpose of this study, we define controlled conditions by selecting solstice times with steady solar wind input, the latter of which includes an IMF rotation from purely southward to east-westward. Conductivity models will include both auroral precipitation proxies and the effects of asymmetric daylight. We will analyze these simulations with respect to conjugacies or the lack thereof, and study the role of the effects above in determining the former.

  6. Gas chimney detection based on improving the performance of combined multilayer perceptron and support vector classifier

    NASA Astrophysics Data System (ADS)

    Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.

    2008-11-01

    Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature ranking technique and combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes extracted at locations labeled by a human expert using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers: multilayer perceptron (MLP) and support vector classifier (SVC) are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between both classifiers, the MLP and SVC results are combined using logical rules of maximum, minimum and mean. The proposed method optimizes the ranked feature space size and yields the lowest classification error in the final combined result. We will show that the logical minimum reveals gas chimneys that exhibit both the softness of MLP and the resolution of SVC classifiers.
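
    The combination rules can be sketched directly (toy data and default hyperparameters, not the seismic attributes of the study): MLP and SVC posterior estimates are fused by elementwise maximum, minimum and mean before taking the class decision.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=600, n_features=12, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      p_mlp = MLPClassifier(max_iter=2000).fit(X_tr, y_tr).predict_proba(X_te)
      p_svc = SVC(probability=True).fit(X_tr, y_tr).predict_proba(X_te)

      rules = {"max": np.maximum(p_mlp, p_svc),
               "min": np.minimum(p_mlp, p_svc),
               "mean": 0.5 * (p_mlp + p_svc)}
      for name, p in rules.items():
          print(name, np.mean(p.argmax(axis=1) == y_te))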

  7. Advanced biologically plausible algorithms for low-level image processing

    NASA Astrophysics Data System (ADS)

    Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan

    1999-08-01

    At present, the approach based on modeling biological vision mechanisms is being extensively developed in computer vision. However, up to now, real-world image processing has no effective solution within either the biologically inspired or the conventional framework. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for processing real-world images is the search for new low-level image processing algorithms, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and the formation of composite feature maps. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve the model's performance in real-world image processing during memorizing, search, and recognition.

  8. Comparison of test particle acceleration in torsional spine and fan reconnection regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosseinpour, M., E-mail: hosseinpour@tabrizu.ac.ir; Mehdizade, M.; Mohammadi, M. A.

    2014-10-15

    Magnetic reconnection is a common phenomenon taking place in astrophysical and space plasmas, especially in solar flares, which are rich sources of highly energetic particles. Torsional spine and fan reconnection are important mechanisms proposed for steady-state three-dimensional null-point reconnection. Using the magnetic and electric fields for these regimes, we numerically investigate the features of test particle acceleration in both regimes with input parameters for the solar corona. By comparison, torsional spine reconnection is found to be more efficient than torsional fan reconnection in accelerating a proton to high kinetic energy. A proton can gain as much as 100 MeV of relativistic kinetic energy within only a few milliseconds. Moreover, in torsional spine reconnection, an accelerated particle can escape either along the spine axis or on the fan plane, depending on its injection position. However, in torsional fan reconnection, the particle is only allowed to accelerate along the spine axis. In addition, in both regimes, the particle's trajectory and final kinetic energy depend on the injection position, but adopting either a spatially uniform or a non-uniform localized plasma resistivity does not much influence the trajectory.

  9. Detection of Neuron Membranes in Electron Microscopy Images Using Multi-scale Context and Radon-Like Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.

    2011-10-01

    Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output of each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context-based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images.
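
    As a rough illustration of the serial context idea (our own sketch, not the authors' code), each stage below classifies pixels from their image features plus local averages of the previous stage's output map at several window sizes, a simple stand-in for the paper's scale-space context representation; the classifier, names, and sizes are all illustrative choices.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.linear_model import LogisticRegression

    def context_features(prob_map, sizes=(1, 5, 15)):
        """Local means of the context map at several window sizes."""
        return np.stack([uniform_filter(prob_map, size=s).ravel()
                         for s in sizes], axis=1)

    def train_cascade(pixel_feats, labels, shape, n_stages=3):
        """pixel_feats: (n_pixels, n_features); labels: (n_pixels,) in {0, 1};
        shape: the 2-D image shape, with shape[0] * shape[1] == n_pixels."""
        stages, context = [], np.full(shape, 0.5)   # uninformative initial context
        for _ in range(n_stages):
            feats = np.hstack([pixel_feats, context_features(context)])
            clf = LogisticRegression(max_iter=1000).fit(feats, labels)
            context = clf.predict_proba(feats)[:, 1].reshape(shape)
            stages.append(clf)
        return stages, context
    ```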

  10. EMG Amplitude Estimators Based on Probability Distribution for Muscle-Computer Interface

    NASA Astrophysics Data System (ADS)

    Phinyomark, Angkoon; Quaine, Franck; Laurillau, Yann; Thongpanja, Sirinee; Limsakul, Chusak; Phukpattaranont, Pornchai

    To develop an advanced muscle-computer interface (MCI) based on the surface electromyography (EMG) signal, amplitude estimates of muscle activity, i.e., the root mean square (RMS) and the mean absolute value (MAV), are widely used as convenient and accurate inputs for a recognition system. Their classification performance is comparable to that of more advanced and computationally expensive time-scale methods such as the wavelet transform. However, the signal-to-noise-ratio (SNR) performance of RMS and MAV depends on the probability density function (PDF) of the EMG signal, i.e., Gaussian or Laplacian. The PDF of the EMG signals associated with upper-limb motions is still not clear, especially for dynamic muscle contraction. In this paper, the EMG PDF is investigated based on surface EMG recorded during finger, hand, wrist, and forearm motions. The results show that on average the experimental EMG PDF is closer to a Laplacian density, particularly for male subjects and flexor muscles. For amplitude estimation, MAV has a higher SNR, defined as the mean feature value divided by its fluctuation, than RMS. Because RMS and MAV discriminate motions equally well in feature space, MAV is recommended as a suitable EMG amplitude estimator for EMG-based MCIs.
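
    A minimal sketch of the two estimators and the SNR figure of merit described above (mean feature value over its fluctuation); the window length, step, and the synthetic Laplacian test signal are our illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def mav(x):
        """Mean absolute value of a window."""
        return np.mean(np.abs(x))

    def rms(x):
        """Root mean square of a window."""
        return np.sqrt(np.mean(x ** 2))

    def feature_snr(signal, estimator, win=256, step=64):
        """Slide a window over the record, collect the feature series,
        and return mean feature value / its fluctuation."""
        vals = np.array([estimator(signal[i:i + win])
                         for i in range(0, len(signal) - win + 1, step)])
        return vals.mean() / vals.std()

    # toy comparison on a synthetic Laplacian 'EMG' record
    rng = np.random.default_rng(1)
    emg = rng.laplace(scale=0.1, size=20000)
    print(feature_snr(emg, mav), feature_snr(emg, rms))
    ```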

  11. Intelligent image processing for vegetation classification using multispectral LANDSAT data

    NASA Astrophysics Data System (ADS)

    Santos, Stewart R.; Flores, Jorge L.; Garcia-Torales, G.

    2015-09-01

    We propose an intelligent computational technique for the analysis of vegetation images acquired with a multispectral scanner (MSS) sensor. This work focuses on intelligent and adaptive artificial neural network (ANN) methodologies that allow segmentation and classification of spectral remote sensing (RS) signatures, in order to obtain a high-resolution map in which we can delimit the wooded areas and quantify the amount of combustible material present in these areas. This could provide important information to prevent fires and deforestation of wooded areas. The spectral RS input data acquired by the MSS sensor are treated as a randomly propagated remotely sensed scene with unknown statistics for each Thematic Mapper (TM) band. By performing high-resolution reconstruction and combining these spectral values with neighboring-pixel information from each TM band, we can include contextual information in an ANN. The biggest challenge for conventional classifiers is how to reduce the number of components in the feature vector while preserving the major information contained in the data, especially when the dimensionality of the feature space is high. Preliminary results show that the Adaptive Modified Neural Network method is a promising and effective spectral method for segmentation and classification of RS images acquired with the MSS sensor.

  12. Effect of Heat Input on Inclusion Evolution Behavior in Heat-Affected Zone of EH36 Shipbuilding Steel

    NASA Astrophysics Data System (ADS)

    Sun, Jincheng; Zou, Xiaodong; Matsuura, Hiroyuki; Wang, Cong

    2018-03-01

    The effects of heat input parameters on inclusion and microstructure characteristics have been investigated using welding thermal simulations. Inclusion features from heat-affected zones (HAZs) were profiled. It was found that, under a heat input of 120 kJ/cm, Al-Mg-Ti-O-(Mn-S) composite inclusions can act effectively as nucleation sites for acicular ferrite. However, this ability disappears when the heat input is increased to 210 kJ/cm. In addition, confocal scanning laser microscopy (CSLM) was used to document possible inclusion-microstructure interactions, shedding light on how inclusions assist beneficial transformations toward property enhancement.

  13. Easy boundary definition for EGUN

    NASA Astrophysics Data System (ADS)

    Becker, R.

    1989-06-01

    The relativistic electron optics program EGUN [1] has achieved broad distribution, and many users have asked for an easier way to input boundaries. A preprocessor to EGUN has been developed that accepts polygonal input of boundary points and offers features such as rounding off corners, shifting and squeezing electrodes, and simple input of slanted Neumann boundaries. This preprocessor can either be used on a PC that is linked to a mainframe running the FORTRAN version of EGUN, or in connection with the version EGNc, which also runs on a PC. In either case, direct graphic response on the PC greatly facilitates the creation of correct input files for EGUN.

  14. Robust recognition of handwritten numerals based on dual cooperative network

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Choi, Yeongwoo

    1992-01-01

    An approach to robust recognition of handwritten numerals using two parallel operating networks is presented. The first network uses inputs in Cartesian coordinates, and the second network uses the same inputs transformed into polar coordinates. How the proposed approach achieves robustness to local and global variations of the input numerals by handling inputs both in Cartesian coordinates and in their polar-transformed counterparts is described. The required network structures and their learning scheme are discussed. Experimental results show that, by tracking only a small number of distinctive features for each teaching numeral in each coordinate system, the proposed system can provide robust recognition of handwritten numerals.
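
    The polar view can be sketched in a few lines (our illustration, not the paper's code): the image is resampled on an (r, theta) grid around the centroid of the ink pixels, so global rotations of the numeral become shifts along the theta axis.

    ```python
    import numpy as np

    def to_polar(img, n_r=32, n_theta=64):
        """Resample a numeral image onto an (r, theta) grid; assumes a
        binary (or grey) image with at least one nonzero ink pixel."""
        ys, xs = np.nonzero(img)
        cy, cx = ys.mean(), xs.mean()            # centroid of the ink pixels
        r_max = np.hypot(img.shape[0], img.shape[1]) / 2
        rr = np.linspace(0, r_max, n_r)
        tt = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        r_grid, t_grid = np.meshgrid(rr, tt, indexing="ij")
        y = np.clip((cy + r_grid * np.sin(t_grid)).astype(int), 0, img.shape[0] - 1)
        x = np.clip((cx + r_grid * np.cos(t_grid)).astype(int), 0, img.shape[1] - 1)
        return img[y, x]                         # (n_r, n_theta) polar map
    ```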

  15. Effect of Heat Input on Inclusion Evolution Behavior in Heat-Affected Zone of EH36 Shipbuilding Steel

    NASA Astrophysics Data System (ADS)

    Sun, Jincheng; Zou, Xiaodong; Matsuura, Hiroyuki; Wang, Cong

    2018-06-01

    The effects of heat input parameters on inclusion and microstructure characteristics have been investigated using welding thermal simulations. Inclusion features from heat-affected zones (HAZs) were profiled. It was found that, under a heat input of 120 kJ/cm, Al-Mg-Ti-O-(Mn-S) composite inclusions can act effectively as nucleation sites for acicular ferrite. However, this ability disappears when the heat input is increased to 210 kJ/cm. In addition, confocal scanning laser microscopy (CSLM) was used to document possible inclusion-microstructure interactions, shedding light on how inclusions assist beneficial transformations toward property enhancement.

  16. Academic Language in Shared Book Reading: Parent and Teacher Input to Mono- and Bilingual Preschoolers

    ERIC Educational Resources Information Center

    Aarts, Rian; Demir-Vegter, Serpil; Kurvers, Jeanne; Henrichs, Lotte

    2016-01-01

    The current study examined academic language (AL) input of mothers and teachers to 15 monolingual Dutch and 15 bilingual Turkish-Dutch 4- to 6-year-old children and its relationships with the children's language development. At two times, shared book reading was videotaped and analyzed for academic features: lexical diversity, syntactic…

  17. Sequential and Mixed Genetic Algorithm and Learning Automata (SGALA, MGALA) for Feature Selection in QSAR

    PubMed Central

    MotieGhader, Habib; Gharaghani, Sajjad; Masoudi-Sobhanzadeh, Yosef; Masoudi-Nejad, Ali

    2017-01-01

    Feature selection is of great importance in Quantitative Structure-Activity Relationship (QSAR) analysis. This problem has been addressed with meta-heuristic algorithms such as GA, PSO, and ACO. In this work, two novel hybrid meta-heuristic algorithms based on genetic algorithms and learning automata, Sequential GA and LA (SGALA) and Mixed GA and LA (MGALA), are proposed for QSAR feature selection. The SGALA algorithm exploits the advantages of the genetic algorithm and learning automata sequentially, while the MGALA algorithm exploits them simultaneously. We applied the proposed algorithms to select the minimum possible number of features from three different datasets and observed that the MGALA and SGALA algorithms had the best outcomes, both individually and on average, compared to other feature selection algorithms. Through comparison of the proposed algorithms, we found that the rates of convergence to the optimal result of MGALA and SGALA were better than those of the GA, ACO, PSO, and LA algorithms. Finally, the feature subsets selected by the GA, ACO, PSO, LA, SGALA, and MGALA algorithms were used as input to an LS-SVR model, and the results showed that the LS-SVR model had greater predictive ability with input from the SGALA and MGALA algorithms than with input from any of the other algorithms. The results therefore corroborate that the proposed algorithms are superior to all the other algorithms considered, in both predictive efficiency and rate of convergence. PMID:28979308

  18. Sequential and Mixed Genetic Algorithm and Learning Automata (SGALA, MGALA) for Feature Selection in QSAR.

    PubMed

    MotieGhader, Habib; Gharaghani, Sajjad; Masoudi-Sobhanzadeh, Yosef; Masoudi-Nejad, Ali

    2017-01-01

    Feature selection is of great importance in Quantitative Structure-Activity Relationship (QSAR) analysis. This problem has been addressed with meta-heuristic algorithms such as GA, PSO, and ACO. In this work, two novel hybrid meta-heuristic algorithms based on genetic algorithms and learning automata, Sequential GA and LA (SGALA) and Mixed GA and LA (MGALA), are proposed for QSAR feature selection. The SGALA algorithm exploits the advantages of the genetic algorithm and learning automata sequentially, while the MGALA algorithm exploits them simultaneously. We applied the proposed algorithms to select the minimum possible number of features from three different datasets and observed that the MGALA and SGALA algorithms had the best outcomes, both individually and on average, compared to other feature selection algorithms. Through comparison of the proposed algorithms, we found that the rates of convergence to the optimal result of MGALA and SGALA were better than those of the GA, ACO, PSO, and LA algorithms. Finally, the feature subsets selected by the GA, ACO, PSO, LA, SGALA, and MGALA algorithms were used as input to an LS-SVR model, and the results showed that the LS-SVR model had greater predictive ability with input from the SGALA and MGALA algorithms than with input from any of the other algorithms. The results therefore corroborate that the proposed algorithms are superior to all the other algorithms considered, in both predictive efficiency and rate of convergence.
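
    A bare-bones genetic algorithm for feature selection in the spirit of the two records above (our sketch: the learning-automata hybridization is omitted, sklearn's SVR stands in for LS-SVR, and all rates and sizes are illustrative choices, not the authors' settings):

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    def ga_select(X, y, pop=30, gens=40, p_mut=0.02, seed=0):
        """Evolve boolean genomes (bit = feature kept) toward subsets that
        maximize cross-validated regression score (default R^2)."""
        rng = np.random.default_rng(seed)
        n = X.shape[1]
        genomes = rng.random((pop, n)) < 0.5

        def fitness(g):
            if not g.any():
                return -np.inf
            return cross_val_score(SVR(), X[:, g], y, cv=3).mean()

        for _ in range(gens):
            scores = np.array([fitness(g) for g in genomes])
            order = np.argsort(scores)[::-1]
            parents = genomes[order[:pop // 2]]          # truncation selection
            kids = []
            while len(kids) < pop - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n)
                child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
                child ^= rng.random(n) < p_mut               # bit-flip mutation
                kids.append(child)
            genomes = np.vstack([parents, kids])
        best = genomes[np.argmax([fitness(g) for g in genomes])]
        return np.flatnonzero(best)   # indices of the selected features
    ```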

  19. Computing the modal mass from the state space model in combined experimental-operational modal analysis

    NASA Astrophysics Data System (ADS)

    Cara, Javier

    2016-05-01

    Modal parameters comprise natural frequencies, damping ratios, modal vectors, and modal masses. In a theoretical framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations needed to compute the modal parameters from the state space model when both input and output data are available (as in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations for natural frequencies, damping ratios, and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not received much attention in the technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
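
    For the well-known part of the computation, a minimal sketch (ours): the eigenvalues of the identified state matrix A give the natural frequencies and damping ratios; the modal-mass equation derived in the paper additionally involves the input and output matrices and is not reproduced here.

    ```python
    import numpy as np

    def modal_parameters(A, dt=None):
        """Natural frequencies [Hz] and damping ratios from a state matrix;
        pass dt for a discrete-time model to map eigenvalues z -> s."""
        lam = np.linalg.eigvals(A)
        if dt is not None:
            lam = np.log(lam) / dt
        lam = lam[np.imag(lam) > 0]          # keep one of each conjugate pair
        wn = np.abs(lam)                     # natural frequencies [rad/s]
        zeta = -np.real(lam) / wn            # damping ratios
        return wn / (2 * np.pi), zeta
    ```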

  20. EnviroNET: On-line information for LDEF

    NASA Technical Reports Server (NTRS)

    Lauriente, Michael

    1993-01-01

    EnviroNET is an on-line, free-form database intended to provide a centralized repository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day - every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government research facilities, industry, universities, and the European Space Agency. The models accept parameter input from the user, then calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, magnetic fields, and the ionosphere. A user-friendly, informative interface is standard for all the models and includes a pop-up help window with information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to do 'What if...' scenarios. A proposed plan for developing a repository of information from the Long Duration Exposure Facility (LDEF) for a user group is presented.

  1. Self-organizing map (SOM) of space acceleration measurement system (SAMS) data.

    PubMed

    Sinha, A; Smith, A D

    1999-01-01

    In this paper, space acceleration measurement system (SAMS) data have been classified using self-organizing map (SOM) networks without any supervision; i.e., no a priori knowledge is assumed regarding input patterns belonging to a certain class. Input patterns are created on the basis of power spectral densities of SAMS data. Results for SAMS data from the STS-50 and STS-57 missions are presented. The following issues are discussed in detail: the impact of the number of neurons, global ordering of SOM weight vectors, effectiveness of a SOM in data classification, and the effects of shifting time windows in the generation of input patterns. The concept of a 'cascade of SOM networks' is also developed and tested. It has been found that a SOM network can successfully classify SAMS data obtained during the STS-50 and STS-57 missions.

  2. Self-organizing map (SOM) of space acceleration measurement system (SAMS) data

    NASA Technical Reports Server (NTRS)

    Sinha, A.; Smith, A. D.

    1999-01-01

    In this paper, space acceleration measurement system (SAMS) data have been classified using self-organizing map (SOM) networks without any supervision; i.e., no a priori knowledge is assumed regarding input patterns belonging to a certain class. Input patterns are created on the basis of power spectral densities of SAMS data. Results for SAMS data from the STS-50 and STS-57 missions are presented. The following issues are discussed in detail: the impact of the number of neurons, global ordering of SOM weight vectors, effectiveness of a SOM in data classification, and the effects of shifting time windows in the generation of input patterns. The concept of a 'cascade of SOM networks' is also developed and tested. It has been found that a SOM network can successfully classify SAMS data obtained during the STS-50 and STS-57 missions.
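
    A compact numpy SOM in the spirit of these two records (our sketch, not the authors' code; the grid size, learning rate, and neighbourhood schedule are illustrative, and the input patterns would be PSD vectors of the SAMS records):

    ```python
    import numpy as np

    def train_som(patterns, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        """Unsupervised SOM training: pull the winning unit and its grid
        neighbourhood toward each pattern, shrinking both rate and radius."""
        rng = np.random.default_rng(seed)
        w = rng.random((rows, cols, patterns.shape[1]))
        gy, gx = np.mgrid[0:rows, 0:cols]
        n_steps, t = epochs * len(patterns), 0
        for _ in range(epochs):
            for x in patterns[rng.permutation(len(patterns))]:
                d = np.linalg.norm(w - x, axis=2)
                by, bx = np.unravel_index(np.argmin(d), d.shape)   # best unit
                frac = t / n_steps
                lr = lr0 * (1 - frac)
                sigma = sigma0 * (1 - frac) + 1e-3
                h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
                w += lr * h[..., None] * (x - w)   # neighbourhood update
                t += 1
        return w
    ```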

  3. Asynchronous transfer mode distribution network by use of an optoelectronic VLSI switching chip.

    PubMed

    Lentine, A L; Reiley, D J; Novotny, R A; Morrison, R L; Sasian, J M; Beckman, M G; Buchholz, D B; Hinterlong, S J; Cloonan, T J; Richards, G W; McCormick, F B

    1997-03-10

    We describe a new optoelectronic switching system demonstration that implements part of the distribution fabric for a large asynchronous transfer mode (ATM) switch. The system uses a single optoelectronic VLSI modulator-based switching chip with more than 4000 optical inputs and outputs. The optical system images the input fibers from a two-dimensional fiber bundle onto this chip. A new optomechanical design allows the system to be mounted in a standard electronic equipment frame. A large section of the switch was operated as a 208-Mbit/s time-multiplexed space switch, which can serve as part of an ATM switch with an appropriate out-of-band controller. A larger section, with 896 input light beams and 256 output beams, was operated at 160 Mbit/s as a slowly reconfigurable space switch.

  4. Average BER analysis of SCM-based free-space optical systems by considering the effect of IM3 with OSSB signals under turbulence channels.

    PubMed

    Lim, Wansu; Cho, Tae-Sik; Yun, Changho; Kim, Kiseon

    2009-11-09

    In this paper, we derive the average bit error rate (BER) of subcarrier multiplexing (SCM)-based free space optics (FSO) systems using a dual-drive Mach-Zehnder modulator (DD-MZM) for optical single-sideband (OSSB) signals under atmospheric turbulence channels. In particular, we consider third-order intermodulation (IM3), a significant performance degradation factor, in the case of systems with high input signal power. The derived average BER, as a function of the input signal power and the scintillation index, is employed to determine the optimum number of SCM users when designing FSO systems. For instance, when the number of users doubles, the input signal power decreases by almost 2 dBm under the log-normal and exponential turbulence channels at a given average BER.

  5. The Cube and the Poppy Flower: Participatory Approaches for Designing Technology-Enhanced Learning Spaces

    ERIC Educational Resources Information Center

    Casanova, Diogo; Mitchell, Paul

    2017-01-01

    This paper presents an alternative method for learning space design that is driven by user input. An exploratory study was undertaken at an English university with the aim of redesigning technology-enhanced learning spaces. Two provocative concepts were presented through participatory design workshops during which students and teachers reflected…

  6. Limitations in 4-Year-Old Children's Sensitivity to the Spacing among Facial Features

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Thomson, Kendra

    2008-01-01

    Four-year-olds' sensitivity to differences among faces in the spacing of features was tested under 4 task conditions: judging distinctiveness when the external contour was visible and when it was occluded, simultaneous match-to-sample, and recognizing the face of a friend. In each task, the foil differed only in the spacing of features, and…

  7. The Research of Feature Extraction Method of Liver Pathological Image Based on Multispatial Mapping and Statistical Properties

    PubMed Central

    Liu, Huiling; Xia, Bingbing; Yi, Dehui

    2016-01-01

    We propose a new feature extraction method for liver pathological images based on multispatial mapping and statistical properties. For liver pathological images with hematoxylin-eosin staining, the R and B channels reflect the sensitivity of the images better, while the entropy space and the Local Binary Pattern (LBP) space reflect the texture features better. To obtain more comprehensive information, we map liver pathological images to the entropy space, the LBP space, the R space, and the B space. Traditional Higher Order Local Autocorrelation Coefficients (HLAC) cannot reflect the overall information of the image, so we propose an average-corrected HLAC feature. We calculate the statistical properties and the average gray value of the pathological image and then update each pixel value to the absolute value of the difference between its gray value and the average gray value, which makes the feature more sensitive to gray-value changes in pathological images. Lastly, the HLAC templates are used to calculate the features of the updated image. The experimental results show that the improved multispatial-mapping features have better classification performance for liver cancer. PMID:27022407
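
    The average-correction step, and the kind of shifted-product sums that HLAC masks compute, can be sketched as follows (our illustration; only two masks of the usual HLAC set are shown):

    ```python
    import numpy as np

    def average_correction(img):
        """Replace each pixel by its absolute deviation from the mean grey value."""
        return np.abs(img.astype(float) - img.mean())

    def hlac_first_order(img):
        """Two first-order autocorrelation features (horizontal and vertical
        neighbour products) computed on the average-corrected image."""
        f = average_correction(img)
        return np.array([(f[:, :-1] * f[:, 1:]).sum(),
                         (f[:-1, :] * f[1:, :]).sum()])
    ```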

  8. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework for the methodology, validate its accuracy, and demonstrate its efficiency with numerical experiments.

  9. Automatic archaeological feature extraction from satellite VHR images

    NASA Astrophysics Data System (ADS)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach on a variable scale, able to satisfy both intra-site (excavation) and inter-site (survey, environmental research) requirements. The increased availability of high-resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for the reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High-resolution remote sensing data, especially panchromatic, are an important input for the analysis of various types of image characteristics; they play an important role in visual systems for recognition and interpretation of given data. The proposed methods rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey-tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element either is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the image structures sought. Two other feature extraction techniques, eCognition and the ENVI SW module, were used in order to compare the results. These techniques were applied to different archaeological sites in Turkmenistan (Nisa) and Iraq (Babylon); a further change detection analysis was applied to the Babylon site using two HR images acquired before and after the second Gulf War. The outputs differed, since the operative scale of the sensed data determines the final result of the elaboration and the quality of the extracted information, and since each technique was sensitive to specific shapes in each input image; we mapped linear and nonlinear objects, updated the archaeological cartography, and performed automatic change detection analysis for the Babylon site. The discussion of these techniques aims to provide the archaeological team with new instruments for the orientation and planning of remote sensing applications.
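
    For a binary layer, the probing operation described above reduces to the two basic morphological operators (a scipy sketch of ours; the structuring element shown is an arbitrary example, not one from the paper):

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation, binary_erosion

    def probe(mask, selem):
        """Slide the structuring element over a binary mask: erosion keeps
        positions where the element is fully included in the set, dilation
        keeps positions where it has a nonempty intersection with it."""
        fits = binary_erosion(mask, structure=selem)
        hits = binary_dilation(mask, structure=selem)
        return fits, hits

    # e.g. an elongated element tuned to thin east-west linear traces
    wall_element = np.ones((3, 7), dtype=bool)
    ```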

  10. Selective skin sensitivity changes and sensory reweighting following short-duration space flight.

    PubMed

    Lowrey, Catherine R; Perry, Stephen D; Strzalkowski, Nicholas D J; Williams, David R; Wood, Scott J; Bent, Leah R

    2014-03-15

    Skin sensory input from the foot soles is coupled with vestibular input to facilitate body orientation in a gravitational environment. Anecdotal observations suggest that foot sole skin becomes hypersensitive following space flight. The actual level of skin sensitivity, and its impact on the postural disequilibrium observed after space flight, have not been documented. Skin sensitivity of astronauts (n = 11) was measured as vibration perception at the great toe, fifth metatarsal, and heel. Frequencies targeted four classes of receptors: 3 and 25 Hz for slow-adapting (SA) receptors and 60 and 250 Hz for fast-adapting (FA) receptors. Data were collected pre- and post-space flight. We hypothesized that skin sensitivity would increase post-space flight and correlate with balance measures. Decreased skin sensitivity was found on landing day at 3 and 25 Hz at the great toe. Hypersensitivity was found for a subset of astronauts (n = 6), with significantly increased sensitivity to 250 Hz at the heel. This subset displayed a greater reduction in computerized dynamic posturography (CDP) equilibrium (EQ) scores (-54%) on landing vs. non-hypersensitive participants (-11%). The observed hyposensitivity of SA (pressure) receptors may indicate a strategy to reduce pressure input during periods of unloading. Hypersensitivity of FAs, coupled with reduced EQ scores, may reflect targeted sensory reweighting. Altered gravito-inertial environments reduce the role of vestibular function in balance control, which may trigger increased weighting of FAs (which signal foot contact and slips). Understanding modulations of skin sensitivity has translational implications for mitigating postural disequilibrium following space flight and for on-Earth preventative strategies against imbalance in older adults.

  11. Jupiter System Data Analysis Program: Mechanisms, Manifestation, and Implications of Cryomagmatism on Europa

    NASA Technical Reports Server (NTRS)

    Fagents, Sarah A.

    2003-01-01

    The objectives of the work completed under NASA Grant NAG5-8898 were (i) to document and characterize the low-albedo diffuse surfaces associated with triple bands and lenticulae, (ii) to determine their mechanisms of formation, and (iii) to assess the implications of these features for the resurfacing (in space and time) of Europa and the nature of the Europan interior. The approach involved a combination of processing and analysis of Solid State Imaging data returned by the Galileo spacecraft during the primary and extended mission phases, together with numerical modeling of the physical processes interpreted to have produced the observed features. We have modeled the formation of the halos of Europan triple bands and lenticulae by two processes: (i) explosive venting of cryoclastic material from a liquid layer in the Europan interior, and (ii) lag deposit formation by the thermal influence of subsurface cryomagmatic intrusions. We favor the latter hypothesis for explaining these features, and further suggest that a liquid water or brine intrusion is required to provide sufficient lateral heating of surface ice to explain the 25 km size of the largest features. (Solid ice diapirs, even under the most favorable conditions, become thermally exhausted before they heat significant lateral distances.) We argue that water circulating in open fractures, or repeated cryomagmatic 'diking' events, would provide sufficient thermal input to produce the observed features. Thus our work argues for the existence of a liquid beneath Europa's surface. Our results might most easily be explained by the presence of a continuous liquid layer (the putative Europan ocean); this would concur with the findings of the Galileo magnetometer team. However, we cannot rule out the possibility that discrete liquid pockets provide injections of fluid closer to the surface.

  12. Experimental evaluation of a unique radiometer for use in solar simulation testing

    NASA Technical Reports Server (NTRS)

    Richmond, R. G.

    1978-01-01

    The vane radiometer is designed to operate over the range 0-1 solar constant and is capable of withstanding temperatures over the range -200 to +175 C. Two of these radiometers, for use in the Johnson Space Center's largest space simulator, have been evaluated for: (1) thermal sensitivity with no solar input, (2) linearity as a function of solar simulation input, and (3) output drift as a function of time. The minimum sensitivity was measured to be approximately 25.5 mV/solar constant. An unusual effect in the pressure range 760 to 1.0 torr is discussed.

  13. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.

  14. A multi-hypothesis tracker for clicking whales.

    PubMed

    Baggenstoss, Paul M

    2015-05-01

    This paper describes a tracker specially designed to track clicking beaked whales using widely spaced bottom-mounted hydrophones, although it can be adapted to different species and sensors. The input to the tracker is a sequence of static localization solutions obtained using time difference of arrival information at widely spaced hydrophones. To effectively handle input localizations with high ambiguity, the tracker is based on multi-hypothesis tracker concepts, so it considers all potential association hypotheses and keeps a large number of potential tracks in memory. The method is demonstrated on actual data and shown to successfully track multiple beaked whales at depth.

  15. Simulation of interaction between ground water in an alluvial aquifer and surface water in a large braided river

    USGS Publications Warehouse

    Leake, S.A.; Lilly, M.R.

    1995-01-01

    The Fairbanks, Alaska, area has many contaminated sites in a shallow alluvial aquifer. A ground-water flow model is being developed using the MODFLOW finite-difference ground-water flow model program with the River Package. The modeled area is discretized in the horizontal dimensions into 118 rows and 158 columns of approximately 150-meter square cells. The fine grid spacing has the advantage of providing needed detail at the contaminated sites and surface-water features that bound the aquifer. However, the fine spacing of cells adds difficulty to simulating interaction between the aquifer and the large, braided Tanana River. In particular, the assignment of a river head is difficult if cells are much smaller than the river width. This was solved by developing a procedure for interpolating and extrapolating river head using a river distance function. Another problem is that future transient simulations would require excessive numbers of input records using the current version of the River Package. The proposed solution to this problem is to modify the River Package to linearly interpolate river head for time steps within each stress period, thereby reducing the number of stress periods required.
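
    The head-interpolation idea lends itself to a short sketch (ours, independent of the MODFLOW source): stage is known at a few stations located by distance along the channel, and each river cell's head is interpolated, or linearly extrapolated beyond the end stations, from its own channel distance. Station distances are assumed distinct, with at least two stations.

    ```python
    import numpy as np

    def river_head(cell_dist, station_dist, station_head):
        """Head at each river cell from its distance along the channel."""
        order = np.argsort(station_dist)
        d = np.asarray(station_dist, dtype=float)[order]
        h = np.asarray(station_head, dtype=float)[order]
        # linear interpolation inside the station range ...
        head = np.interp(cell_dist, d, h)
        # ... and linear extrapolation from the end gradients outside it
        lo, hi = cell_dist < d[0], cell_dist > d[-1]
        head = np.where(lo, h[0] + (cell_dist - d[0]) * (h[1] - h[0]) / (d[1] - d[0]), head)
        head = np.where(hi, h[-1] + (cell_dist - d[-1]) * (h[-1] - h[-2]) / (d[-1] - d[-2]), head)
        return head
    ```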

  16. Modeling Exoplanetary Haze and Cloud Effects for Transmission Spectroscopy in the TRAPPIST-1 System

    NASA Astrophysics Data System (ADS)

    Moran, Sarah E.; Horst, Sarah M.; Lewis, Nikole K.; Batalha, Natasha E.; de Wit, Julien

    2018-01-01

    We present theoretical transmission spectra of the planets TRAPPIST-1d, e, f, and g using a version of the CaltecH Inverse ModEling and Retrieval Algorithms (CHIMERA) atmospheric modeling code. We use particle size, aerosol production rate, and aerosol composition inputs from recent laboratory experiments relevant to the TRAPPIST-1 system to constrain cloud and haze behavior and their effects on transmission spectra. We explore these cloud and haze cases for a variety of theoretical atmospheric compositions, including hydrogen-, nitrogen-, and carbon dioxide-dominated atmospheres. We then demonstrate the capacity of physically motivated, laboratory-supported clouds and hazes to obscure spectral features at wavelengths and resolutions relevant to instruments on the Hubble Space Telescope and the upcoming James Webb Space Telescope. Lastly, with laboratory-based constraints on haze production rates for terrestrial exoplanets, we constrain possible bulk atmospheric compositions of the TRAPPIST-1 planets based on current observations. We show that continued collection of optical data, beyond the supported wavelength range of the James Webb Space Telescope, is necessary to explore the full effect of hazes on the transmission spectra of exoplanetary atmospheres like those in the TRAPPIST-1 system.

  17. Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel.

    PubMed

    Selvaprabhu, Poongundran; Chinnadurai, Sunil; Li, Jun; Lee, Moon Ho

    2017-08-17

    In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K-user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes.

  18. Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel

    PubMed Central

    Li, Jun; Lee, Moon Ho

    2017-01-01

    In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K-user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes. PMID:28817071

  19. HAL/S programmer's guide. [for space shuttle project

    NASA Technical Reports Server (NTRS)

    Newbold, P. M.; Hotz, R. L.

    1974-01-01

    The structure and symbology of the HAL/S programming language are described; this language is to be used in the flight software for the space shuttle project. Data declarations, input/output statements, and replace statements are also discussed.

  20. Methodology for the AutoRegressive Planet Search (ARPS) Project

    NASA Astrophysics Data System (ADS)

    Feigelson, Eric; Caceres, Gabriel; ARPS Collaboration

    2018-01-01

    The detection of the periodic signals of transiting exoplanets is often impeded by the presence of aperiodic photometric variations. This variability is intrinsic to the host star in space-based observations (typically arising from magnetic activity) and comes from observational conditions in ground-based observations. The most common statistical procedures to remove stellar variations are nonparametric, such as wavelet decomposition or Gaussian process regression. However, many stars display variability with autoregressive properties, wherein later flux values are correlated with previous ones. Provided the time series is evenly spaced, parametric autoregressive models can prove very effective. Here we present the methodology of the Autoregressive Planet Search (ARPS) project, which uses Autoregressive Integrated Moving Average (ARIMA) models to treat a wide variety of stochastic short-memory processes, as well as nonstationarity. Additionally, we introduce a planet-search algorithm to detect periodic transits in the time-series residuals after application of the ARIMA models. Our matched-filter algorithm, the Transit Comb Filter (TCF), replaces the traditional box-fitting step. We construct a periodogram based on the TCF to concentrate the signal of these periodic spikes. Various features of the original light curves, the ARIMA fits, the TCF periodograms, and the light curves folded at peaks of the TCF periodogram can then be collected to provide constraints for planet detection. These features provide input to a multivariate classifier when a training set is available. The ARPS procedure has been applied to NASA's Kepler mission observations of ~200,000 stars (Caceres, Dissertation Talk, this meeting) and will be applied in the future to other datasets.
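
    The first two steps of the recipe can be sketched as follows (our illustration: statsmodels' ARIMA provides the fit, and a simple phase-folding score stands in for the actual Transit Comb Filter, which is more elaborate; the model order, cadence handling, and bin count are arbitrary choices):

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def arima_residuals(flux, order=(2, 0, 1)):
        """Fit an ARIMA model to an evenly spaced light curve and return
        the residual series, where transits should remain as spikes."""
        return ARIMA(flux, order=order).fit().resid

    def comb_power(resid, periods, cadence):
        """Fold the residuals at each trial period and score the depth of
        the lowest phase bin against the overall residual scatter."""
        powers = []
        t = np.arange(len(resid)) * cadence
        for p in periods:
            phase = (t % p) / p
            bins = np.digitize(phase, np.linspace(0, 1, 50))
            means = np.array([resid[bins == b].mean() for b in np.unique(bins)])
            powers.append((means.mean() - means.min()) / resid.std())
        return np.array(powers)
    ```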
