Sample records for "proposed method represents"

  1. 3D Markov Process for Traffic Flow Prediction in Real-Time.

    PubMed

    Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi

    2016-01-25

    Recently, the correct estimation of traffic flow has begun to be considered an essential component in intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time-series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially and temporally adjacent traffic states; and (2) the relationship between adjacent roads in the spatiotemporal domain is represented by cliques in a Markov random field (MRF), whose clique parameters are obtained by example-based learning. To assess the validity of the proposed method, it is tested on expressway traffic data provided by the Korean Expressway Corporation, and its performance is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and that this accuracy can be improved further.

  2. 3D Markov Process for Traffic Flow Prediction in Real-Time

    PubMed Central

    Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi

    2016-01-01

    Recently, the correct estimation of traffic flow has begun to be considered an essential component in intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time-series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially and temporally adjacent traffic states; and (2) the relationship between adjacent roads in the spatiotemporal domain is represented by cliques in a Markov random field (MRF), whose clique parameters are obtained by example-based learning. To assess the validity of the proposed method, it is tested on expressway traffic data provided by the Korean Expressway Corporation, and its performance is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and that this accuracy can be improved further. PMID:26821025
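
    The paper's full 3D MRF model is beyond a short sketch, but its simplest temporal ingredient, a first-order Markov chain over discretized traffic states, can be illustrated as follows. The state labels, the observed sequence, and the Laplace smoothing below are hypothetical illustration, not taken from the paper:

```python
import numpy as np

# Discretized traffic states for one road segment (hypothetical labels).
STATES = ["free", "slow", "congested"]

def estimate_transitions(seq, n_states=3):
    """Estimate a transition matrix from an observed state sequence,
    with add-one (Laplace) smoothing so unseen transitions keep mass."""
    counts = np.ones((n_states, n_states))
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical sensor history: indices into STATES.
seq = [0, 0, 1, 2, 2, 1, 0, 0, 1, 2, 2, 2, 1, 0]
P = estimate_transitions(seq)
current = 2                                  # currently congested
pred = STATES[int(np.argmax(P[current]))]    # most likely next state
print(pred)  # congested
```

    The paper's contribution is precisely what this sketch lacks: coupling many such per-road chains through spatial cliques so that neighboring roads inform each other's predictions.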

  3. A unified tensor level set for image segmentation.

    PubMed

    Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2010-06-01

    This paper presents a new region-based unified tensor level set model for image segmentation. The model introduces a third-order tensor to comprehensively depict the features of pixels, e.g., the gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, we generalize the representative region-based level set method from scalar to tensor. The proposed model has four main advantages over the traditional representative method. First, by involving a Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, by considering local geometrical features such as orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at boundary locations. Third, owing to the unified tensor representation of pixels, the model segments images more accurately and naturally. Fourth, based on the weighted distance definition, the model can cope with data varying from scalar to vector and on to high-order tensor. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that it is superior to the available representative region-based level set method.

  4. Hesitant Fuzzy Linguistic Preference Utility Set and Its Application in Selection of Fire Rescue Plans

    PubMed Central

    Si, Guangsen; Xu, Zeshui

    2018-01-01

    The hesitant fuzzy linguistic term set provides an effective tool to represent uncertain decision information. However, the semantics corresponding to the linguistic terms in such a set cannot accurately reflect the decision-makers’ subjective cognition. In general, different decision-makers’ sensitivities towards the semantics differ. Such sensitivities can be represented by the cumulative prospect theory value function. Inspired by this, we propose a linguistic scale function to transform the semantics corresponding to linguistic terms into linguistic preference values. Furthermore, we propose the hesitant fuzzy linguistic preference utility set, based on which decision-makers can flexibly express their distinct semantics and obtain decision results that are consistent with their cognition. For calculations and comparisons over hesitant fuzzy linguistic preference utility sets, we introduce some distance measures and comparison laws. Afterwards, to apply hesitant fuzzy linguistic preference utility sets in emergency management, we develop a method to obtain objective weights of attributes and then propose a hesitant fuzzy linguistic preference utility-TOPSIS method to select the best fire rescue plan. Finally, the validity of the proposed method is verified by comparisons with two other representative methods: the hesitant fuzzy linguistic-TOPSIS method and the hesitant fuzzy linguistic-VIKOR method. PMID:29614019

  5. Hesitant Fuzzy Linguistic Preference Utility Set and Its Application in Selection of Fire Rescue Plans.

    PubMed

    Liao, Huchang; Si, Guangsen; Xu, Zeshui; Fujita, Hamido

    2018-04-03

    The hesitant fuzzy linguistic term set provides an effective tool to represent uncertain decision information. However, the semantics corresponding to the linguistic terms in such a set cannot accurately reflect the decision-makers' subjective cognition. In general, different decision-makers' sensitivities towards the semantics differ. Such sensitivities can be represented by the cumulative prospect theory value function. Inspired by this, we propose a linguistic scale function to transform the semantics corresponding to linguistic terms into linguistic preference values. Furthermore, we propose the hesitant fuzzy linguistic preference utility set, based on which decision-makers can flexibly express their distinct semantics and obtain decision results that are consistent with their cognition. For calculations and comparisons over hesitant fuzzy linguistic preference utility sets, we introduce some distance measures and comparison laws. Afterwards, to apply hesitant fuzzy linguistic preference utility sets in emergency management, we develop a method to obtain objective weights of attributes and then propose a hesitant fuzzy linguistic preference utility-TOPSIS method to select the best fire rescue plan. Finally, the validity of the proposed method is verified by comparisons with two other representative methods: the hesitant fuzzy linguistic-TOPSIS method and the hesitant fuzzy linguistic-VIKOR method.

  6. Rotation invariant deep binary hashing for fast image retrieval

    NASA Astrophysics Data System (ADS)

    Dai, Lai; Liu, Jianming; Jiang, Aiwen

    2017-07-01

    In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised, rotation-invariant, compact, discriminative binary descriptors obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer that represents latent concepts dominating the class labels. A loss function is proposed to minimize the difference between the binary descriptors of a reference image and of its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.

  7. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    PubMed

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been proposed previously. To implement the method smoothly, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  8. Proposal of Constraints Analysis Method Based on Network Model for Task Planning

    NASA Astrophysics Data System (ADS)

    Tomiyama, Tomoe; Sato, Tatsuhiro; Morita, Toyohisa; Sasaki, Toshiro

    Deregulation has been accelerating several activities toward reengineering business processes, such as railway through service and modal shift in logistics. To make those activities successful, business entities have to define new business rules or know-how (we call them ‘constraints’). According to the new constraints, they need to manage business resources such as instruments, materials, workers and so on. In this paper, we propose a constraint analysis method to define constraints for task planning of the new business processes. To visualize each constraint's influence on planning, we propose a network model which represents allocation relations between tasks and resources. The network can also represent task ordering relations and resource grouping relations. The proposed method formalizes the manual definition of constraints as repeatedly checking the network structure and finding conflicts between constraints. Application to crew scheduling problems shows that the method can adequately represent and define the constraints of some task planning problems with the following fundamental features: (1) specifying a work pattern for some resources, (2) restricting the number of resources for some works, (3) requiring multiple resources for some works, (4) prior allocation of some resources to some works, and (5) considering the workload balance between resources.

  9. An approach to define semantics for BPM systems interoperability

    NASA Astrophysics Data System (ADS)

    Rico, Mariela; Caliusco, María Laura; Chiotti, Omar; Rosa Galli, María

    2015-04-01

    This article proposes defining semantics for Business Process Management (BPM) systems interoperability through an ontology of the Electronic Business Documents (EBD) used to interchange the information required to perform cross-organizational processes. The generated semantic model allows aligning each enterprise's business processes to support cross-organizational processes by matching the business ontology of each business partner with the EBD ontology. The result is a flexible software architecture that allows dynamically defining cross-organizational business processes by reusing the EBD ontology. For developing the semantic model, a method is presented that is based on a strategy for discovering entity features whose interpretation depends on the context, and for representing them to enrich the ontology. The proposed method complements ontology learning techniques, which cannot infer semantic features that are not represented in the data sources. To improve the representation of these entity features, the method proposes using widely accepted ontologies for representing time entities and relations, physical quantities, measurement units, official country names, and currencies and funds, among others. When ontology reuse is not possible, the method identifies whether the feature is simple or complex and defines a strategy to be followed. An empirical validation of the approach has been performed through a case study.

  10. Stacked Multilayer Self-Organizing Map for Background Modeling.

    PubMed

    Zhao, Zhenjie; Zhang, Xuebo; Fang, Yongchun

    2015-09-01

    In this paper, a new background modeling method called the stacked multilayer self-organizing map background model (SMSOM-BM) is proposed, which offers several merits such as strong representative ability for complex scenarios and ease of use. In order to enhance the representative ability of the background model and learn its parameters automatically, the recently developed idea of representation learning (deep learning) is employed to extend the existing single-layer self-organizing map background model to a multilayer one (namely, the proposed SMSOM-BM). As a consequence, the SMSOM-BM gains strong representative ability for learning background models of challenging scenarios and automatic determination of most network parameters. More specifically, every pixel is modeled by an SMSOM, and spatial consistency is considered at each layer. By introducing a novel over-layer filtering process, we can train the background model layer by layer in an efficient manner. Furthermore, for real-time performance, we have implemented the proposed method on the NVIDIA CUDA platform. Comparative experimental results show the superior performance of the proposed approach.

  11. Coordinates and intervals in graph-based reference genomes.

    PubMed

    Rand, Knut D; Grytten, Ivar; Nederbragt, Alexander J; Storvik, Geir O; Glad, Ingrid K; Sandve, Geir K

    2017-05-18

    It has been proposed that future reference genomes should be graph structures in order to better represent the sequence diversity present in a species. However, there is currently no standard method to represent genomic intervals, such as the positions of genes or transcription factor binding sites, on graph-based reference genomes. We formalize offset-based coordinate systems on graph-based reference genomes and introduce methods for representing intervals on these reference structures. We show the advantage of our methods by representing genes on a graph-based representation of the newest assembly of the human genome (GRCh38) and its alternative loci for regions that are highly variable. More complex reference genomes, containing alternative loci, require methods to represent genomic data on these structures. Our proposed notation for genomic intervals makes it possible to fully utilize the alternative loci of the GRCh38 assembly and potential future graph-based reference genomes. We have made a Python package for representing such intervals on offset-based coordinate systems, available at https://github.com/uio-cels/offsetbasedgraph. An interactive web tool using this Python package to visualize genes on a graph created from GRCh38 is available at https://github.com/uio-cels/genomicgraphcoords.
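
    The offset-based coordinate idea can be sketched minimally as follows; this is a hypothetical model for illustration, not the actual offsetbasedgraph API. A position is a (node, offset) pair, and an interval stores its start, its end, and the node path it traverses:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Position:
    node_id: str
    offset: int            # 0-based offset into the node's sequence

@dataclass
class Interval:
    start: Position
    end: Position
    node_ids: list         # nodes traversed, in order

    def length(self, node_sizes):
        """Bases covered, given a dict mapping node id -> node length."""
        total = sum(node_sizes[n] for n in self.node_ids)
        # Trim the prefix of the first node and the suffix of the last node.
        return total - self.start.offset - (node_sizes[self.node_ids[-1]] - self.end.offset)

# Toy graph: reference nodes A -> B, plus an alternative locus B2 parallel to B.
node_sizes = {"A": 100, "B": 50, "B2": 60}
gene = Interval(Position("A", 90), Position("B2", 10), ["A", "B2"])
print(gene.length(node_sizes))  # 20: last 10 bases of A plus first 10 of B2
```

    The point of the notation is visible even in this toy: the same interval type can follow either the primary path (through B) or an alternative locus (through B2) without a separate linear coordinate system per sequence.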

  12. Appearance-based representative samples refining method for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Wen, Jiajun; Chen, Yan

    2012-07-01

    Sparse representation can deal with the lack-of-samples problem because it utilizes all the training samples. However, the discrimination ability degrades when more training samples are used for the representation. We propose a novel appearance-based palmprint recognition method that aims at a compromise between discrimination ability and the lack-of-samples problem, so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to the contributions of the samples. We then further refine the training samples by an iterative procedure, each time excluding the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and a two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we also explore the principles for setting the key parameters of the proposed algorithm, which facilitates obtaining high recognition accuracy.
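
    The iterative refinement loop can be sketched as follows. This is a hypothetical reading of the idea: the test sample is represented by least squares over the remaining training samples, and the contribution measure (coefficient magnitude times column norm) is an assumption for illustration, not the paper's exact formulation:

```python
import numpy as np

def refine_representatives(X, y, n_keep):
    """X: (d, n) training samples as columns; y: (d,) test sample.
    Repeatedly drop the training sample contributing least to representing y."""
    idx = list(range(X.shape[1]))
    while len(idx) > n_keep:
        # Least-squares coefficients for representing y with the remaining samples.
        c, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        # Assumed contribution measure: |coefficient| times column norm.
        contrib = [abs(c[j]) * np.linalg.norm(X[:, idx[j]]) for j in range(len(idx))]
        idx.pop(int(np.argmin(contrib)))   # exclude the least-contributing sample
    return idx

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 6))
y = 2.0 * X[:, 0] + 1.5 * X[:, 3]          # test sample built from samples 0 and 3
kept = refine_representatives(X, y, n_keep=2)
print(sorted(kept))  # [0, 3]
```

    Because the synthetic test sample is an exact combination of columns 0 and 3, the other columns receive near-zero coefficients and are pruned first, which is the behavior the refinement relies on.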

  13. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a presented two-step updating rule for the design point. This part finishes after a small number of samples have been generated. Then RSM starts to work using the Bucher experimental design, with the last design point and a presented effective length as the center point and radius of Bucher's approach, respectively. Illustrative numerical examples show the simplicity and efficiency of the proposed algorithm and the effectiveness of the presented rules.
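
    The importance-sampling ingredient can be sketched in isolation: a generic estimator that samples around a known design point and reweights by the density ratio. The limit state, dimension, and sample count below are illustrative, and the paper's two-step design-point updating rule and RSM stage are omitted:

```python
import numpy as np

def importance_sampling_pf(g, x_star, n=20000, seed=1):
    """Estimate P[g(X) < 0] for X ~ N(0, I) by sampling from a standard
    normal shifted to the design point x_star and reweighting."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n, len(x_star))) + x_star
    # Density ratio phi(z) / h(z), computed in log space for stability.
    log_w = -0.5 * np.sum(z**2, axis=1) + 0.5 * np.sum((z - x_star)**2, axis=1)
    return float(np.mean(np.exp(log_w) * (g(z) < 0)))

# Linear limit state g(x) = beta - x1: exact Pf = Phi(-beta) ~ 1.35e-3 for beta = 3.
beta = 3.0
g = lambda z: beta - z[:, 0]
pf = importance_sampling_pf(g, x_star=np.array([beta, 0.0]))
print(round(pf, 5))
```

    Centering the sampling density at the design point is what makes this efficient: roughly half the samples fall in the failure region, whereas crude MCS would need on the order of a million samples to see a comparable number of failures at this probability level.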

  14. Representing Lumped Markov Chains by Minimal Polynomials over Field GF(q)

    NASA Astrophysics Data System (ADS)

    Zakharov, V. M.; Shalagin, S. V.; Eminov, B. F.

    2018-05-01

    A method is proposed to represent lumped Markov chains by minimal polynomials over a finite field. The accuracy of representing lumped stochastic matrices (the law of the lumped Markov chain) depends linearly on the minimal degree of the polynomials over the field GF(q). The method allows constructing realizations of lumped Markov chains on linear shift registers with a pre-defined “linear complexity”.

  15. Construction Method of Display Proposal for Commodities in Sales Promotion by Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Yumoto, Masaki

    In a sales promotion task, a wholesaler prepares and presents a display proposal for commodities in order to negotiate with retailers' buyers about which commodities they should sell. To automate sales promotion tasks, the proposal has to be constructed for the targeted retailer's buyer. However, it is difficult to construct a proposal suitable for the target retail store because of the vast number of possible combinations of commodities. This paper proposes a construction method based on a genetic algorithm (GA). The proposed method represents initial display proposals for commodities as genes, improves them through GA according to an evaluation value, and rearranges the one with the highest evaluation value according to the commodity classification. Through a practical experiment, we confirm that the display proposal produced by the proposed method is similar to one constructed by a wholesaler.

  16. Intelligent Automatic Classification of True and Counterfeit Notes Based on Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Matsunaga, Shohei; Omatu, Sigeru; Kosaka, Toshohisa

    The purpose of this paper is to classify bank notes as “true” or “counterfeit” faster and more precisely than a conventional method. We note that thin lines appear as continuous lines in images of true notes, while in counterfeit notes they appear as dotted lines, owing to the properties of dot printers or scanner levels. Exploiting these properties, we propose two methods that classify a note as true or counterfeit by checking whether its thin lines are continuous or dotted. First, we use the Fourier transform of the note image to extract features and classify the note using those features. Then we propose a classification method using the wavelet transform in place of the Fourier transform. Finally, some classification results are presented to show the effectiveness of the proposed methods.

  17. A new hierarchical method to find community structure in networks

    NASA Astrophysics Data System (ADS)

    Saoud, Bilal; Moussaoui, Abdelouahab

    2018-04-01

    Community structure is very important for understanding a network, which represents a context. Many community detection methods, such as hierarchical methods, have been proposed. In our study, we propose a new hierarchical method for community detection in networks based on a genetic algorithm. In this method we use a genetic algorithm to split a network into two networks that maximize the modularity. Each new network represents a cluster (community). We then repeat the splitting process until there is one node in each cluster. We use the modularity function to measure the strength of the community structure found by our method, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our method is highly effective at discovering community structure in both computer-generated and real-world network data.
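
    The modularity score that drives each split can be written down directly from Newman's standard definition (the toy graph below is illustrative; the paper's GA search itself is not reproduced):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a partition of an undirected graph.
    A: symmetric adjacency matrix; labels: community label per node."""
    k = A.sum(axis=1)                      # node degrees
    two_m = k.sum()                        # twice the edge count
    Q = 0.0
    for c in set(labels):
        nodes = [i for i, l in enumerate(labels) if l == c]
        e_c = A[np.ix_(nodes, nodes)].sum()    # twice the intra-community edges
        d_c = k[nodes].sum()                   # total degree of the community
        Q += e_c / two_m - (d_c / two_m) ** 2
    return Q

# Two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
good = modularity(A, [0, 0, 0, 1, 1, 1])   # natural split: Q = 5/14, about 0.357
poor = modularity(A, [0, 1, 0, 1, 0, 1])   # scrambled split: negative Q
print(good > poor)  # True
```

    A GA-driven bisection, as in the abstract, would search over binary label vectors to maximize this Q, then recurse on each resulting community.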

  18. Generalization of the Engineering Method to the UNIVERSAL METHOD.

    ERIC Educational Resources Information Center

    Koen, Billy Vaughn

    1987-01-01

    Proposes that there is a universal method for all realms of knowledge. Reviews Descartes's definition of the universal method, the engineering definition, and the philosophical basis for the universal method. Contends that the engineering method best represents the universal method. (ML)

  19. A comparative study of different aspects of manipulating ratio spectra applied for ternary mixtures: Derivative spectrophotometry versus wavelet transform

    NASA Astrophysics Data System (ADS)

    Salem, Hesham; Lotfy, Hayam M.; Hassan, Nagiba Y.; El-Zeiny, Mohamed B.; Saleh, Sarah S.

    2015-01-01

    This work represents a comparative study of different aspects of manipulating ratio spectra, which are: double divisor ratio spectra derivative (DR-DD), area under curve of derivative ratio (DR-AUC) and its novel approach, namely the area under the curve correction method (AUCCM) applied for overlapped spectra; successive derivative of ratio spectra (SDR) and continuous wavelet transform (CWT) methods. The proposed methods represent different aspects of manipulating ratio spectra of the ternary mixture of Ofloxacin (OFX), Prednisolone acetate (PA) and Tetryzoline HCl (TZH) combined in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations, and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.

  20. A comparative study of different aspects of manipulating ratio spectra applied for ternary mixtures: derivative spectrophotometry versus wavelet transform.

    PubMed

    Salem, Hesham; Lotfy, Hayam M; Hassan, Nagiba Y; El-Zeiny, Mohamed B; Saleh, Sarah S

    2015-01-25

    This work represents a comparative study of different aspects of manipulating ratio spectra, which are: double divisor ratio spectra derivative (DR-DD), area under curve of derivative ratio (DR-AUC) and its novel approach, namely the area under the curve correction method (AUCCM) applied for overlapped spectra; successive derivative of ratio spectra (SDR) and continuous wavelet transform (CWT) methods. The proposed methods represent different aspects of manipulating ratio spectra of the ternary mixture of Ofloxacin (OFX), Prednisolone acetate (PA) and Tetryzoline HCl (TZH) combined in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations, and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.

  21. Melancholic depression prediction by identifying representative features in metabolic and microarray profiles with missing values.

    PubMed

    Nie, Zhi; Yang, Tao; Liu, Yashu; Li, Qingyang; Narayan, Vaibhav A; Wittenberg, Gayle; Ye, Jieping

    2015-01-01

    Recent studies have revealed that melancholic depression, one major subtype of depression, is closely associated with the concentration of some metabolites and the biological functions of certain genes and pathways. Meanwhile, recent advances in biotechnologies have allowed us to collect a large amount of genomic data, e.g., metabolites and microarray gene expression. With such a huge amount of information available, one approach that can give us new insights into the fundamental biology underlying melancholic depression is to build disease status prediction models using classification or regression methods. However, the existence of strong empirical correlations, e.g., those exhibited by genes sharing the same biological pathway in microarray profiles, tremendously limits the performance of these methods. Furthermore, the occurrence of missing values, which are ubiquitous in biomedical applications, further complicates the problem. In this paper, we hypothesize that the problem of missing values might in some way benefit from the correlation between the variables, and we propose a method to learn a compressed set of representative features through an adapted version of sparse coding that is capable of identifying correlated variables and addressing the issue of missing values simultaneously. An efficient algorithm is also developed to solve the proposed formulation. We apply the proposed method to metabolic and microarray profiles collected from a group of subjects consisting of both patients with melancholic depression and healthy controls. Results show that the proposed method can not only produce meaningful clusters of variables but also generate a set of representative features that achieve superior classification performance over those generated by traditional clustering and data imputation techniques. In particular, on both datasets, we found that, in comparison with the competing algorithms, the representative features learned by the proposed method give rise to significantly improved sensitivity scores, suggesting that the learned features allow prediction with high accuracy of disease status in those who are diagnosed with melancholic depression. To the best of our knowledge, this is the first work that applies sparse coding to deal with high feature correlations and missing values, which are common challenges in many biomedical applications. The proposed method can be readily adapted to other biomedical applications involving incomplete and high-dimensional data.

  22. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure.

    PubMed

    Zhang, Wen; Xiao, Fan; Li, Bin; Zhang, Siguang

    2016-01-01

    Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized as having low discriminative power for representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a matrix decomposed by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods.
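
    The plain LSI baseline that SVD on clusters builds on can be sketched with a truncated SVD (this is the baseline only, not the clustered variant, and the three-document corpus is a toy):

```python
import numpy as np

def lsi_docs(term_doc, k):
    """Project documents into a k-dimensional latent space via truncated SVD.
    Returns one row per document."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return Vt[:k].T * s[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy term-document matrix: docs 0 and 1 share vocabulary, doc 2 does not.
#                       d0 d1 d2
term_doc = np.array([[2, 1, 0],    # "car"
                     [1, 2, 0],    # "auto"
                     [0, 0, 3]],   # "fruit"
                    dtype=float)
docs = lsi_docs(term_doc, k=2)
print(cosine(docs[0], docs[1]) > cosine(docs[0], docs[2]))  # True
```

    In the latent space the two vocabulary-sharing documents end up far more similar to each other than to the unrelated one, which is the representative quality the abstract credits to LSI; the paper's cluster-wise decomposition then targets the discriminative side.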

  23. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure

    PubMed Central

    Xiao, Fan; Li, Bin; Zhang, Siguang

    2016-01-01

    Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized as having low discriminative power for representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a matrix decomposed by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods. PMID:27579031

  24. Cross-Domain Multi-View Object Retrieval via Multi-Scale Topic Models.

    PubMed

    Hong, Richang; Hu, Zhenzhen; Wang, Ruxin; Wang, Meng; Tao, Dacheng

    2016-09-27

    The increasing number of 3D objects in various applications has increased the requirement for effective and efficient 3D object retrieval methods, which has attracted extensive research efforts in recent years. Existing works mainly focus on how to extract features and conduct object matching. With increasing applications, 3D objects come from different domains, and in such circumstances how to conduct object retrieval becomes more important. To address this issue, we propose a multi-view object retrieval method using multi-scale topic models. In our method, multiple views are first extracted from each object, and dense visual features are then extracted to represent each view. To represent the 3D object, multi-scale topic models are employed to extract the hidden relationships among these features with respect to the varied topic numbers in the topic model. In this way, each object can be represented by a set of bags of topics. To compare objects, we first conduct topic clustering for the basic topics from the two datasets and then generate a common topic dictionary for the new representation. The two objects can then be aligned to the same common feature space for comparison. To evaluate the performance of the proposed method, experiments are conducted on two datasets. The 3D object retrieval results and comparison with existing methods demonstrate the effectiveness of the proposed method.

  5. Improving performances of suboptimal greedy iterative biclustering heuristics via localization.

    PubMed

    Erten, Cesim; Sözdinler, Melih

    2010-10-15

    Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix so as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems with a structure similar to that of biclustering. To evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization-pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization, and show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations, in the form of functional enrichment and PPI verification, constitute the main performance criteria. The fact that the localization-based random extraction method REAL performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.

  6. Analysing the Image Building Effects of TV Advertisements Using Internet Community Data

    NASA Astrophysics Data System (ADS)

    Uehara, Hiroshi; Sato, Tadahiko; Yoshida, Kenichi

    This paper proposes a method to measure the effects of TV advertisements using Internet bulletin boards. It aims to clarify how viewers' interest in TV advertisements is reflected in their images of the promoted products. Two kinds of time series data are generated with the proposed method: the first represents the time-series fluctuation of interest in the TV advertisements, and the second represents the time-series fluctuation of the images of the products. By analysing the correlations between these two time series, we try to clarify the implicit relationship between viewers' interest in a TV advertisement and their images of the promoted products. By applying the proposed method to an Internet bulletin board that deals with a certain cosmetic brand, we show that the images of the products vary depending on differences in interest in each TV advertisement.

  7. Induced subgraph searching for geometric model fitting

    NASA Astrophysics Data System (ADS)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel graph-based model fitting method to fit and segment multiple-structure data. In a graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs that includes the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the search is conducted on the "qualified" subgraphs. Multiple model instances can be estimated simultaneously by solving a converted problem, and we introduce an energy evaluation function to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.

  8. Skeleton-based human action recognition using multiple sequence alignment

    NASA Astrophysics Data System (ADS)

    Ding, Wenwen; Liu, Kai; Cheng, Fei; Zhang, Jin; Li, YunSong

    2015-05-01

    Human action recognition and analysis has been an active research topic in computer vision for many years. This paper presents a method to represent human actions based on trajectories consisting of 3D joint positions. The method first decomposes an action into a sequence of meaningful atomic actions (actionlets) and then labels the actionlets with letters of the English alphabet according to the Davies-Bouldin index value. An action can therefore be represented as a sequence of actionlet symbols, which preserves the temporal order of occurrence of each actionlet. Finally, we classify actions by sequence comparison using a string matching algorithm (Needleman-Wunsch). The effectiveness of the proposed method is evaluated on datasets captured by commodity depth cameras. Experiments on three challenging 3D action datasets show promising results.
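The Needleman-Wunsch comparison mentioned above is a standard dynamic program over symbol strings; a minimal sketch, with a hypothetical scoring scheme and invented actionlet strings rather than the paper's learned labels:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two symbol sequences."""
    n, m = len(a), len(b)
    # dp[i][j]: best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # align two symbols
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[n][m]

# Hypothetical actionlet strings for two performances of the same action:
print(needleman_wunsch("ABCD", "ABD"))   # 3 matches, 1 gap -> score 2
```

Classification then amounts to assigning a test sequence the label of the training sequence with the highest alignment score.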

  9. Advanced soft computing diagnosis method for tumour grading.

    PubMed

    Papageorgiou, E I; Spyridonos, P P; Stylios, C D; Ravazoula, P; Groumpos, P P; Nikiforidis, G N

    2006-01-01

    To develop an advanced diagnostic method for urinary bladder tumour grading, a novel soft computing modelling methodology based on the augmentation of fuzzy cognitive maps (FCMs) with the unsupervised active Hebbian learning (AHL) algorithm is applied. One hundred and twenty-eight cases of urinary bladder cancer were retrieved from the archives of the Department of Histopathology, University Hospital of Patras, Greece. All tumours had been characterized according to the classical World Health Organization (WHO) grading system. To design the FCM model for tumour grading, three expert histopathologists defined the main histopathological features (concepts) and their impact on grade characterization. The resulting FCM model consisted of nine concepts: eight represented the main histopathological features for tumour grading, and the ninth represented the tumour grade. To increase the classification ability of the FCM model, the AHL algorithm was applied to adjust the weights of the FCM. The proposed FCM grading model achieved classification accuracies of 72.5%, 74.42% and 95.55% for tumours of grades I, II and III, respectively. An advanced computerized method to support tumour grade diagnosis was thus proposed and developed. The novelty of the method lies in employing the soft computing method of FCMs to represent specialized knowledge on histopathology, and in augmenting the FCMs' ability with an unsupervised learning algorithm, AHL. The proposed method performs with reasonably high accuracy compared with other existing methods and at the same time meets the physicians' requirements for transparency and explicability.
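FCM inference iterates concept activations through a weighted map until they settle. A minimal sketch with a made-up 3-concept map (not the paper's 9-concept model) and without the AHL weight adaptation:

```python
import math

def fcm_step(A, W, lam=1.0):
    """One FCM inference step: A_i' = sigmoid(A_i + sum_j A_j * W[j][i])."""
    n = len(A)
    out = []
    for i in range(n):
        s = A[i] + sum(A[j] * W[j][i] for j in range(n) if j != i)
        out.append(1.0 / (1.0 + math.exp(-lam * s)))
    return out

# Hypothetical weights: concepts 0 and 1 (features) drive concept 2 (grade).
W = [[0, 0.6, 0.4],
     [0, 0,   0.7],
     [0, 0,   0  ]]
A = [0.8, 0.5, 0.0]      # initial concept activations
for _ in range(10):       # iterate toward a fixed point
    A = fcm_step(A, W)
print(A)
```

With positive causal weights, the output ("grade") concept settles at a high activation; AHL would adjust W during these iterations rather than keep it fixed.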

  10. Latin hypercube approach to estimate uncertainty in ground water vulnerability

    USGS Publications Warehouse

    Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.

    2007-01-01

    A methodology is proposed to quantify the prediction uncertainty associated with ground water vulnerability models that were developed through an approach coupling multivariate logistic regression with a geographic information system (GIS). The method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and to estimate the uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and in the values of the explanatory variables represented in the GIS (data error). Input probability distributions representing both the model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach within logistic regression calculations of the probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
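Latin hypercube sampling itself is simple to sketch: each dimension is divided into equal-probability strata and exactly one draw is taken per stratum. The coefficient ranges and logistic model below are invented placeholders, not the study's fitted vulnerability model:

```python
import math
import random

def latin_hypercube(n_samples, n_dims, seed=42):
    """Unit-cube LHS: each dimension split into n_samples strata, one draw each."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        # one point per stratum, then shuffle strata across samples
        points = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(points)
        for i in range(n_samples):
            samples[i][d] = points[i]
    return samples

def logistic(x, b0, b1):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# Propagate hypothetical coefficient uncertainty through the logistic model:
lhs = latin_hypercube(100, 2)
# map unit-cube draws onto assumed ranges b0 in [-2, 0], b1 in [0, 1]
probs = [logistic(1.5, -2 + 2 * u, v) for u, v in lhs]
print(min(probs), max(probs))   # spread of predicted probabilities
```

The spread of `probs` plays the role of the prediction interval the abstract describes, here for a single hypothetical location.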

  11. Oil Spill Detection in Terma-Side-Looking Airborne Radar Images Using Image Features and Region Segmentation

    PubMed Central

    Alacid, Beatriz

    2018-01-01

    This work presents a method for oil-spill detection on Spanish coasts using aerial Side-Looking Airborne Radar (SLAR) images captured with a Terma sensor. The proposed method uses grayscale image processing techniques to identify the dark spots that represent oil slicks on the sea. The approach is based on two steps. First, the noise regions caused by aircraft movements are detected and labeled in order to avoid false-positive detections. Second, a segmentation process guided by a saliency map technique is used to detect the image regions that represent oil slicks. The results show that the proposed method improves on previous approaches to this task that employ SLAR images. PMID:29316716

  12. Tensor-based Dictionary Learning for Spectral CT Reconstruction

    PubMed Central

    Zhang, Yanbo; Wang, Ge

    2016-01-01

    Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, the images can be sparsely represented in each of multiple energy channels and are highly correlated among energy channels. Given these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, reconstructed using filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained in which each atom is a rank-one tensor. The trained dictionary is then used to sparsely represent image tensor patches during an iterative reconstruction process, and an alternating minimization scheme is adapted for optimization. The effectiveness of the proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality and leads to more accurate material decomposition than the currently popular methods. PMID:27541628

  13. SUSTAINABLE SANITATION FOR THE HÔPITAL SACRÉ COEUR IN MILOT, HAITI

    EPA Science Inventory

    The proposed design represents a completely sustainable system and a significant improvement on the current pit-based disposal method that causes groundwater contamination and health and environmental hazards during floods. The proposed latrine will capture possible contaminan...

  14. SEIPS-based process modeling in primary care.

    PubMed

    Wooldridge, Abigail R; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter L T

    2017-04-01

    Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. SEIPS-Based Process Modeling in Primary Care

    PubMed Central

    Wooldridge, Abigail R.; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter

    2016-01-01

    Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. PMID:28166883

  16. Representativeness of environmental impact assessment methods regarding Life Cycle Inventories.

    PubMed

    Esnouf, Antoine; Latrille, Éric; Steyer, Jean-Philippe; Helias, Arnaud

    2018-04-15

    Life Cycle Assessment (LCA) characterises all the exchanges between human-driven activities and the environment, thus representing a powerful approach for tackling the environmental impact of a production system. However, LCA practitioners must still choose the appropriate Life Cycle Impact Assessment (LCIA) method and are expected to justify this choice: impacts should be relevant to the concerns of the study, and misrepresentations should be avoided. This work aids practitioners in evaluating the adequacy between the assessed environmental issues and the studied production system. Based on a geometrical standpoint on the LCA framework, Life Cycle Inventories (LCIs) and LCIA methods are located in the vector space spanned by elementary flows. A proximity measurement, the Representativeness Index (RI), is proposed to explore the relationship between those datasets (LCIs and LCIA methods) through an angular distance. RIs highlight LCIA methods that measure issues for which the LCI can be particularly harmful. A high RI indicates close proximity between an LCI and an LCIA method, and hence a better representation of the elementary flows by the LCIA method. To illustrate the benefits of the proposed approach, the representativeness of LCIA methods for four electricity mix production LCIs from the ecoinvent database is presented. RIs for 18 LCIA methods (accounting for a total of 232 impact categories) were calculated on these LCIs, and the relevance of the methods is discussed. RIs prove to be a criterion for distinguishing between LCIA methods and could thus be employed by practitioners for deeper interpretation of LCIA results. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Dynamic network reconstruction from gene expression data applied to immune response during bacterial infection.

    PubMed

    Guthke, Reinhard; Möller, Ulrich; Hoffmann, Martin; Thies, Frank; Töpfer, Susanne

    2005-04-15

    The immune response to bacterial infection represents a complex network of dynamic gene and protein interactions. We present an optimized reverse engineering strategy aimed at reconstructing this kind of interaction network. The proposed approach is based on both microarray data and available biological knowledge. The main kinetics of the immune response were identified by fuzzy clustering of gene expression profiles (time series), with the number of clusters optimized using various evaluation criteria. For each cluster, a representative gene with high fuzzy membership was chosen in accordance with available physiological knowledge. Hypothetical network structures were then identified by seeking systems of ordinary differential equations whose simulated kinetics could fit the gene expression profiles of the cluster-representative genes. For the construction of hypothetical network structures, singular value decomposition (SVD) based methods were compared with a newly introduced heuristic network generation method. It turned out that the proposed novel method could find sparser networks and gave better fits to the experimental data. Contact: Reinhard.Guthke@hki-jena.de.

  18. Salient object detection: manifold-based similarity adaptation approach

    NASA Astrophysics Data System (ADS)

    Zhou, Jingbo; Ren, Yongfeng; Yan, Yunyang; Gao, Shangbing

    2014-11-01

    A saliency detection algorithm based on manifold-based similarity adaptation is proposed. The algorithm is divided into three steps. First, we segment an input image into superpixels, which are represented as the nodes of a graph. Second, a new similarity measurement is used: the weight matrix of the graph, which indicates the similarities between the nodes, is computed with a similarity-based method that also captures the manifold structure of the image patches, so that the graph edges are determined in a data-adaptive manner in terms of both similarity and manifold structure. Then, we use a local reconstruction method as the diffusion method to obtain the saliency maps; the objective function in the proposed method is based on local reconstruction, with which the estimated weights capture the manifold structure. Experiments on four benchmark databases demonstrate the accuracy and robustness of the proposed method.

  19. Time-varying singular value decomposition for periodic transient identification in bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Shangbin; Lu, Siliang; He, Qingbo; Kong, Fanrang

    2016-09-01

    For rotating machines, bearing defects are generally represented as periodic transient impulses in the acquired signals. The extraction of transient features from signals has been a key issue for fault diagnosis. However, background noise reduces the identification performance for periodic faults in practice. This paper proposes a time-varying singular value decomposition (TSVD) method to enhance the identification of periodic faults. The proposed method is inspired by the sliding-window method. By applying singular value decomposition (SVD) to the signal under a sliding window, we obtain a time-varying singular value matrix (TSVM). Each column in the TSVM holds the singular values of the corresponding sliding window, and each row represents an intrinsic structure of the raw signal, namely a time-singular-value sequence (TSVS). Theoretical and experimental analyses show that the frequency of the TSVS is exactly twice that of the corresponding intrinsic structure. Moreover, the signal-to-noise ratio (SNR) of the TSVS is improved significantly in comparison with the raw signal. The proposed method takes advantage of the TSVS in noise suppression and feature extraction to enhance the fault frequency for diagnosis. The effectiveness of the TSVD is verified by means of simulation studies and applications to the diagnosis of bearing faults. Results indicate that the proposed method is superior to traditional methods for bearing fault diagnosis.
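The TSVM construction can be sketched directly from this description: slide a window along the signal, build a trajectory (Hankel) matrix for each window, and keep its singular values as one column. The simulated signal, window length, and embedding dimension below are arbitrary choices, not the paper's settings:

```python
import numpy as np

def tsvm(signal, win, embed):
    """Time-varying singular value matrix: column t holds the singular
    values of a Hankel matrix built from the window starting at t."""
    cols = []
    for t in range(len(signal) - win + 1):
        w = signal[t:t + win]
        # trajectory (Hankel) matrix of the window, `embed` rows
        H = np.array([w[i:i + win - embed + 1] for i in range(embed)])
        cols.append(np.linalg.svd(H, compute_uv=False))
    return np.array(cols).T   # rows: time-singular-value sequences (TSVS)

# Simulated noisy signal with a periodic impulse (not real bearing data):
rng = np.random.default_rng(0)
sig = 0.3 * rng.standard_normal(1024)
sig[::64] += 2.0                      # impulse every 64 samples
M = tsvm(sig, win=32, embed=4)
print(M.shape)                        # (embed, number of windows)
```

Each row `M[i]` is a TSVS; the paper's diagnosis step would examine the spectrum of such a sequence for the (doubled) fault frequency.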

  20. New equivalent-electrical circuit model and a practical measurement method for human body impedance.

    PubMed

    Chinen, Koyu; Kinjo, Ichiko; Zamami, Aki; Irei, Kotoyo; Nagayama, Kanako

    2015-01-01

    Human body impedance analysis is an effective tool for extracting electrical information from tissues in the human body. This paper presents a new impedance measurement method using armpit electrodes and a new equivalent circuit model for the human body. The lowest impedance was measured by using an LCR meter and six electrodes, including the armpit electrodes. The electrical equivalent circuit model for the cell consists of a resistance R and a capacitance C: R represents the electrical resistance of the liquid inside and outside the cell, and C represents the high-frequency conductance of the cell membrane. We propose an equivalent circuit model consisting of five parallel high-frequency-passing CR circuits. The proposed equivalent circuit represents the alpha dispersion in the impedance measured in the lower frequency range, due to ion currents outside the cell, and the beta dispersion in the high frequency range, due to the cell membrane and the liquid inside the cell. The values calculated using the proposed equivalent circuit model were consistent with the measured values of human body impedance.
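The impedance of such a model follows from summing branch admittances. A sketch with five hypothetical R and C values (the paper's fitted component values are not given in the abstract), treating each branch as a resistor in series with a capacitor:

```python
import cmath
import math

def body_impedance(freq, branches):
    """Impedance of N parallel branches, each R_k in series with C_k."""
    omega = 2 * math.pi * freq
    y = 0 + 0j                         # total admittance of parallel branches
    for R, C in branches:
        z = R + 1 / (1j * omega * C)   # series R-C branch impedance
        y += 1 / z
    return 1 / y

# Illustrative component values only:
branches = [(500, 10e-9), (800, 22e-9), (1200, 47e-9),
            (2000, 100e-9), (3000, 220e-9)]
z_low = abs(body_impedance(1e2, branches))    # 100 Hz
z_high = abs(body_impedance(1e6, branches))   # 1 MHz
print(z_low, z_high)
```

As expected for high-frequency-passing branches, the magnitude falls with frequency: at low frequency the capacitors block current, while at high frequency the branches reduce to the parallel resistances.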

  1. 10 CFR 63.332 - Representative volume.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... accessible environment; (2) Its position and dimensions in the aquifer are determined using average... determined by site characterization; and (3) It contains 3,000 acre-feet of water (about 3,714,450,000 liters... dimensions of the representative volume. The DOE must propose its chosen method, and any underlying...

  2. 10 CFR 63.332 - Representative volume.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... accessible environment; (2) Its position and dimensions in the aquifer are determined using average... determined by site characterization; and (3) It contains 3,000 acre-feet of water (about 3,714,450,000 liters... dimensions of the representative volume. The DOE must propose its chosen method, and any underlying...

  3. Detecting representative data and generating synthetic samples to improve learning accuracy with imbalanced data sets.

    PubMed

    Li, Der-Chiang; Hu, Susan C; Lin, Liang-Sian; Yeh, Chun-Wu

    2017-01-01

    It is difficult for learning models to achieve high classification performance with imbalanced data sets: when one class is much larger than the others, most machine learning and data mining classifiers are overly influenced by the larger classes and ignore the smaller ones. As a result, classification algorithms often have poor learning performance due to slow convergence on the smaller classes. To balance such data sets, this paper presents a strategy that involves reducing the size of the majority data and generating synthetic samples for the minority data. In the reducing operation, we use the box-and-whisker plot approach to exclude outliers and the Mega-Trend-Diffusion method to find representative data from the majority data. To generate the synthetic samples, we propose a counterintuitive hypothesis to find the distributed shape of the minority data, and then produce samples according to this distribution. Four real datasets were used to examine the performance of the proposed approach. We used paired t-tests to compare the Accuracy, G-mean, and F-measure scores of the proposed data pre-processing (PPDP) method merged into the D3C method (PPDP+D3C) with those of one-sided selection (OSS), the well-known SMOTEBoost (SB) study, the normal distribution-based oversampling (NDO) approach, and the PPDP method alone. The results indicate that the classification performance of the proposed approach is better than that of the above-mentioned methods.
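The box-and-whisker exclusion step can be sketched with the usual 1.5×IQR fence; whether the paper uses exactly this fence and this quantile interpolation is an assumption:

```python
def remove_outliers(xs):
    """Box-and-whisker rule: drop points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    s = sorted(xs)
    def quantile(q):
        # linear interpolation between order statistics
        pos = q * (len(s) - 1)
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, len(s) - 1)
        return s[lo] * (1 - frac) + s[hi] * frac
    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in xs if lo <= x <= hi]

data = [10, 11, 12, 13, 12, 11, 10, 95]   # 95 is an obvious outlier
print(remove_outliers(data))              # the 95 is dropped
```

In the paper's pipeline this filtering precedes the Mega-Trend-Diffusion selection of representative majority points.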

  4. Strengths and weaknesses of temporal stability analysis for monitoring and estimating grid-mean soil moisture in a high-intensity irrigated agricultural landscape

    NASA Astrophysics Data System (ADS)

    Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.

    2017-01-01

    Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
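The core TSA computation, the mean relative difference of each sampling point from the grid mean, can be sketched as follows. The observations are invented, and the ranking criterion shown is the simplest variant (mean closest to zero) rather than the stratified STSA proposed above:

```python
import math

def temporal_stability(obs):
    """obs[i][t]: soil moisture at point i, time t.
    Returns (mean relative difference, its std) per point; a point whose
    mean is near 0 with a small std best tracks the grid mean."""
    n_pts, n_t = len(obs), len(obs[0])
    stats = []
    for i in range(n_pts):
        deltas = []
        for t in range(n_t):
            grid_mean = sum(obs[j][t] for j in range(n_pts)) / n_pts
            deltas.append((obs[i][t] - grid_mean) / grid_mean)
        m = sum(deltas) / n_t
        sd = math.sqrt(sum((d - m) ** 2 for d in deltas) / n_t)
        stats.append((m, sd))
    return stats

# Three hypothetical points: dry-biased, mean-tracking, and wet-biased.
obs = [[0.10, 0.14, 0.12],
       [0.15, 0.20, 0.17],
       [0.20, 0.26, 0.22]]
stats = temporal_stability(obs)
best = min(range(3), key=lambda i: abs(stats[i][0]))
print(best)   # index of the most representative point
```

A point selected this way can then estimate the grid mean after a constant-offset calibration, which is where the calibration-period and sample-count caveats above come in.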

  5. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction

    PubMed Central

    Lu, Hongyang; Wei, Jingbo; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

    Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to largely avoid oil-painting artifacts. At the same time, the dictionary learning adaptively and sparsely represents the image features and effectively recovers details of the images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. PMID:27110235

  6. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.

    PubMed

    Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

    Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to largely avoid oil-painting artifacts. At the same time, the dictionary learning adaptively and sparsely represents the image features and effectively recovers details of the images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.

  7. Analytical investigation of different mathematical approaches utilizing manipulation of ratio spectra

    NASA Astrophysics Data System (ADS)

    Osman, Essam Eldin A.

    2018-01-01

    This work presents a comparative study of different approaches to manipulating ratio spectra, applied to a binary mixture of ciprofloxacin HCl and dexamethasone sodium phosphate co-formulated as ear drops. The proposed new spectrophotometric methods are: the ratio difference spectrophotometric method (RDSM), the amplitude center method (ACM), the first derivative of the ratio spectra (1DD) and mean centering of the ratio spectra (MCR). The proposed methods were checked using laboratory-prepared mixtures and were successfully applied to the analysis of a pharmaceutical formulation containing the cited drugs. The methods were validated according to the ICH guidelines, and a comparative study was conducted regarding their simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.
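The ratio-spectra idea common to these methods can be sketched numerically: dividing the mixture spectrum by one component's spectrum turns that component's contribution into a constant, which an amplitude difference between two wavelengths cancels. The spectra and concentrations below are invented, not measured data for these drugs:

```python
# Beer's law sketch: mixture = cA*SA + cB*SB at each wavelength (unit path).
SA = [1.0, 0.8, 0.5, 0.2]     # hypothetical absorptivity spectrum of drug A
SB = [0.3, 0.4, 0.6, 0.9]     # hypothetical absorptivity spectrum of drug B
cA, cB = 2.0, 3.0             # hypothetical concentrations
mix = [cA * a + cB * b for a, b in zip(SA, SB)]

# Ratio spectrum: dividing by SB gives cA*SA/SB + cB (constant term cB).
ratio = [m / b for m, b in zip(mix, SB)]
delta = ratio[0] - ratio[3]            # the constant cB cancels here

# Same difference for a pure-A standard gives the proportionality factor:
cal = [a / b for a, b in zip(SA, SB)]
print(delta / (cal[0] - cal[3]))       # recovers cA
```

The 1DD and MCR variants remove the constant by differentiation or mean centering of the ratio spectrum instead of by taking a two-wavelength difference.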

  8. A Novel Clustering Method Curbing the Number of States in Reinforcement Learning

    NASA Astrophysics Data System (ADS)

    Kotani, Naoki; Nunobiki, Masayuki; Taniguchi, Kenji

    We propose an efficient state-space construction method for reinforcement learning. Our method controls the number of categories by improving the clustering method of Fuzzy ART, an autonomous state-space construction method. The proposed method represents each weight vector as the mean of its input vectors in order to curb the number of new categories, and eliminates categories whose state values are low to curb the total number of categories. As the state values are updated, the category sizes shrink so that the policy is learned more precisely. We verified the effectiveness of the proposed method with simulations of a reaching problem for a two-link robot arm, and confirmed that the number of categories was reduced and the agent achieved the complex task quickly.

  9. EEMD-based multiscale ICA method for slewing bearing fault detection and diagnosis

    NASA Astrophysics Data System (ADS)

    Žvokelj, Matej; Zupan, Samo; Prebil, Ivan

    2016-05-01

    A novel multivariate and multiscale statistical process monitoring method is proposed with the aim of detecting incipient failures in large slewing bearings, where subjective influence plays a minor role. The proposed method integrates the strengths of the Independent Component Analysis (ICA) multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD), which adaptively decomposes signals into different time scales and can thus cope with multiscale system dynamics. The method, which was named EEMD-based multiscale ICA (EEMD-MSICA), not only enables bearing fault detection but also offers a mechanism of multivariate signal denoising and, in combination with the Envelope Analysis (EA), a diagnostic tool. The multiscale nature of the proposed approach makes the method convenient to cope with data which emanate from bearings in complex real-world rotating machinery and frequently represent the cumulative effect of many underlying phenomena occupying different regions in the time-frequency plane. The efficiency of the proposed method was tested on simulated as well as real vibration and Acoustic Emission (AE) signals obtained through conducting an accelerated run-to-failure lifetime experiment on a purpose-built laboratory slewing bearing test stand. The ability to detect and locate the early-stage rolling-sliding contact fatigue failure of the bearing indicates that AE and vibration signals carry sufficient information on the bearing condition and that the developed EEMD-MSICA method is able to effectively extract it, thereby representing a reliable bearing fault detection and diagnosis strategy.

  10. Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling.

    PubMed

    Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua

    2014-04-02

    The gene regulatory network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but are limited by the assumption of linear regulation effects. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs in a way that flexibly accommodates nonlinear regulation effects. The asymptotic properties of the proposed method are established, and simulation studies are performed to validate the proposed approach. An application example identifying the nonlinear dynamic GRN of T-cell activation illustrates the usefulness of the proposed method.
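
    The group-wise sparsity that lets whole additive components drop out of the model comes from the (adaptive) group-LASSO penalty, whose proximal operator is a group soft-threshold: each group's coefficient block is shrunk toward zero as a unit. A minimal numpy sketch (the function name, toy coefficients, and group layout are illustrative, not the paper's estimator):

    ```python
    import numpy as np

    def group_soft_threshold(beta, groups, lam, weights=None):
        """Proximal operator of the (adaptive) group-LASSO penalty.
        groups: dict mapping group id -> index array into beta.
        weights: optional adaptive per-group weights (1.0 if omitted).
        A group whose block norm falls below lam * weight is zeroed entirely,
        which is what removes an entire additive regulation term at once."""
        beta = np.asarray(beta, dtype=float).copy()
        for g, idx in groups.items():
            w = 1.0 if weights is None else weights[g]
            norm = np.linalg.norm(beta[idx])
            if norm <= lam * w:
                beta[idx] = 0.0                    # whole group eliminated
            else:
                beta[idx] *= 1.0 - lam * w / norm  # whole group shrunk
        return beta

    groups = {0: np.array([0, 1]), 1: np.array([2, 3])}
    b = group_soft_threshold(np.array([3.0, 4.0, 0.1, 0.1]), groups, lam=1.0)
    ```

    Here the strong first group survives (shrunk), while the weak second group is set exactly to zero — the mechanism behind sparse selection of additive components.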

  11. An interval-valued 2-tuple linguistic group decision-making model based on the Choquet integral operator

    NASA Astrophysics Data System (ADS)

    Liu, Bingsheng; Fu, Meiqing; Zhang, Shuibo; Xue, Bin; Zhou, Qi; Zhang, Shiruo

    2018-01-01

    The Choquet integral (CI) operator is an effective approach for handling interdependence among decision attributes in complex decision-making problems. However, the fuzzy measures of attributes and attribute sets required by the CI are difficult to obtain directly, which limits its application. This paper proposes a new method for determining fuzzy measures of attributes by extending Marichal's concept of entropy for fuzzy measures. Assessment information is represented in an interval-valued 2-tuple linguistic context. We then propose a Choquet integral operator in an interval-valued 2-tuple linguistic environment, which can effectively handle the correlation between attributes. In addition, we apply these methods to solve multi-attribute group decision-making problems. The feasibility and validity of the proposed operator are demonstrated by comparisons with other models in an illustrative example.
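
    The discrete Choquet integral at the heart of such operators can be computed by sorting the attribute scores in ascending order and weighting each score increment by the fuzzy measure of the attributes still "active". A small sketch with an assumed two-attribute superadditive measure (the numbers are illustrative, not from the paper):

    ```python
    def choquet_integral(scores, mu):
        """Discrete Choquet integral of attribute scores w.r.t. a fuzzy measure.
        mu maps frozensets of attribute indices to measure values, with
        mu[frozenset()] = 0 and mu[frozenset(all attributes)] = 1."""
        order = sorted(range(len(scores)), key=lambda i: scores[i])
        total, prev = 0.0, 0.0
        for k, i in enumerate(order):
            active = frozenset(order[k:])   # attributes scoring >= current level
            total += (scores[i] - prev) * mu[active]
            prev = scores[i]
        return total

    # Two interacting attributes: the measure is superadditive (synergy),
    # so jointly high scores are rewarded beyond any weighted sum.
    mu = {frozenset(): 0.0, frozenset({0}): 0.3, frozenset({1}): 0.4,
          frozenset({0, 1}): 1.0}
    value = choquet_integral([0.6, 0.8], mu)
    ```

    Step by step: the first 0.6 of both scores is weighted by mu({0,1}) = 1.0, and the remaining 0.2 of attribute 1 by mu({1}) = 0.4, giving 0.68. With an additive measure this reduces to an ordinary weighted average.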

  12. A novel class sensitive hashing technique for large-scale content-based remote sensing image retrieval

    NASA Astrophysics Data System (ADS)

    Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo

    2017-10-01

    This paper presents a novel class sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of three steps. The first step characterizes each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors that are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class to be present at each region. All the regions belonging to a specific primitive class with a probability higher than a given threshold are highly representative of that class. Thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike the standard hashing methods, allow one to represent each image by a set of primitive class sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to the standard hashing methods.
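
    The final multi-hash-code-matching step can be illustrated as follows: each image carries one code per primitive class, and a query is compared by matching each of its codes against the candidate's closest code. Averaging the per-code best Hamming distances is an assumed matching rule for illustration, not necessarily the paper's exact scheme, and the 8-bit codes are toy values.

    ```python
    def hamming(a, b):
        """Hamming distance between two integer-encoded hash codes."""
        return bin(a ^ b).count("1")

    def multi_hash_distance(query_codes, image_codes, nbits):
        """For each query code, take the closest code of the candidate image;
        average those best Hamming distances, normalised to [0, 1]."""
        best = [min(hamming(q, c) for c in image_codes) for q in query_codes]
        return sum(best) / (len(best) * nbits)

    # A query with two primitive-class codes, matched against a toy archive.
    query = [0b10110010, 0b01100001]
    archive = {
        "img_a": [0b10110010, 0b01100011],   # shares both primitives
        "img_b": [0b00001111, 0b11110000],   # unrelated content
    }
    ranked = sorted(archive,
                    key=lambda k: multi_hash_distance(query, archive[k], nbits=8))
    ```

    Images sharing the query's primitives land at the top of the ranking even when no single code matches exactly.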

  13. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval because of the highly discriminative properties of a 3-D object's multi-view representation. State-of-the-art methods depend heavily on a specific camera array setting for capturing views of the 3-D object, and use complex Zernike descriptors and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. An efficient and effective algorithm for 3-D object retrieval is therefore required. To move toward a general framework that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View-Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning views can be captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. In our proposed method, the HMM is used in two roles: in training (HMM estimation) and in retrieval (HMM decoding). The proposed approach removes the static camera array setting for view capture and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
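
    The "HMM decode" retrieval step amounts to scoring the query's view-cluster sequence under each object's trained model with the forward algorithm and keeping the best-scoring objects. A minimal numpy sketch with toy two-state models; all parameters below are illustrative, not learned from data.

    ```python
    import numpy as np

    def forward_loglik(obs, pi, A, B):
        """Scaled forward algorithm: log-likelihood of a discrete observation
        sequence under an HMM. pi: initial state probabilities, A: transition
        matrix, B[state, symbol]: emission probabilities."""
        alpha = pi * B[:, obs[0]]
        log_p = np.log(alpha.sum())
        alpha = alpha / alpha.sum()          # rescale to avoid underflow
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            log_p += np.log(alpha.sum())
            alpha = alpha / alpha.sum()
        return log_p

    # Two toy object models over two view clusters (symbols 0 and 1).
    pi = np.array([0.5, 0.5])
    A = np.array([[0.8, 0.2], [0.2, 0.8]])
    B_a = np.array([[0.9, 0.1], [0.9, 0.1]])   # object a: mostly cluster 0 views
    B_b = np.array([[0.2, 0.8], [0.2, 0.8]])   # object b: mostly cluster 1 views

    query = [0, 0, 1, 0]                        # query's view-cluster sequence
    scores = {"a": forward_loglik(query, pi, A, B_a),
              "b": forward_loglik(query, pi, A, B_b)}
    best = max(scores, key=scores.get)
    ```

    The object whose model assigns the query sequence the highest likelihood is retrieved first; ranking all archive objects by this score gives the retrieval list.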

  14. Infrared and visible image fusion with spectral graph wavelet transform.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo

    2015-09-01

    Infrared and visible image fusion technique is a popular topic in image analysis because it can integrate complementary information and obtain reliable and accurate description of scenes. Multiscale transform theory as a signal representation method is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on spectral graph wavelet transform (SGWT) and bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of different source images, but also excellently represents the irregular areas of the source images. On the other hand, a novel weighted average method based on bilateral filter is proposed to fuse low- and high-frequency subbands by taking advantage of spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.

  15. Improved remote gaze estimation using corneal reflection-adaptive geometric transforms

    NASA Astrophysics Data System (ADS)

    Ma, Chunfei; Baek, Seung-Jin; Choi, Kang-A.; Ko, Sung-Jea

    2014-05-01

    Recently, the remote gaze estimation (RGE) technique has been widely applied to consumer devices as a more natural interface. In general, the conventional RGE method estimates a user's point of gaze using a geometric transform, which represents the relationship between several infrared (IR) light sources and their corresponding corneal reflections (CRs) in the eye image. Among various methods, the homography normalization (HN) method achieves state-of-the-art performance. However, the geometric transform of the HN method requiring four CRs is infeasible for the case when fewer than four CRs are available. To solve this problem, this paper proposes a new RGE method based on three alternative geometric transforms, which are adaptive to the number of CRs. Unlike the HN method, the proposed method not only can operate with two or three CRs, but can also provide superior accuracy. To further enhance the performance, an effective error correction method is also proposed. By combining the introduced transforms with the error-correction method, the proposed method not only provides high accuracy and robustness for gaze estimation, but also allows for a more flexible system setup with a different number of IR light sources. Experimental results demonstrate the effectiveness of the proposed method.

  16. Proposed method to calculate FRMAC intervention levels for the assessment of radiologically contaminated food and comparison of the proposed method to the U.S. FDA's method to calculate derived intervention levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, Terrence D.; Hunt, Brian D.

    This report reviews the method recommended by the U.S. Food and Drug Administration for calculating Derived Intervention Levels (DILs) and identifies potential improvements to the DIL calculation method to support more accurate ingestion pathway analyses and protective action decisions. Further, this report proposes an alternate method for use by the Federal Radiological Monitoring and Assessment Center (FRMAC) to calculate FRMAC Intervention Levels (FILs). The default approach of the FRMAC during an emergency response is to use the FDA-recommended methods. However, FRMAC recommends implementing the FIL method because we believe it to be more technically accurate. FRMAC will only implement the FIL method when approved by the FDA representative on the Federal Advisory Team for Environment, Food, and Health.

  17. Bridging the gap between formal and experience-based knowledge for context-aware laparoscopy.

    PubMed

    Katić, Darko; Schuck, Jürgen; Wekerle, Anna-Laura; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie

    2016-06-01

    Computer assistance is increasingly common in surgery. However, the amount of information is bound to overload the processing abilities of surgeons. We propose methods to recognize the current phase of a surgery for context-aware information filtering. The purpose is to select the most suitable subset of information for surgical situations which require special assistance. We combine formal knowledge, represented by an ontology, and experience-based knowledge, represented by training samples, to recognize phases. For this purpose, we have developed two different methods. Firstly, we use formal knowledge about possible phase transitions to create a composition of random forests. Secondly, we propose a method based on cultural optimization to infer formal rules from experience to recognize phases. The proposed methods are compared with a purely formal knowledge-based approach using rules and a purely experience-based one using regular random forests. The comparative evaluation on laparoscopic pancreas resections and adrenalectomies employs a consistent set of quality criteria on clean and noisy input. The rule-based approaches proved best with noise-free data. The random-forest-based ones were more robust in the presence of noise. Formal and experience-based knowledge can be successfully combined for robust phase recognition.
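
    The first combination idea — masking a classifier's phase probabilities (experience-based knowledge) by the ontology's allowed phase transitions (formal knowledge) — can be sketched in a few lines. The phase names, transition table, and scores below are hypothetical, not from the paper's laparoscopy ontology.

    ```python
    def constrained_phase(probs, prev_phase, transitions):
        """Pick the most probable phase among those the formal model allows
        as successors of the previous phase, so an impossible jump can
        never be predicted even if the classifier scores it highest."""
        allowed = transitions[prev_phase]
        masked = {p: q for p, q in probs.items() if p in allowed}
        return max(masked, key=masked.get)

    # Hypothetical surgical phases and their allowed successors.
    transitions = {
        "dissection": {"dissection", "resection"},
        "resection": {"resection", "closure"},
    }
    # Noisy classifier output: "closure" scores highest but is unreachable
    # directly from "dissection", so the formal constraint overrules it.
    probs = {"dissection": 0.2, "resection": 0.35, "closure": 0.45}
    phase = constrained_phase(probs, prev_phase="dissection", transitions=transitions)
    ```

    This is the sense in which the formal transition model makes the experience-based classifier robust to noisy frames.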

  18. Reduction in training time of a deep learning model in detection of lesions in CT

    NASA Astrophysics Data System (ADS)

    Makkinejad, Nazanin; Tajbakhsh, Nima; Zarshenas, Amin; Khokhar, Ashfaq; Suzuki, Kenji

    2018-02-01

    Deep learning (DL) has emerged as a powerful tool for object detection and classification in medical images. Building a well-performing DL model, however, requires a huge number of images for training, and it takes days to train a DL model even on a cutting-edge high-performance computing platform. This study aimed at developing a method for selecting a "small" number of representative samples from a large collection of training samples to train a DL model that detects polyps in CT colonography (CTC), without compromising the classification performance. Our proposed method for representative sample selection (RSS) consists of a K-means clustering algorithm. For the performance evaluation, we applied the proposed method to select samples for the training of a massive-training artificial neural network-based DL model, to be used for the classification of polyps and non-polyps in CTC. Our results show that the proposed method reduces the training time by a factor of 15, while maintaining classification performance equivalent to the model trained using the full training set, as measured by the area under the receiver operating characteristic curve (AUC).
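
    The K-means-based representative sample selection can be sketched as: cluster the training pool, then keep the sample nearest each centroid. A self-contained numpy version follows; the deterministic farthest-point initialisation and the toy two-blob data are choices of this sketch, not necessarily the paper's configuration.

    ```python
    import numpy as np

    def select_representatives(X, k, iters=20):
        """Lloyd's K-means, then return the index of the training sample
        nearest each centroid — the 'representative' subset to train on."""
        # Deterministic farthest-point initialisation.
        centroids = [X[0]]
        for _ in range(k - 1):
            d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
            centroids.append(X[d.argmax()])
        centroids = np.array(centroids)
        for _ in range(iters):
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                members = X[labels == j]
                if len(members):
                    centroids[j] = members.mean(axis=0)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        return np.array([int(d[:, j].argmin()) for j in range(k)])

    # Toy feature pool: two well-separated clusters of candidate samples.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
    reps = select_representatives(X, k=2)
    ```

    Training on the `k` selected indices instead of all samples is what yields the reported reduction in training time.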

  19. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowdsourced information provided by a large number of users walking through the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions for the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623

  20. Efficiently modelling urban heat storage: an interface conduction scheme in an urban land surface model (aTEB v2.0)

    NASA Astrophysics Data System (ADS)

    Lipson, Mathew J.; Hart, Melissa A.; Thatcher, Marcus

    2017-03-01

    Intercomparison studies of models simulating the partitioning of energy over urban land surfaces have shown that the heat storage term is often poorly represented. In this study, two implicit discrete schemes representing heat conduction through urban materials are compared. We show that a well-established method of representing conduction systematically underestimates the magnitude of heat storage compared with exact solutions of one-dimensional heat transfer. We propose an alternative method of similar complexity that is better able to match exact solutions at typically employed resolutions. The proposed interface conduction scheme is implemented in an urban land surface model and its impact assessed over a 15-month observation period for a site in Melbourne, Australia, resulting in improved overall model performance for a variety of common material parameter choices and aerodynamic heat transfer parameterisations. The proposed scheme has the potential to benefit land surface models where computational constraints require a high level of discretisation in time and space, for example at neighbourhood/city scales, and where realistic material properties are preferred, for example in studies investigating impacts of urban planning changes.
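
    An implicit (backward-Euler) discretisation of one-dimensional conduction, of the kind compared in the study, reduces each time step to a tridiagonal linear solve, which is what keeps the scheme stable at the coarse layer resolutions urban models are forced to use. The dense-matrix numpy sketch below uses assumed material values and fixed surface temperatures; it illustrates implicit conduction generically, not the paper's interface scheme specifically.

    ```python
    import numpy as np

    def step_conduction(T, dt, dx, alpha, T_left, T_right):
        """One backward-Euler step of 1-D heat conduction for interior nodes,
        with the two surface temperatures held fixed as boundary conditions.
        Solves (I + r*L) T_new = T_old + boundary terms, r = alpha*dt/dx^2."""
        n = len(T)
        r = alpha * dt / dx**2
        A = np.zeros((n, n))
        b = T.copy()
        for i in range(n):
            A[i, i] = 1.0 + 2.0 * r
            if i > 0:
                A[i, i - 1] = -r
            if i < n - 1:
                A[i, i + 1] = -r
        b[0] += r * T_left
        b[-1] += r * T_right
        return np.linalg.solve(A, b)

    # A wall initially at 290 K with both surfaces held at 300 K:
    # heat stored in the wall grows as the interior warms toward 300 K.
    T = np.full(5, 290.0)
    for _ in range(200):
        T = step_conduction(T, dt=60.0, dx=0.05, alpha=1e-6,
                            T_left=300.0, T_right=300.0)
    ```

    For a real model a tridiagonal solver would replace the dense solve, but the structure of the update is the same.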

  1. A method to reproduce alpha-particle spectra measured with semiconductor detectors.

    PubMed

    Timón, A Fernández; Vargas, M Jurado; Sánchez, A Martín

    2010-01-01

    A method is proposed to reproduce alpha-particle spectra measured with silicon detectors, combining analytical and computer simulation techniques. The procedure includes the use of the Monte Carlo method to simulate the tracks of alpha-particles within the source and in the detector entrance window. The alpha-particle spectrum is finally obtained by the convolution of this simulated distribution and the theoretical distributions representing the contributions of the alpha-particle spectrometer to the spectrum. Experimental spectra from (233)U and (241)Am sources were compared with the predictions given by the proposed procedure, showing good agreement. The proposed method can be an important aid for the analysis and deconvolution of complex alpha-particle spectra. Copyright 2009 Elsevier Ltd. All rights reserved.
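
    The final step of such a procedure — convolving the simulated energy-loss distribution with the distributions representing the spectrometer's contribution — can be sketched with a toy simulated line and a Gaussian response. All channel numbers and widths below are illustrative, not values from the paper.

    ```python
    import numpy as np

    def detector_response(channels, peak_channel, sigma):
        """Normalised Gaussian standing in for the spectrometer's resolution
        contribution; the measured spectrum is the simulated distribution
        convolved with this kernel."""
        g = np.exp(-0.5 * ((channels - peak_channel) / sigma) ** 2)
        return g / g.sum()

    # Toy 'simulated' alpha line with a low-energy tail, as produced by
    # Monte Carlo tracking of alpha particles through the source layer.
    simulated = np.zeros(512)
    simulated[400] = 1.0
    simulated[380:400] = 0.01     # tail from energy loss within the source
    simulated /= simulated.sum()

    kernel = detector_response(np.arange(81), 40, 4.0)
    measured = np.convolve(simulated, kernel, mode="same")
    ```

    The convolution preserves total counts while broadening the line, reproducing the characteristic asymmetric peak shape of measured alpha spectra.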

  2. Macromolecule mapping of the brain using ultrashort-TE acquisition and reference-based metabolite removal.

    PubMed

    Lam, Fan; Li, Yudu; Clifford, Bryan; Liang, Zhi-Pei

    2018-05-01

    To develop a practical method for mapping macromolecule distribution in the brain using ultrashort-TE MRSI data. An FID-based chemical shift imaging acquisition without metabolite-nulling pulses was used to acquire ultrashort-TE MRSI data that capture the macromolecule signals with high signal-to-noise-ratio (SNR) efficiency. To remove the metabolite signals from the ultrashort-TE data, single voxel spectroscopy data were obtained to determine a set of high-quality metabolite reference spectra. These spectra were then incorporated into a generalized series (GS) model to represent general metabolite spatiospectral distributions. A time-segmented algorithm was developed to back-extrapolate the GS model-based metabolite distribution from truncated FIDs and remove it from the MRSI data. Numerical simulations and in vivo experiments have been performed to evaluate the proposed method. Simulation results demonstrate accurate metabolite signal extrapolation by the proposed method given a high-quality reference. For in vivo experiments, the proposed method is able to produce spatiospectral distributions of macromolecules in the brain with high SNR from data acquired in about 10 minutes. We further demonstrate that the high-dimensional macromolecule spatiospectral distribution resides in a low-dimensional subspace. This finding provides a new opportunity to use subspace models for quantification and accelerated macromolecule mapping. Robustness of the proposed method is also demonstrated using multiple data sets from the same and different subjects. The proposed method is able to obtain macromolecule distributions in the brain from ultrashort-TE acquisitions. It can also be used for acquiring training data to determine a low-dimensional subspace to represent the macromolecule signals for subspace-based MRSI. Magn Reson Med 79:2460-2469, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  3. New method for characterization of retroreflective materials

    NASA Astrophysics Data System (ADS)

    Junior, O. S.; Silva, E. S.; Barros, K. N.; Vitro, J. G.

    2018-03-01

    The present article aims to propose a new method of analyzing the properties of retroreflective materials using a goniophotometer. The aim is to establish a higher resolution test method with a wide range of viewing angles, taking into account a three-dimensional analysis of the retroreflection of the tested material. The validation was performed by collecting data from specimens collected from materials used in safety clothing and road signs. The approach showed that the results obtained by the proposed method are comparable to the results obtained by the normative protocols, representing an evolution for the metrology of these materials.

  4. Glioma grading using cell nuclei morphologic features in digital pathology images

    NASA Astrophysics Data System (ADS)

    Reza, Syed M. S.; Iftekharuddin, Khan M.

    2016-03-01

    This work proposes a computationally efficient cell nuclei morphologic feature analysis technique to characterize brain gliomas in tissue slide images. Our contributions are two-fold: 1) we obtain an optimized cell nuclei segmentation method based on the pros and cons of the existing techniques in the literature; 2) we extract representative features by k-means clustering of nuclei morphologic features, including area, perimeter, eccentricity, and major axis length. This clustering-based representative feature extraction avoids the shortcomings of extensive tile [1] [2] and nuclear score [3] based methods for brain glioma grading in pathology images. A multilayer perceptron (MLP) is used to classify the extracted features into two tumor types: glioblastoma multiforme (GBM) and low-grade glioma (LGG). Quantitative scores such as precision, recall, and accuracy are obtained using 66 clinical patients' images from The Cancer Genome Atlas (TCGA) [4] dataset. An average accuracy of ~94% under 10-fold cross-validation confirms the efficacy of the proposed method.

  5. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos.

    PubMed

    Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it is extracted from the frame difference using the Sauvola local adaptive thresholding algorithm. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute the feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm that distributes the saliency detection workload across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods.
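
    The temporal-saliency step — frame differencing followed by Sauvola local adaptive thresholding — can be sketched directly. The window size and the parameters k and R below are conventional Sauvola defaults assumed for illustration, and the toy 20×20 frames stand in for aerial video.

    ```python
    import numpy as np

    def sauvola_threshold(img, window=5, k=0.2, R=128.0):
        """Sauvola local adaptive threshold: T = m * (1 + k*(s/R - 1)), with
        local mean m and local standard deviation s over a sliding window.
        A plain-loop sketch, adequate for small frames."""
        pad = window // 2
        padded = np.pad(img.astype(float), pad, mode="edge")
        T = np.zeros_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                patch = padded[i:i + window, j:j + window]
                m, s = patch.mean(), patch.std()
                T[i, j] = m * (1.0 + k * (s / R - 1.0))
        return T

    # Temporal saliency sketch: difference two frames, binarise adaptively.
    prev = np.zeros((20, 20))
    curr = np.zeros((20, 20))
    curr[8:12, 8:12] = 200.0                  # a small moving target
    diff = np.abs(curr - prev)
    mask = diff > sauvola_threshold(diff)
    ```

    Because the threshold adapts to the local mean and contrast, the moving-target region is isolated while flat background stays off, even under nonuniform intensity.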

  6. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos

    PubMed Central

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it is extracted from the frame difference using the Sauvola local adaptive thresholding algorithm. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute the feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm that distributes the saliency detection workload across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods. PMID:29438421

  7. Evolutionary Local Search of Fuzzy Rules through a novel Neuro-Fuzzy encoding method.

    PubMed

    Carrascal, A; Manrique, D; Ríos, J; Rossi, C

    2003-01-01

    This paper proposes a new approach for constructing fuzzy knowledge bases using evolutionary methods. We have designed a genetic algorithm that automatically builds neuro-fuzzy architectures based on a new indirect encoding method. The neuro-fuzzy architecture represents the fuzzy knowledge base that solves a given problem; the search for this architecture takes advantage of a local search procedure that improves the chromosomes at each generation. Experiments conducted on both artificially generated and real-world problems confirm the effectiveness of the proposed approach.

  8. Adaptive target binarization method based on a dual-camera system

    NASA Astrophysics Data System (ADS)

    Lei, Jing; Zhang, Ping; Xu, Jiangtao; Gao, Zhiyuan; Gao, Jing

    2018-01-01

    An adaptive target binarization method based on a dual-camera system containing two dynamic vision sensors is proposed. First, a denoising preprocessing procedure removes the noise events generated by the sensors. Second, the complete edge of the target is retrieved and represented by events using an event mosaicking method. Third, the region of the target is confirmed by an event-to-event method. Finally, a postprocessing procedure using the open and close operations of morphological image processing removes the artifacts caused by event-to-event mismatching. The proposed binarization method has been extensively tested on numerous degraded images with nonuniform illumination, low contrast, noise, or light spots, and compared with other well-known binarization methods. The experimental results, based on visual and misclassification error criteria, show that the proposed method performs well and has better robustness in the binarization of degraded images.

  9. Single image super-resolution based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses five convolution layers, with kernel sizes of 5×5, 3×3, and 1×1. In the proposed network, we use residual learning and combine different sizes of convolution kernels at the same layer. The experimental results show that our proposed method outperforms existing methods in both quantitative reconstruction quality and visual quality on benchmark images.

  10. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
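
    The MOVER idea combines separate confidence intervals for two parameters into an interval for their sum by adding the limit-to-estimate distances in quadrature. A sketch with purely illustrative log-scale numbers (not data or formulas taken verbatim from the paper): for log-normal exposures, the log-scale mean and half the total variance are summed, and exponentiating the resulting limits bounds the overall mean exposure.

    ```python
    from math import sqrt

    def mover_sum_ci(est1, ci1, est2, ci2):
        """Method Of Variance Estimates Recovery (MOVER) interval for
        theta1 + theta2, given point estimates and separate (lower, upper)
        intervals for each component."""
        l1, u1 = ci1
        l2, u2 = ci2
        est = est1 + est2
        lower = est - sqrt((est1 - l1) ** 2 + (est2 - l2) ** 2)
        upper = est + sqrt((u1 - est1) ** 2 + (u2 - est2) ** 2)
        return lower, upper

    # Illustrative inputs: log-scale mean exposure with its 95% CI, and half
    # the total (between- plus within-worker) variance with its 95% CI.
    mu_hat, mu_ci = 1.00, (0.70, 1.30)
    half_var_hat, half_var_ci = 0.40, (0.25, 0.85)
    log_ci = mover_sum_ci(mu_hat, mu_ci, half_var_hat, half_var_ci)
    ```

    Note the resulting interval is asymmetric about the point estimate 1.40, inheriting the asymmetry of the variance-component interval — the behaviour that makes MOVER suitable here, where a symmetric normal interval would not be.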

  11. Novel Real-Time Facial Wound Recovery Synthesis Using Subsurface Scattering

    PubMed Central

    Chin, Seongah

    2014-01-01

    We propose a wound recovery synthesis model that illustrates the appearance of a wound healing on a 3-dimensional (3D) face. The H3 model is used to determine the size of the recovering wound. Furthermore, we present our subsurface scattering model that is designed to take the multilayered skin structure of the wound into consideration to represent its color transformation. We also propose a novel real-time rendering method based on the results of an analysis of the characteristics of translucent materials. Finally, we validate the proposed methods with 3D wound-simulation experiments using shading models. PMID:25197721

  12. Jointly learning word embeddings using a corpus and a knowledge base

    PubMed Central

    Bollegala, Danushka; Maehara, Takanori; Kawarabayashi, Ken-ichi

    2018-01-01

    Methods for representing the meaning of words in vector spaces purely using the information distributed in text corpora have proved to be very valuable in various text mining and natural language processing (NLP) tasks. However, these methods still disregard the valuable semantic relational structure between words in co-occurring contexts. These beneficial semantic relational structures are contained in manually-created knowledge bases (KBs) such as ontologies and semantic lexicons, where the meanings of words are represented by defining the various relationships that exist among those words. We combine the knowledge in both a corpus and a KB to learn better word embeddings. Specifically, we propose a joint word representation learning method that uses the knowledge in the KBs, and simultaneously predicts the co-occurrences of two words in a corpus context. In particular, we use the corpus to define our objective function subject to the relational constraints derived from the KB. We further utilise the corpus co-occurrence statistics to propose two novel approaches, Nearest Neighbour Expansion (NNE) and Hedged Nearest Neighbour Expansion (HNE), that dynamically expand the KB and therefore derive more constraints that guide the optimisation process. Our experimental results over a wide range of benchmark tasks demonstrate that the proposed method statistically significantly improves the accuracy of the word embeddings learnt. It outperforms a corpus-only baseline and improves on a number of previously proposed methods that incorporate corpora and KBs in both semantic similarity prediction and word analogy detection tasks. PMID:29529052

  13. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
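
    The histogram-construction step described above (sorting samples of at least half a waveform cycle by frequency of occurrence) can be sketched in a few lines; the bin count and the unit sinusoid below are illustrative assumptions, not parameters from the proposed modulation scheme:

```python
import numpy as np

def waveform_pdf(samples, bins=16):
    """Build a normalized histogram (an empirical PDF) of waveform samples."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    return hist, edges

# Sample one full cycle of a unit sinusoid.
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
pdf, edges = waveform_pdf(np.sin(t))

# A sinusoid's amplitude PDF is arcsine-shaped: it peaks near +/-1,
# so the outer bins carry more mass than the central ones.
assert pdf[0] > pdf[len(pdf) // 2]
```

    Changing the carrier's amplitude distribution changes this histogram, which is what would let a receiver distinguish symbols.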

  14. Deep learning and texture-based semantic label fusion for brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Vidyaratne, L.; Alam, M.; Shboul, Z.; Iftekharuddin, K. M.

    2018-02-01

    Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning-based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have their inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation algorithms, a texture-based hand-crafted method and a deep learning-based method, to obtain robust tumor segmentation. We evaluate the proposed method using the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method offers improved segmentation by alleviating the inherent weaknesses of each approach: the extensive false positives of the texture-based method and the false tumor tissue classification problem of the deep learning method. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. Note that the substantial improvement in brain tumor segmentation performance achieved in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.

  15. Deep Learning and Texture-Based Semantic Label Fusion for Brain Tumor Segmentation.

    PubMed

    Vidyaratne, L; Alam, M; Shboul, Z; Iftekharuddin, K M

    2018-01-01

    Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning-based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have their inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation algorithms, a texture-based hand-crafted method and a deep learning-based method, to obtain robust tumor segmentation. We evaluate the proposed method using the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method offers improved segmentation by alleviating the inherent weaknesses of each approach: the extensive false positives of the texture-based method and the false tumor tissue classification problem of the deep learning method. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. Note that the substantial improvement in brain tumor segmentation performance achieved in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.

  16. Drug-target interaction prediction via class imbalance-aware ensemble learning.

    PubMed

    Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong

    2016-12-22

    Multiple computational methods for predicting drug-target interactions have been developed to facilitate the drug discovery process. These methods use available data on known drug-target interactions to train classifiers with the purpose of predicting new undiscovered interactions. However, a key challenge in this data, class imbalance, has not yet been addressed by these methods and potentially degrades prediction performance. Class imbalance can be divided into two sub-problems. Firstly, the number of known interacting drug-target pairs is much smaller than that of non-interacting drug-target pairs. This imbalance ratio between interacting and non-interacting drug-target pairs is referred to as the between-class imbalance. Between-class imbalance degrades prediction performance due to the bias in prediction results towards the majority class (i.e. the non-interacting pairs), leading to more prediction errors in the minority class (i.e. the interacting pairs). Secondly, there are multiple types of drug-target interactions in the data, with some types having relatively fewer members (i.e. being less represented) than others. This variation in the representation of the different interaction types leads to another kind of imbalance, referred to as the within-class imbalance, in which prediction results are biased towards the better represented interaction types, leading to more prediction errors in the less represented interaction types. We propose an ensemble learning method that incorporates techniques to address both the between-class and within-class imbalance. Experiments show that the proposed method improves results over four state-of-the-art methods. In addition, we simulated cases for new drugs and targets (those for which no prior interactions are known) to see how our method would perform in predicting their interactions. Our method displayed satisfactory prediction performance and was able to predict many of the interactions successfully. The proposed method improves prediction performance over existing work, demonstrating the importance of addressing class imbalance in the data.
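
    One common way to address the between-class imbalance described above is an undersampling ensemble: each base classifier trains on all minority (interacting) examples plus an equal-size random draw from the majority (non-interacting) class. The sketch below shows only this generic balancing step, not the authors' exact ensemble procedure:

```python
import numpy as np

def balanced_bags(y, n_bags=5, rng=None):
    """Build index sets for an undersampling ensemble: each bag pairs all
    minority examples (label 1) with an equal-size random sample of the
    majority class (label 0), so every base classifier sees balanced data."""
    rng = np.random.default_rng(rng)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    bags = []
    for _ in range(n_bags):
        sampled = rng.choice(majority, size=minority.size, replace=False)
        bags.append(np.concatenate([minority, sampled]))
    return bags

# 10 interacting pairs vs. 90 non-interacting pairs.
y = np.array([1] * 10 + [0] * 90)
for bag in balanced_bags(y, n_bags=3, rng=0):
    assert (y[bag] == 1).sum() == (y[bag] == 0).sum() == 10
```

    Each bag would then train one member of the ensemble, with predictions aggregated by voting or averaging.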

  17. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    DOE PAGES

    Huang, Lei; Xue, Junpeng; Gao, Bo; ...

    2016-12-21

    In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison; especially at the boundaries, the proposed method has better performance. The influence of noise is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
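
    A one-dimensional analogue of the idea can be sketched with SciPy: fit the measured slopes with a cubic spline, then evaluate its analytic antiderivative to recover heights. The paper's method is two-dimensional and least-squares based; this only illustrates the fit-then-integrate principle on synthetic data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 1.0, 100)
true_height = np.sin(2 * np.pi * x)            # ground-truth surface profile
slopes = 2 * np.pi * np.cos(2 * np.pi * x)     # "measured" d(height)/dx

spline = CubicSpline(x, slopes)                # piecewise-polynomial slope fit
height = spline.antiderivative()(x)            # analytic integration
height += true_height[0] - height[0]           # fix the integration constant

assert np.max(np.abs(height - true_height)) < 1e-3
```

    The integration constant is unrecoverable from slopes alone, which is why it must be fixed from a reference point.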

  18. Simplified welding distortion analysis for fillet welding using composite shell elements

    NASA Astrophysics Data System (ADS)

    Kim, Mingyu; Kang, Minseok; Chung, Hyun

    2015-09-01

    This paper presents a simplified welding distortion analysis method to predict the welding deformation of both the plate and the stiffener in fillet welds. Currently, methods based on equivalent thermal strain, such as Strain as Direct Boundary (SDB), are widely used due to their effective prediction of welding deformation. For fillet welding, however, those methods cannot represent the deformation of both members at once, since the temperature degree of freedom is shared at the intersection nodes of both members. In this paper, we propose a new approach to simulate the deformation of both members. The method simulates fillet weld deformations by employing composite shell elements and using different thermal expansion coefficients along the thickness direction, with a fixed temperature at the intersection nodes. For verification purposes, we compare results from experiments, 3D thermo-elastic-plastic analysis, the SDB method, and the proposed method. Compared with the experimental results, the proposed method can effectively predict welding deformation for fillet welds.

  19. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Xue, Junpeng; Gao, Bo

    In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison; especially at the boundaries, the proposed method has better performance. The influence of noise is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.

  20. A new modulated Hebbian learning rule--biologically plausible method for local computation of a principal subspace.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2003-08-01

    This paper presents one possible implementation of a transformation that performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed. The idea implemented in this paper represents a generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature which is usually considered desirable from the biological point of view. Compared to some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by implementation of a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is presented as well.
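
    For context, the best-known local Hebbian update in this family is Oja's subspace rule; the paper's MH/MHO rules differ in how the modulation is computed, but a minimal sketch of the shared idea (purely local updates converging to a principal subspace) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data whose variance is concentrated in the first two coordinates,
# so the principal 2-D subspace is spanned by e1 and e2.
X = rng.normal(size=(5000, 4)) * np.array([3.0, 2.0, 0.3, 0.1])

W = rng.normal(scale=0.1, size=(2, 4))   # 2 output units, 4 inputs
eta = 0.01
for x in X:
    y = W @ x
    W += eta * np.outer(y, x - W.T @ y)  # Oja's subspace (symmetric) rule

# The learned rows should (approximately) span the e1-e2 plane:
# almost all of their squared norm lies in the first two coordinates.
energy = np.sum(W[:, :2] ** 2) / np.sum(W ** 2)
assert energy > 0.9
```

    Each weight update uses only the local pre- and post-synaptic signals plus the network's own feedback term, which is the biological-plausibility argument made above.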

  1. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    NASA Astrophysics Data System (ADS)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for a desired image. Even though new images are profitable to service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher-quality recommendations and better performance than typical collaborative filtering and content-based filtering techniques.

  2. A multiple distributed representation method based on neural network for biomedical event extraction.

    PubMed

    Wang, Anran; Wang, Jian; Lin, Hongfei; Zhang, Jianhai; Yang, Zhihao; Xu, Kan

    2017-12-20

    Biomedical event extraction is one of the frontier domains in biomedical research. The two main subtasks of biomedical event extraction are trigger identification and argument detection, both of which can be considered classification problems. However, traditional state-of-the-art methods are based on support vector machines (SVM) with massive, manually designed, one-hot-represented features, which require enormous work but lack semantic relations among words. In this paper, we propose a multiple distributed representation method for biomedical event extraction. The method combines context, consisting of dependency-based word embeddings, with task-based features represented in a distributed way as the input for training deep learning models. Finally, we use a softmax classifier to label the example candidates. The experimental results on the Multi-Level Event Extraction (MLEE) corpus show higher F-scores of 77.97% in trigger identification and 58.31% overall, compared to the state-of-the-art SVM method. Our distributed representation method for biomedical event extraction avoids the problems of the semantic gap and dimension disaster that affect traditional one-hot representation methods. The promising results demonstrate that our proposed method is effective for biomedical event extraction.

  3. Force Rendering and its Evaluation of a Friction-Based Walking Sensation Display for a Seated User.

    PubMed

    Kato, Ginga; Kuroda, Yoshihiro; Kiyokawa, Kiyoshi; Takemura, Haruo

    2018-04-01

    Most existing locomotion devices that represent the sensation of walking target a user who is actually performing a walking motion. Here, we attempted to represent the walking sensation, especially a kinesthetic sensation and advancing feeling (the sense of moving forward) while the user remains seated. To represent the walking sensation using a relatively simple device, we focused on the force rendering and its evaluation of the longitudinal friction force applied on the sole during walking. Based on the measurement of the friction force applied on the sole during actual walking, we developed a novel friction force display that can present the friction force without the influence of body weight. Using performance evaluation testing, we found that the proposed method can stably and rapidly display friction force. Also, we developed a virtual reality (VR) walk-through system that is able to present the friction force through the proposed device according to the avatar's walking motion in a virtual world. By evaluating the realism, we found that the proposed device can represent a more realistic advancing feeling than vibration feedback.

  4. Dirac delta representation by exact parametric equations. Application to impulsive vibration systems

    NASA Astrophysics Data System (ADS)

    Chicurel-Uziel, Enrique

    2007-08-01

    A pair of closed parametric equations is proposed to represent the Heaviside unit step function. Differentiating the step equations yields two additional parametric equations, also proposed here, to represent the Dirac delta function. These equations are expressed in algebraic terms and are handled by means of elementary algebra and elementary calculus. The proposed delta representation complies exactly with the values of the definition; it also complies with the sifting property and the requisite unit area, and its Laplace transform coincides with the most general form given in the tables. Furthermore, it leads to a very simple method of solution for impulsive vibrating systems, either linear or belonging to a large class of nonlinear problems. Two example solutions are presented.
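
    The paper's exact parametric equations are not reproduced here, but the sifting property that any delta representation must satisfy can be checked numerically with a standard stand-in, a nascent delta (a narrow Gaussian):

```python
import numpy as np

def nascent_delta(x, eps):
    """Gaussian approximation of the Dirac delta; it narrows as eps -> 0
    while keeping unit area."""
    return np.exp(-(x / eps) ** 2) / (eps * np.sqrt(np.pi))

# Sifting property: the integral of f(x) * delta(x - a) dx tends to f(a).
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
a = 1.5
approx = np.sum(np.cos(x) * nascent_delta(x - a, eps=1e-2)) * dx

assert abs(approx - np.cos(a)) < 1e-4
```

    The same check, applied to the proposed parametric representation, is what "complies with the sifting property" amounts to in practice.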

  5. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    PubMed

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

    Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information in a sample. Recently, deep learning methods have achieved better performance, since deep learning architectures can learn more effective image representation features. However, these methods use only semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), which exploits a hierarchical recurrent neural network to generate effective hash codes. There are three contributions in this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, our proposed deep network can directly exploit convolutional feature maps as input to preserve their spatial structure. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of the hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
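
    A toy version of a hashing loss with a quantization penalty, the general idea behind the third contribution above (the actual HRNH loss terms are not specified in this abstract), can be written as:

```python
import numpy as np

def hashing_loss(embeddings, sim, alpha=0.5):
    """Toy hashing objective: a contrastive pairwise term pulls similar
    pairs together and pushes dissimilar pairs beyond a margin, plus a
    quantization penalty between the continuous embeddings and their
    binarized codes sign(embeddings)."""
    codes = np.sign(embeddings)
    quant = np.mean((embeddings - codes) ** 2)       # quantization error
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    pair = np.mean(sim * dists ** 2 + (1 - sim) * np.maximum(0, 2 - dists) ** 2)
    return pair + alpha * quant

# Embeddings that are already binary incur no quantization penalty,
# so the loss value does not depend on alpha.
e = np.array([[1.0, -1.0], [-1.0, 1.0]])
assert hashing_loss(e, np.eye(2), alpha=1.0) == hashing_loss(e, np.eye(2), alpha=0.0)
```

    Minimizing the quantization term is what keeps the continuous embeddings close to the discrete codes actually used at search time.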

  6. Infrared face recognition based on LBP histogram and KW feature selection

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-07-01

    The conventional LBP-based feature, as represented by the local binary pattern (LBP) histogram, still has room for performance improvements. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on LBP histogram representation. To extract local robust features in infrared face images, LBP is chosen to get the composition of micro-patterns of sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to get the LBP patterns which are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms the traditional methods based on the LBP histogram, discrete cosine transform (DCT), or principal component analysis (PCA).
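
    The KW selection step can be sketched with SciPy's Kruskal-Wallis test: score each feature column (e.g. an LBP histogram bin) by its H statistic across classes and keep the top-ranked columns. The synthetic features below are illustrative, not LBP data:

```python
import numpy as np
from scipy.stats import kruskal

def kw_select(features, labels, k):
    """Rank feature columns by the Kruskal-Wallis H statistic across the
    classes and return the indices of the k most discriminative ones."""
    classes = np.unique(labels)
    h_stats = [
        kruskal(*[features[labels == c, j] for c in classes]).statistic
        for j in range(features.shape[1])
    ]
    return np.argsort(h_stats)[::-1][:k]

rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 50)
features = rng.normal(size=(100, 3))
features[:, 2] += labels * 5.0      # only column 2 separates the classes

assert kw_select(features, labels, k=1)[0] == 2
```

    Because the H statistic is rank-based, the selection is robust to the non-Gaussian bin distributions typical of histogram features.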

  7. Position Accuracy Analysis of a Robust Vision-Based Navigation

    NASA Astrophysics Data System (ADS)

    Gaglione, S.; Del Pizzo, S.; Troisi, S.; Angrisano, A.

    2018-05-01

    Using images to determine camera position and attitude is a consolidated method, very widespread for applications like UAV navigation. In harsh environments, where GNSS could be degraded or denied, image-based positioning could represent a possible candidate for an integrated or alternative system. In this paper, such a method is investigated using a system based on a single camera and 3D maps. A robust estimation method is proposed in order to limit the effect of blunders or noisy measurements on the position solution. The proposed approach is tested using images collected in an urban canyon, where GNSS positioning is very inaccurate. A photogrammetric survey was previously performed to build the 3D model of the tested area. A position accuracy analysis is performed and the effect of the proposed robust method is validated.

  8. Explicitly represented polygon wall boundary model for the explicit MPS method

    NASA Astrophysics Data System (ADS)

    Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori

    2015-05-01

    This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong-form partial differential equations. The ERP model expresses wall boundaries as polygons, which are explicitly represented without using the distance function. The polygons are derived so that, for viscous fluids and with less computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the E-MPS method combined with the ERP model.

  9. A duality approach for solving bounded linear programming problems with fuzzy variables based on ranking functions and its application in bounded transportation problems

    NASA Astrophysics Data System (ADS)

    Ebrahimnejad, Ali

    2015-08-01

    There are several methods, in the literature, for solving fuzzy variable linear programming problems (fuzzy linear programming in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out and to overcome these shortcomings a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of that kind of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, one application of this algorithm in solving bounded transportation problems with fuzzy supplies and demands is dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.
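
    For illustration, one standard ranking function for a trapezoidal fuzzy number (a, b, c, d) is the average of its four defining points; the paper's exact ranking function may differ, but this shows how fuzzy quantities are reduced to crisp reals so that simplex pivoting rules can compare them:

```python
def rank(tfn):
    """Rank a trapezoidal fuzzy number (a, b, c, d) by the mean of its
    four defining points, mapping it to a crisp real number (an assumed,
    commonly used linear ranking function)."""
    a, b, c, d = tfn
    return (a + b + c + d) / 4.0

# A fuzzy "about 10" outranks a fuzzy "about 5".
assert rank((8, 9, 11, 12)) == 10.0
assert rank((8, 9, 11, 12)) > rank((3, 4, 6, 7))
```

    With a linear ranking function, comparisons and the optimality tests of the bounded dual simplex method carry over from the crisp case.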

  10. A new method for wind speed forecasting based on copula theory.

    PubMed

    Wang, Yuankun; Ma, Huiqun; Wang, Dong; Wang, Guizuo; Wu, Jichun; Bian, Jinyu; Liu, Jiufu

    2018-01-01

    How to determine a representative wind speed is crucial in wind resource assessment, and accurate wind resource assessments are important to wind farm development. Linear regressions are usually used to obtain the representative wind speed. However, the terrain flexibility of wind farms and the long distances between wind speed sites often lead to low correlation. In this study, the copula method is used to determine the representative year's wind speed at a wind farm by interpreting the interaction between the local wind farm and the meteorological station. The results show that the method proposed here can not only determine the relationship between the local anemometric tower and the nearby meteorological station through Kendall's tau, but can also determine the joint distribution without assuming the variables to be independent. Moreover, the representative wind data can be obtained from the conditional distribution much more reasonably. We hope this study can provide a scientific reference for accurate wind resource assessments.
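
    The Kendall's tau step can be illustrated with SciPy on hypothetical paired wind-speed series (the synthetic data below are an assumption; the copula fitting and conditional-distribution steps of the paper are not shown). Tau captures the monotone dependence that a linear correlation coefficient can miss:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)

# Hypothetical paired daily wind speeds: a mast at the wind farm and a
# nearby meteorological station, dependent but not linearly related.
station = rng.weibull(2.0, size=365) * 6.0
mast = station ** 1.3 + rng.normal(scale=0.5, size=365)

tau, p = kendalltau(station, mast)
assert tau > 0.5 and p < 0.01
```

    Because tau depends only on ranks, it is invariant under the monotone distortion between the two sites, which is exactly why it is a natural parameter for fitting a copula.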

  11. A Simulation Analysis of an Extension of One-Dimensional Speckle Correlation Method for Detection of General In-Plane Translation

    PubMed Central

    Hrabovský, Miroslav

    2014-01-01

    The purpose of the study is to show a proposal for an extension of a one-dimensional speckle correlation method, which is primarily intended for determination of one-dimensional object translation, to the detection of general in-plane object translation. In that view, a numerical simulation of the displacement of the speckle field as a consequence of general in-plane object translation is presented. The translation components a_x and a_y, representing the projections of the vector a of the object's displacement onto the x- and y-axes in the object plane (x, y), are evaluated separately by means of the extended one-dimensional speckle correlation method. Moreover, one can perform a distinct optimization of the method by reduction of the intensity values representing the detected speckle patterns. The theoretical relations between the translation components a_x and a_y of the object and the displacement of the speckle pattern for the selected geometrical arrangement are given and used to verify the correctness of the proposed method. PMID:24592180
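
    The underlying primitive, recovering an in-plane translation from the cross-correlation peak of two speckle patterns, can be sketched with an FFT-based correlation on synthetic data (a generic illustration, not the paper's separate one-dimensional evaluation of a_x and a_y):

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer in-plane translation between two patterns from
    the peak of their FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret the peak position as a (possibly negative) cyclic shift.
    return [int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(3)
pattern = rng.random((128, 128))               # stand-in speckle pattern
moved = np.roll(pattern, shift=(5, -8), axis=(0, 1))

assert estimate_shift(pattern, moved) == [5, -8]
```

    Sub-pixel displacements would additionally require interpolating around the correlation peak.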

  12. Using formal methods to scope performance challenges for Smart Manufacturing Systems: focus on agility.

    PubMed

    Jung, Kiwook; Morris, K C; Lyons, Kevin W; Leong, Swee; Cho, Hyunbo

    2015-12-01

    Smart Manufacturing Systems (SMS) need to be agile to adapt to new situations by using detailed, precise, and appropriate data for intelligent decision-making. The intricacy of the relationship of strategic goals to operational performance across the many levels of a manufacturing system inhibits the realization of SMS. This paper proposes a method for identifying what aspects of a manufacturing system should be addressed to respond to changing strategic goals. The method uses standard modeling techniques in specifying a manufacturing system and the relationship between strategic goals and operational performance metrics. Two existing reference models related to manufacturing operations are represented formally and harmonized to support the proposed method. The method is illustrated for a single scenario using agility as a strategic goal. By replicating the proposed method for other strategic goals and with multiple scenarios, a comprehensive set of performance challenges can be identified.

  13. Using formal methods to scope performance challenges for Smart Manufacturing Systems: focus on agility

    PubMed Central

    Jung, Kiwook; Morris, KC; Lyons, Kevin W.; Leong, Swee; Cho, Hyunbo

    2016-01-01

    Smart Manufacturing Systems (SMS) need to be agile to adapt to new situations by using detailed, precise, and appropriate data for intelligent decision-making. The intricacy of the relationship of strategic goals to operational performance across the many levels of a manufacturing system inhibits the realization of SMS. This paper proposes a method for identifying what aspects of a manufacturing system should be addressed to respond to changing strategic goals. The method uses standard modeling techniques in specifying a manufacturing system and the relationship between strategic goals and operational performance metrics. Two existing reference models related to manufacturing operations are represented formally and harmonized to support the proposed method. The method is illustrated for a single scenario using agility as a strategic goal. By replicating the proposed method for other strategic goals and with multiple scenarios, a comprehensive set of performance challenges can be identified. PMID:27141209

  14. A modified precise integration method for transient dynamic analysis in structural systems with multiple damping models

    NASA Astrophysics Data System (ADS)

    Ding, Zhe; Li, Li; Hu, Yujin

    2018-01-01

    Sophisticated engineering systems are usually assembled by subcomponents with significantly different levels of energy dissipation. Therefore, these damping systems often contain multiple damping models and lead to great difficulties in analyzing. This paper aims at developing a time integration method for structural systems with multiple damping models. The dynamical system is first represented by a generally damped model. Based on this, a new extended state-space method for the damped system is derived. A modified precise integration method with Gauss-Legendre quadrature is then proposed. The numerical stability and accuracy of the proposed integration method are discussed in detail. It is verified that the method is conditionally stable and has inherent algorithmic damping, period error and amplitude decay. Numerical examples are provided to assess the performance of the proposed method compared with other methods. It is demonstrated that the method is more accurate than other methods with rather good efficiency and the stable condition is easy to be satisfied in practice.
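
    The core of a precise integration method is the 2^N-squaring evaluation of the state-transition matrix exp(H*dt). A minimal sketch of that kernel (omitting the paper's extended state-space treatment of multiple damping models and the Gauss-Legendre quadrature of the loads) is:

```python
import numpy as np

def precise_exp(H, dt, N=20):
    """Precise-integration style evaluation of exp(H*dt): take a short
    Taylor series of exp(H*dt/2**N) - I and square it back up N times,
    storing only the increment Ta = exp(.) - I to limit round-off."""
    Hh = H * (dt / 2.0 ** N)
    Hh2 = Hh @ Hh
    Ta = Hh + Hh2 / 2.0 + Hh2 @ Hh / 6.0 + Hh2 @ Hh2 / 24.0
    for _ in range(N):
        Ta = 2.0 * Ta + Ta @ Ta      # since (I + Ta)^2 = I + (2*Ta + Ta@Ta)
    return np.eye(H.shape[0]) + Ta

# Undamped oscillator x'' = -x as a first-order system d/dt [x, v] = H [x, v].
H = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = precise_exp(H, np.pi / 2.0)      # quarter-period transition matrix
state = T @ np.array([1.0, 0.0])     # (x, v) = (1, 0) evolves to (0, -1)

assert np.allclose(state, [0.0, -1.0], atol=1e-9)
```

    Keeping the increment Ta rather than the full matrix is the trick that makes the repeated squaring numerically precise when dt/2^N is tiny.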

  15. Identification of differential pathways in papillary thyroid carcinoma utilizing pathway co-expression analysis.

    PubMed

    Qiu, Wei-Hai; Chen, Gui-Yan; Cui, Lu; Zhang, Ting-Ming; Wei, Feng; Yang, Yong

    2016-01-01

    To identify differential pathways between papillary thyroid carcinoma (PTC) patients and normal controls, we utilize a novel method that combines pathways with a co-expression network. The proposed method includes three steps. In the first step, we conduct pretreatments on the background pathways and obtain representative pathways in PTC. Subsequently, a co-expression network for the representative pathways is constructed using an empirical Bayes (EB) approach to assign a weight value to each pathway. Finally, a random model is used to set the thresholds for identifying differential pathways. We obtained 1267 representative pathways and their weight values based on the co-expressed pathway network, and then, by meeting the criterion (weight > 0.0296), identified a total of 87 differential pathways between PTC patients and normal controls. The top three ranked differential pathways were CREB phosphorylation, attachment of GPI anchor to urokinase plasminogen activator receptor (uPAR), and loss of function of SMAD2/3 in cancer. In conclusion, we successfully identified differential pathways (such as CREB phosphorylation, attachment of GPI anchor to uPAR and post-translational modification: synthesis of GPI-anchored proteins) for PTC using the proposed pathway co-expression method, and these pathways might be potential biomarkers for targeted therapy and detection of PTC.

  16. Probabilistic topic modeling for the analysis and classification of genomic sequences

    PubMed Central

    2015-01-01

    Background Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies focus on the so-called barcode genes, representing a well-defined region of the whole genome. Recently, alignment-free techniques have been gaining importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mer representation and text mining techniques. Methods The presented method is based on probabilistic topic modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find, in a document corpus, the topics (recurrent themes) characterizing classes of documents. This technique, applied to DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and the Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method reaches very similar results to the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. In these conditions the proposed method outperforms RDP and SVM with ultra-short sequences and exhibits a smooth decrease in performance, at every taxonomic level, as the sequence length decreases.
PMID:25916734
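
The k-mer "bag-of-words" view underlying the topic-model approach above can be sketched in a few lines of Python. The sequence below is a hypothetical toy example, and the LDA step itself is omitted; only the k-mer tokenization that would feed it is shown:

```python
from collections import Counter

def kmer_counts(sequence, k):
    """Slide a window of length k over the sequence and count each k-mer.

    The resulting counts play the role of word frequencies in the
    topic-model analogy: each DNA sequence becomes a bag of k-mer 'words'.
    """
    sequence = sequence.upper()
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

# Toy example (hypothetical sequence, not from the RDP dataset)
counts = kmer_counts("ACGTACGT", 3)
print(counts["ACG"])  # the 3-mer ACG occurs twice in this toy sequence
```

In the paper's setting, each sequence's k-mer counts would be assembled into a document-term matrix and passed to an LDA implementation to learn the topics.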

  17. Combining 3D Volume and Mesh Models for Representing Complicated Heritage Buildings

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.

  18. The use and generation of illustrative examples in computer-based instructional systems

    NASA Technical Reports Server (NTRS)

    Selig, William John; Johannes, James D.

    1987-01-01

    A method is proposed whereby the underlying domain knowledge is represented such that illustrative examples may be generated on demand. This method has the advantage that the generated example can follow changes in the domain in addition to allowing automatic customization of the example to the individual.

  19. A swarm-trained k-nearest prototypes adaptive classifier with automatic feature selection for interval data.

    PubMed

    Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C

    2016-08-01

    Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
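
A weighted squared Euclidean distance between interval-valued samples can be sketched as below; each feature is taken as a (lower, upper) pair with one weight per feature. This is a minimal illustration of the idea, and the paper's exact generalization may differ:

```python
def interval_distance(x, y, weights):
    """Weighted squared Euclidean distance between two interval-valued
    samples. x and y are lists of (lower, upper) pairs, one per feature;
    weights holds one non-negative weight per feature. A zero weight
    effectively discards that feature, mirroring the automatic feature
    selection described above.
    """
    return sum(
        w * ((xl - yl) ** 2 + (xu - yu) ** 2)
        for w, (xl, xu), (yl, yu) in zip(weights, x, y)
    )

x = [(0.0, 1.0), (2.0, 4.0)]
y = [(0.5, 1.5), (2.0, 5.0)]
print(interval_distance(x, y, [1.0, 0.5]))  # 0.5 + 0.5*1.0 = 1.0
```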

  20. Contact-free heart rate measurement using multiple video data

    NASA Astrophysics Data System (ADS)

    Hung, Pang-Chan; Lee, Kual-Zheng; Tsai, Luo-Wei

    2013-10-01

    In this paper, we propose a contact-free heart rate measurement method that analyzes sequential images from multiple video data. In the proposed method, skin-like pixels are first detected from the multiple video streams to extract color features. These color features are synchronized and analyzed by independent component analysis. A representative component is finally selected among the independent component candidates to measure the heart rate, which achieves under 2% deviation on average compared with a pulse oximeter in a controlled environment. The advantages of the proposed method include: 1) it uses a low-cost, highly accessible camera device; 2) it eases users' discomfort through contact-free measurement; and 3) it achieves a low error rate and high stability by integrating multiple video data.
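
The final step, recovering the heart rate from a periodic color trace (e.g., the mean skin-pixel value per frame), can be illustrated with a simple zero-crossing frequency estimate on a synthetic trace; the paper itself uses independent component analysis, which is not reproduced here:

```python
import math

def estimate_bpm(trace, fps):
    """Estimate heart rate from a periodic color trace by measuring the
    spacing of upward zero-crossings around the trace mean."""
    mean = sum(trace) / len(trace)
    centered = [v - mean for v in trace]
    crossings = [i for i in range(1, len(centered))
                 if centered[i - 1] < 0 <= centered[i]]
    if len(crossings) < 2:
        return None
    periods = len(crossings) - 1
    span = (crossings[-1] - crossings[0]) / fps  # seconds between first and last crossing
    return 60.0 * periods / span

# Synthetic 72-bpm (1.2 Hz) trace sampled at 30 fps for 10 s
fps = 30
trace = [math.sin(2 * math.pi * 1.2 * i / fps) for i in range(300)]
print(round(estimate_bpm(trace, fps)))  # 72 bpm for this synthetic trace
```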

  1. On the Use of a Signal Quality Index Applying at Tracking Stage Level to Assist the RAIM System of a GNSS Receiver.

    PubMed

    Berardo, Mattia; Lo Presti, Letizia

    2016-07-02

    In this work, a novel signal processing method is proposed to assist the Receiver Autonomous Integrity Monitoring (RAIM) module used in a receiver of Global Navigation Satellite Systems (GNSS) to improve the integrity of the estimated position. The proposed technique represents an evolution of the Multipath Distance Detector (MPDD), thanks to the introduction of a Signal Quality Index (SQI), which is both a metric able to evaluate the goodness of the signal, and a parameter used to improve the performance of the RAIM modules. Simulation results show the effectiveness of the proposed method.

  2. The graph neural network model.

    PubMed

    Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele

    2009-01-01

    Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called the graph neural network (GNN) model, that extends existing neural network methods for processing data represented in graph domains. This GNN model, which can directly process most practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G, n) ∈ IR^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.
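
The fixed-point computation behind τ(G, n) can be illustrated with a scalar toy version, assuming a simple contractive update in place of the learned transition function f_w:

```python
def gnn_states(adj, labels, alpha=0.5, tol=1e-9, max_iter=1000):
    """Iterate the state equation x_n = l_n + alpha * mean(neighbor states)
    to its fixed point. With alpha < 1 the update is a contraction, so the
    iteration converges from any initial state, mirroring the fixed-point
    computation in the GNN model. adj: {node: [neighbors]};
    labels: {node: float}. A toy scalar stand-in for the learned f_w.
    """
    x = {n: 0.0 for n in adj}
    for _ in range(max_iter):
        new = {
            n: labels[n] + alpha * (sum(x[m] for m in adj[n]) / len(adj[n])
                                    if adj[n] else 0.0)
            for n in adj
        }
        if max(abs(new[n] - x[n]) for n in adj) < tol:
            return new
        x = new
    return x

# Toy undirected triangle graph with scalar node labels
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
states = gnn_states(adj, {0: 1.0, 1: 2.0, 2: 3.0})
print({n: round(v, 3) for n, v in states.items()})  # {0: 3.2, 1: 4.0, 2: 4.8}
```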

  3. Efficient dynamic graph construction for inductive semi-supervised learning.

    PubMed

    Dornaika, F; Dahbi, R; Bosaghzadeh, A; Ruichek, Y

    2017-10-01

    Most graph construction techniques assume a transductive setting in which the whole data collection is available at construction time. Addressing graph construction for the inductive setting, in which data arrive sequentially, has received much less attention. For inductive settings, constructing the graph from scratch can be very time consuming. This paper introduces a generic framework that is able to make any graph construction method incremental. This framework yields an efficient and dynamic graph construction method that adds new samples (labeled or unlabeled) to a previously constructed graph. As a case study, we use the recently proposed Two Phase Weighted Regularized Least Square (TPWRLS) graph construction method. The paper has two main contributions. First, we use the TPWRLS coding scheme to represent new sample(s) with respect to an existing database. The representative coefficients are then used to update the graph affinity matrix. The proposed method not only appends the new samples to the graph but also updates the whole graph structure by discovering which nodes are affected by the introduction of new samples and by updating their edge weights. The second contribution of the article is the application of the proposed framework to the problem of graph-based label propagation using multiple observations for vision-based recognition tasks. Experiments on several image databases show that, without any significant loss in the accuracy of the final classification, the proposed dynamic graph construction is more efficient than the batch graph construction. Copyright © 2017 Elsevier Ltd. All rights reserved.
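
The incremental idea, appending a new sample to an existing affinity matrix instead of rebuilding it, can be sketched as follows. A Gaussian-kernel affinity is used here as a simple stand-in for the TPWRLS coding scheme, and the paper's update of edges between existing nodes is omitted:

```python
import math

def add_sample(affinity, data, new_point, sigma=1.0):
    """Append a new sample to an existing affinity matrix in place:
    only the similarities between the new point and the stored samples
    are computed, instead of rebuilding the whole graph from scratch."""
    sims = [math.exp(-sum((a - b) ** 2 for a, b in zip(new_point, p)) /
                     (2 * sigma ** 2)) for p in data]
    for row, s in zip(affinity, sims):   # add the new column to every old row
        row.append(s)
    affinity.append(sims + [1.0])        # new row, self-similarity 1
    data.append(new_point)

data = [[0.0, 0.0], [1.0, 0.0]]
aff = [[1.0, math.exp(-0.5)], [math.exp(-0.5), 1.0]]
add_sample(aff, data, [0.0, 1.0])
print(len(aff), len(aff[0]))  # 3 3 -- the graph grew by one node
```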

  4. Establishment method of a mixture model and its practical application for transmission gears in an engineering vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping

    2012-09-01

    Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, its PDF cannot be well represented by just a single-peak function. For the purpose of representing the distribution characteristics of the complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can be used to automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen and then used as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by the field testing data of a wheel loader. Meanwhile, the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for describing the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error, and enhances the fitting accuracy, and the load spectra compiled by this method can better reflect the actual load characteristics of the gear component.
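
The basic object being fitted, a mixture PDF with several distribution centers, can be illustrated as below. This is a two-component Gaussian mixture with hypothetical parameters, not the paper's fitted load model:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, components):
    """Evaluate a finite mixture PDF: a weighted sum of single-peak
    densities, which is what lets the model capture a load distribution
    with several centers. components: list of (weight, mu, sigma) tuples
    with weights summing to 1."""
    return sum(w * normal_pdf(x, mu, sigma) for w, mu, sigma in components)

# Hypothetical two-center load distribution
comps = [(0.6, 10.0, 2.0), (0.4, 25.0, 3.0)]
# The mixture still integrates to 1 (checked by a fine Riemann sum):
total = sum(mixture_pdf(x / 10.0, comps) * 0.1 for x in range(-200, 500))
print(round(total, 3))  # ≈ 1.0
```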

  5. Writing Abstracts for MLIS Research Proposals Using Worked Examples: An Innovative Approach to Teaching the Elements of Research Design

    ERIC Educational Resources Information Center

    Ondrusek, Anita L.; Thiele, Harold E.; Yang, Changwoo

    2014-01-01

    The authors examined abstracts written by graduate students for their research proposals as a requirement for a course in research methods in a distance learning MLIS program. The students learned under three instructional conditions that involved varying levels of access to worked examples created from abstracts representing research in the LIS…

  6. A channel differential EZW coding scheme for EEG data compression.

    PubMed

    Dehkordi, Vahid R; Daou, Hoda; Labeau, Fabrice

    2011-11-01

    In this paper, a method is proposed to compress multichannel electroencephalographic (EEG) signals in a scalable fashion. Correlation between EEG channels is exploited through clustering using a k-means method. Representative channels for each of the clusters are encoded individually while other channels are encoded differentially, i.e., with respect to their respective cluster representatives. The compression is performed using the embedded zero-tree wavelet encoding adapted to 1-D signals. Simulations show that the scalable features of the scheme lead to a flexible quality/rate tradeoff, without requiring detailed EEG signal modeling.
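
The differential encoding step can be sketched as follows (toy signals; the k-means clustering and the embedded zero-tree wavelet stage are omitted):

```python
def differential_encode(channels, representative):
    """Encode each channel in a cluster as its sample-wise difference
    from the cluster's representative channel; correlated channels yield
    small residuals, which compress better."""
    return [[c - r for c, r in zip(ch, representative)] for ch in channels]

def differential_decode(residuals, representative):
    return [[d + r for d, r in zip(res, representative)] for res in residuals]

rep = [1.0, 2.0, 3.0, 4.0]                               # representative channel
cluster = [[1.1, 2.1, 3.0, 4.2], [0.9, 2.0, 3.1, 3.8]]   # correlated channels
residuals = differential_encode(cluster, rep)
assert differential_decode(residuals, rep) == cluster    # lossless round-trip
```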

  7. The review and results of different methods for facial recognition

    NASA Astrophysics Data System (ADS)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can be operated without the cooperation of the people under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method with more accurate localization on specific databases; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and images with large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method via regressing local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance under various databases. In addition, some improvement measures and suggestions for potential applications will be put forward.

  8. Linguistic multi-criteria decision-making with representing semantics by programming

    NASA Astrophysics Data System (ADS)

    Yang, Wu-E.; Ma, Chao-Qun; Han, Zhi-Qiu

    2017-01-01

    A linguistic multi-criteria decision-making method is introduced. In this method, a discrimination-maximising programming model assigns semantic values to linguistic variables to represent their semantics. Incomplete preferences arising from the use of linguistic information are expressed as constraints of the model. Such an assignment can amplify the difference between alternatives. Thus, the discrimination of the decision model is increased, which helps the decision-maker rank the alternatives for making a decision. We also discuss the parameter setting and its influence, and use an application example to illustrate the proposed method. Further, the results with three types of semantic structure highlight the ability of the method to handle different semantic structures.

  9. Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong

    2016-07-01

    As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
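
As a small illustration of comparing simplified neutrosophic numbers, the sketch below uses the score function s(a) = (2 + T − I − F)/3, one comparison function found in the SNS literature; the paper develops its own operations and comparison method, which may differ:

```python
def snn_score(t, i, f):
    """Score of a simplified neutrosophic number (T, I, F): larger truth
    and smaller indeterminacy/falsity give a larger score. The formula
    (2 + T - I - F) / 3 is used here purely for illustration and is not
    necessarily the comparison method developed in the paper."""
    return (2 + t - i - f) / 3

a = (0.7, 0.1, 0.2)   # strong support, little indeterminacy
b = (0.4, 0.3, 0.5)
assert snn_score(*a) > snn_score(*b)  # a ranks above b
```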

  10. Joint sparse representation for robust multimodal biometrics recognition.

    PubMed

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.

  11. Rolling bearing fault diagnosis and health assessment using EEMD and the adjustment Mahalanobis-Taguchi system

    NASA Astrophysics Data System (ADS)

    Chen, Junxun; Cheng, Longsheng; Yu, Hui; Hu, Shaolin

    2018-01-01

    For the timely identification of the potential faults of a rolling bearing and to observe its health condition intuitively and accurately, a novel fault diagnosis and health assessment model for a rolling bearing based on the ensemble empirical mode decomposition (EEMD) method and the adjustment Mahalanobis-Taguchi system (AMTS) method is proposed. The specific steps are as follows: First, the vibration signal of a rolling bearing is decomposed by EEMD, and the extracted features are used as the input vectors of AMTS. Then, the AMTS method, which is designed to overcome the shortcomings of the traditional Mahalanobis-Taguchi system and to extract the key features, is proposed for fault diagnosis. Finally, a health indicator (HI) concept is proposed according to the results of the fault diagnosis to accomplish the health assessment of a bearing over its life cycle. To validate the superiority of the proposed approach, it is compared with other recent methods and successfully validated on a vibration data set acquired from seeded defects and from an accelerated life test. The results show that this method represents the actual situation well and is able to accurately and effectively identify the fault type.

  12. Real-time line matching from stereo images using a nonparametric transform of spatial relations and texture information

    NASA Astrophysics Data System (ADS)

    Park, Jonghee; Yoon, Kuk-Jin

    2015-02-01

    We propose a real-time line matching method for stereo systems.
    To achieve real-time performance while retaining a high level of matching precision, we first propose a nonparametric transform to represent the spatial relations between neighboring lines and nearby textures as a binary stream. Since the length of a line can vary across images, the matching costs between lines are computed within an overlap area (OA) based on the binary stream. The OA is determined for each line pair by employing the properties of a rectified image pair. Finally, the line correspondence is determined using a winner-takes-all method with a left-right consistency check. To reduce the computational time requirements further, we filter out unreliable matching candidates in advance based on their rectification properties. The performance of the proposed method was compared with state-of-the-art methods in terms of the computational time, matching precision, and recall. The proposed method required 47 ms to match lines from an image pair in the KITTI dataset with an average precision of 95%. We also verified the proposed method under image blur, illumination variation, and viewpoint changes.

  13. Particle-based simulation of charge transport in discrete-charge nano-scale systems: the electrostatic problem

    PubMed Central

    2012-01-01

    The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge.
    For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite differences/finite elements methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous dielectrics, emphasizing their suitability for particle-based simulations: the iterative method proposed by Hoyles et al. and the Induced Charge Computation introduced by Boda et al.
PMID:22338640

  14. Particle-based simulation of charge transport in discrete-charge nano-scale systems: the electrostatic problem.

    PubMed

    Berti, Claudio; Gillespie, Dirk; Eisenberg, Robert S; Fiegna, Claudio

    2012-02-16

    The fast and accurate computation of the electric forces that drive the motion of charged particles at the nanometer scale represents a computational challenge. For this kind of system, where the discrete nature of the charges cannot be neglected, boundary element methods (BEM) represent a better approach than finite differences/finite elements methods. In this article, we compare two different BEM approaches to a canonical electrostatic problem in a three-dimensional space with inhomogeneous dielectrics, emphasizing their suitability for particle-based simulations: the iterative method proposed by Hoyles et al.
    and the Induced Charge Computation introduced by Boda et al.

  15. Some Notes on a Permanent Diaconate in the United Methodist Church. Occasional Papers No. 36.

    ERIC Educational Resources Information Center

    Grimes, Howard

    The permanent order of deacon is proposed in the United Methodist Church. The following are suggestions for governing representative ministry so it can be structured to enliven and renew the life of United Methodism: (1) any view of diaconal and other representative ministries must be seen in relation to the general ministry of all Christians; (2)…

  16. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method.
    The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.

  17. Adaptive torque estimation of robot joint with harmonic drive transmission

    NASA Astrophysics Data System (ADS)

    Shi, Zhiguo; Li, Yuankai; Liu, Guangjun

    2017-11-01

    Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission.
    Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.

  18. Possibilities of Particle Finite Element Methods in Industrial Forming Processes

    NASA Astrophysics Data System (ADS)

    Oliver, J.; Cante, J. C.; Weyler, R.; Hernandez, J.

    2007-04-01

    The work investigates the possibilities offered by the particle finite element method (PFEM) in the simulation of forming problems involving large deformations, multiple contacts, and new boundaries generation.
    The description of the most distinguishing aspects of the PFEM, and its application to simulation of representative forming processes, illustrate the proposed methodology.

  19. Cryptography Would Reveal Alterations In Photographs

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L.

    1995-01-01

    Public-key decryption method proposed to guarantee authenticity of photographic images represented in form of digital files. In method, digital camera generates original data from image in standard public format; also produces coded signature to verify standard-format image data. Scheme also helps protect against other forms of lying, such as attaching false captions.

  20. 76 FR 17846 - Objective Merit Review of Discretionary Financial Assistance and Other Transaction Authority...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-31

    ... initiative, and the idea, method or approach would be ineligible for assistance under a recent, current, or... strengths and weaknesses of the applications. ii. An overall consensus rating will be determined for each...
    unsolicited proposal and represents a unique or innovative idea, method, or approach which would not be...

  1. Application of Classification Methods for Forecasting Mid-Term Power Load Patterns

    NASA Astrophysics Data System (ADS)

    Piao, Minghao; Lee, Heon Gyu; Park, Jin Hyoung; Ryu, Keun Ho

    Currently an automated methodology based on data mining techniques is presented for the prediction of customer load patterns in long
    duration load profiles. The proposed approach in this paper consists of three stages: (i) data preprocessing: noise or outliers are removed and the continuous attribute-valued features are transformed to discrete values; (ii) cluster analysis: k-means clustering is used to create load pattern classes and the representative load profiles for each class; and (iii) classification: we evaluated several supervised learning methods in order to select a suitable prediction method. According to the proposed methodology, power load measured from an AMR (automatic meter reading) system, as well as customer indexes, were used as inputs for clustering. The output of clustering was the classification of representative load profiles (or classes). In order to evaluate the result of forecasting load patterns, several classification methods were applied on a set of high voltage customers of the Korea power system, and the class labels derived from clustering together with other features were used as input to produce classifiers. Lastly, the results of our experiments are presented.

  2. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Interaction

    NASA Technical Reports Server (NTRS)

    DeCarvalho, N. V.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Ratcliffe, J. G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion.
In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140000837','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140000837"><span>Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Migration</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.</p> <p>2013-01-01</p> <p>A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. 
The methodology proposed shows very good qualitative and quantitative agreement with experiments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24996899','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24996899"><span>Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ma, Yunbei; Zhou, Xiao-Hua</p> <p>2017-02-01</p> <p>For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and the difference between covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate finite-sample properties of the proposed estimation methods.
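The resampling technique mentioned in the treatment-selection entry above can be illustrated with a percentile bootstrap for the treatment-effect difference at a single biomarker value. This is only a generic sketch with made-up outcome arms, not the authors' survival-based estimator:

```python
import random

def treatment_effect(outcomes_a, outcomes_b):
    """Difference in mean outcome between two treatment arms (toy effect measure)."""
    return sum(outcomes_a) / len(outcomes_a) - sum(outcomes_b) / len(outcomes_b)

def bootstrap_ci(outcomes_a, outcomes_b, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the effect difference."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        # Resample each arm with replacement and recompute the effect.
        res_a = [rng.choice(outcomes_a) for _ in outcomes_a]
        res_b = [rng.choice(outcomes_b) for _ in outcomes_b]
        estimates.append(treatment_effect(res_a, res_b))
    estimates.sort()
    lo = estimates[int(n_boot * alpha / 2)]
    hi = estimates[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

If the interval for the difference between two treatments excludes zero, the better arm would be selected for that biomarker value.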
Finally, we illustrated the application of the proposed method in a real-world data set.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008PhDT........71M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008PhDT........71M"><span>Modeling of trim panels in the energy finite element analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Moravaeji, Seyed-Javid</p> <p></p> <p>Modeling a trim panel is divided into finding the power exchange through two different paths: (i) through the connections of the outer and inner panels and (ii) directly through the layers. The vibrational power exchanged through the mounts is modeled as the connection of two parallel plates connected via a beam. Wave matrices representing plates and beams are derived separately; then a matrix method is proposed to solve for the wave amplitudes and hence the vibrational power exchange between the plates. A closed-form formula for the case of connection of two identical plates is derived. For the power transmission loss directly through the layers, transfer matrices representing layers made of different materials are first considered. New matrices for a porous layer are derived. A method of finding the layered structure transfer matrix is proposed. It is concluded that in general a single isotropic layer cannot replace a structure accurately.
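The layered-structure transfer matrix described in the trim-panel entry above is, at its core, a chain product of per-layer matrices. A minimal sketch with 2x2 matrices follows; the entries of real acoustic layer matrices depend on material properties and frequency, which are not given here, so plain numeric matrices stand in:

```python
def mat_mul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def structure_matrix(layer_matrices):
    """Chain per-layer transfer matrices (outermost layer first) into one
    overall structure matrix, starting from the identity."""
    total = [[1.0, 0.0], [0.0, 1.0]]  # identity
    for m in layer_matrices:
        total = mat_mul(total, m)
    return total
```

Replacing the panel by an "equivalent" layer then amounts to searching for a single-layer matrix close to this chained product, which is the optimization the entry alludes to.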
Finally, on the basis of an equivalent transfer matrix, an optimization process is proposed to replace the panel by a suitable set of layers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5426918','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5426918"><span>Zero-Sum Matrix Game with Payoffs of Dempster-Shafer Belief Structures and Its Applications on Sensors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Deng, Xinyang; Jiang, Wen; Zhang, Jiandong</p> <p>2017-01-01</p> <p>The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, sometimes the payoffs received by players may be inexact or uncertain, which requires that the model of matrix games has the ability to represent and deal with imprecise payoffs. To meet such a requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in payoffs of a game. Then, a decomposition method is proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, for the possible computation-intensive issue in the proposed decomposition method, as an alternative solution, a Monte Carlo simulation approach is presented, as well. Finally, the proposed zero-sum matrix game with payoffs of Dempster–Shafer belief structures is illustratively applied to the sensor selection and intrusion detection of sensor networks, which shows its effectiveness and application process.
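For orientation, the crisp (single-number) special case that the belief-structure game above generalizes can be solved approximately by fictitious play. This sketch handles only ordinary payoff matrices, not the authors' belief-structure decomposition or their Monte Carlo scheme:

```python
def fictitious_play_value(payoff, n_rounds=20000):
    """Estimate the value of a zero-sum matrix game (row player maximizes) by
    fictitious play: each player repeatedly best-responds to the other's
    empirical strategy; the resulting bounds converge to the game value."""
    n_rows, n_cols = len(payoff), len(payoff[0])
    row_counts = [0] * n_rows
    col_counts = [0] * n_cols
    row_counts[0] = col_counts[0] = 1  # arbitrary initial plays
    for _ in range(n_rounds):
        # Row player's best response to the column player's empirical mix.
        row_scores = [sum(payoff[i][j] * col_counts[j] for j in range(n_cols))
                      for i in range(n_rows)]
        row_counts[row_scores.index(max(row_scores))] += 1
        # Column player's (minimizing) best response to the row mix.
        col_scores = [sum(payoff[i][j] * row_counts[i] for i in range(n_rows))
                      for j in range(n_cols)]
        col_counts[col_scores.index(min(col_scores))] += 1
    total = n_rounds + 1
    # The value is bracketed by the two players' guaranteed averages.
    upper = max(sum(payoff[i][j] * col_counts[j] for j in range(n_cols)) / total
                for i in range(n_rows))
    lower = min(sum(payoff[i][j] * row_counts[i] for i in range(n_rows)) / total
                for j in range(n_cols))
    return (upper + lower) / 2
```

For matching pennies `[[1, -1], [-1, 1]]` the estimate approaches the known value 0; a game with a saddle point converges much faster.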
PMID:28430156</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29082761','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29082761"><span>Brain MR image segmentation using NAMS in pseudo-color.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong</p> <p>2017-12-01</p> <p>Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analyzing and planning. This paper proposes a new approach for segmenting MR brain images by using pseudo-color based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First of all, the NAMS model is presented. The model can represent the image with sub-patterns to keep the image content and largely reduce the data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image can enhance the color contrast between different tissues in brain MR images, which improves both the precision of segmentation and direct visual distinction.
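The gray-to-pseudo-color step can be sketched with a simple rainbow-style ramp. The abstract does not specify the authors' actual mapping, so this piecewise ramp is purely illustrative of how nearby gray levels get pushed into visually distinct hues:

```python
def gray_to_pseudo_color(gray):
    """Map an 8-bit gray level (0..255) to an (R, G, B) triple with a simple
    rainbow-style ramp, so nearby intensities land in distinct hues."""
    if gray < 85:                     # dark range: blue toward cyan
        return (0, gray * 3, 255)
    if gray < 170:                    # mid range: cyan/green toward yellow
        return ((gray - 85) * 3, 255, 255 - (gray - 85) * 3)
    return (255, 255 - (gray - 170) * 3, 0)   # bright range: toward red
```

Applying this per pixel turns a gray-scale MR slice into a pseudo-colored image on which a color-based segmenter can operate.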
Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method performs better in both segmentation precision and storage requirements.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Acronyms&pg=3&id=EJ783637','ERIC'); return false;" href="https://eric.ed.gov/?q=Acronyms&pg=3&id=EJ783637"><span>Are Your Students Critically Reading an Opinion Piece? Have Them RATTKISS It!</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Snair, Scott</p> <p>2008-01-01</p> <p>Scott Snair proposes a mnemonic for students to use when critically examining written opinion. The acronym, RATTKISS, represents a "step-by-step method for understanding and evaluating written opinion." (Contains 1 figure.)</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19102105','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19102105"><span>[Costs of operation of an urban reinforced community health post (Senegal)].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Guèye, Dramé B; Gueye, A S; Wone, I; Sall, F L; Tal, Dia A</p> <p>2007-01-01</p> <p>In Senegal, public funding of community health posts is low. Resources are mainly obtained from the sale of services. The aim of this study is to analyse the operating cost of a community health post and to propose a relevant tariffing that would assure sustainable activities. We used the method of complete costs. Our study finds that the total cost is 20 870 920F.
Wages represent 70% of total expenses, operating costs represent 27%, and investment 4%. The public funding represents a value of 12 257 325F (60% of the total), of which 88% corresponds to expenses induced by civil servant wages. The health committee contributes 33%, and the other participants 7%. At the end of our study, a sustainable and social tariff was proposed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23428648','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23428648"><span>Multivariate temporal dictionary learning for EEG.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Barthélemy, Q; Gouy-Pailler, C; Isaac, Y; Souloumiac, A; Larue, A; Mars, J I</p> <p>2013-04-30</p> <p>This article addresses the issue of representing electroencephalographic (EEG) signals in an efficient way. While classical approaches use a fixed Gabor dictionary to analyze EEG signals, this article proposes a data-driven method to obtain an adapted dictionary. To reach an efficient dictionary learning, appropriate spatial and temporal modeling is required. Inter-channel links are taken into account in the spatial multivariate model, and shift-invariance is used for the temporal model. Multivariate learned kernels are informative (a few atoms code plentiful energy) and interpretable (the atoms can have a physiological meaning). Using real EEG data, the proposed method is shown to outperform the classical multichannel matching pursuit used with a Gabor dictionary, as measured by the representative power of the learned dictionary and its spatial flexibility. Moreover, dictionary learning can capture interpretable patterns: this ability is illustrated on real data, learning a P300 evoked potential. Copyright © 2013 Elsevier B.V.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27184318','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27184318"><span>Using archetypes to create user panels for usability studies: Streamlining focus groups and user studies.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Stavrakos, S-K; Ahmed-Kristensen, S; Goldman, T</p> <p>2016-09-01</p> <p>At the conceptual phase of products such as headphones, designers stress the importance of comfort, e.g. executing comfort studies, and the need for a reliable user panel. This paper proposes a methodology to assemble a reliable user panel representing large populations and validates the proposed framework to predict comfort factors, such as physical fit. Data from 200 heads were analyzed by forming clusters, and 9 archetypal people were identified from the 200-person ear database. The archetypes were validated by comparing the archetypes' responses on physical fit against those of 20 participants interacting with 6 headsets. This paper suggests a new method of selecting representative user samples for prototype testing, compared to costly and time-consuming methods that rely on the analysis of human geometry of large populations. Copyright © 2016 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2232756','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2232756"><span>Evaluation of a proposed method for representing drug terminology.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Cimino, J. J.; McNamara, T.
J.; Meredith, T.; Broverman, C. A.; Eckert, K. C.; Moore, M.; Tyree, D. J.</p> <p>1999-01-01</p> <p>In the absence of a single, standard, multipurpose terminology for representing medications, the HL7 Vocabulary Technical Committee has sought to develop a model for such terms in a way that will provide a unified method for representing them and supporting interoperability among various terminology systems. We evaluated the preliminary model by obtaining terms, represented in our model, from three leading vendors of pharmacy system knowledge bases. A total of 2303 terms were obtained, and 3982 pair-wise comparisons were possible. We found that the components of the term descriptions matched 68-87% of the time and that the overall descriptions matched 53% of the time. The evaluation has identified a number of areas in the model where more rigorous definitions will be needed in order to improve the matching rate. This paper discusses the implications of these results. PMID:10566318</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28591147','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28591147"><span>Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong</p> <p>2017-01-01</p> <p>Leaf-based plant species recognition plays an important role in ecological protection; however, its application to large and modern leaf databases has been a long-standing obstacle due to computational cost and feasibility. Recognizing such limitations, we propose a Jaccard distance based sparse representation (JDSR) method which adopts a two-stage, coarse-to-fine strategy for plant species recognition.
In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage includes a Jaccard distance based weighted sparse representation based classification (WSRC), which aims to approximately represent the test sample in the training space, and classify it by the approximation residuals. Since the training model of our JDSR method involves much fewer but more informative representatives, this method is expected to overcome the limitation of high computational and memory costs in traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature extraction and SRC based plant recognition methods in terms of both accuracy and computational speed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26445039','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26445039"><span>A Quantum-Based Similarity Method in Virtual Screening.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal</p> <p>2015-10-02</p> <p>One of the most widely-used techniques for ligand-based virtual screening is similarity searching. This study adopts the concepts of quantum mechanics to present a state-of-the-art similarity method for molecules inspired by quantum theory. The representation of molecular compounds in mathematical quantum space plays a vital role in the development of the quantum-based similarity approach. One of the key concepts of quantum theory is the use of complex numbers.
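The stage-one Jaccard screening in the JDSR plant-recognition entry above is easy to sketch on binary leaf-feature sets. The feature sets and class labels below are made up for illustration; only the Jaccard distance itself is standard:

```python
def jaccard_distance(a, b):
    """Jaccard distance between two binary feature sets: 1 - |A∩B| / |A∪B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def coarse_candidates(test_features, train_db, keep=2):
    """Stage one: keep the `keep` classes whose closest training sample has
    the smallest Jaccard distance to the test sample."""
    best = {}
    for label, feats in train_db:
        d = jaccard_distance(test_features, feats)
        if label not in best or d < best[label]:
            best[label] = d
    return sorted(best, key=best.get)[:keep]
```

Only the surviving candidate classes would then enter the costlier stage-two weighted sparse representation step.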
Hence, this study proposes three techniques to embed and re-represent molecular compounds in complex-number format. The quantum-based similarity method developed in this study relies on the complex pure Hilbert space of molecules and is called Standard Quantum-Based (SQB). The recall of retrieved active molecules was measured at the top 1% and top 5%, and significance tests were used to evaluate our proposed methods. The MDL drug data report (MDDR), maximum unbiased validation (MUV) and Directory of Useful Decoys (DUD) data sets were used for experiments and were represented by 2D fingerprints. Simulated virtual screening experiments show that the effectiveness of the SQB method significantly increased, owing to the representational power of molecular compounds in complex-number form, compared to the Tanimoto benchmark similarity measure.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27842479','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27842479"><span>RFDT: A Rotation Forest-based Predictor for Predicting Drug-Target Interactions Using Drug Structure and Protein Sequence Information.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Lei; You, Zhu-Hong; Chen, Xing; Yan, Xin; Liu, Gang; Zhang, Wei</p> <p>2018-01-01</p> <p>Identification of interactions between drugs and target proteins plays an important role in discovering new drug candidates. However, identifying drug-target interactions experimentally remains extremely time-consuming, expensive and challenging even nowadays. Therefore, it is urgent to develop new computational methods to predict potential drug-target interactions (DTI).
In this article, a novel computational model is developed for predicting potential drug-target interactions under the theory that each drug-target interaction pair can be represented by the structural properties of drugs and evolutionary information derived from proteins. Specifically, the protein sequences are encoded as Position-Specific Scoring Matrix (PSSM) descriptors, which contain biological evolutionary information, and the drug molecules are encoded as fingerprint feature vectors, which represent the existence of certain functional groups or fragments. Four benchmark datasets involving enzymes, ion channels, GPCRs and nuclear receptors are independently used for establishing predictive models with the Rotation Forest (RF) model. The proposed method achieved prediction accuracies of 91.3%, 89.1%, 84.1% and 71.1% for the four datasets respectively. In order to make our method more persuasive, we compared our classifier with the state-of-the-art Support Vector Machine (SVM) classifier. We also compared the proposed method with other excellent methods. Experimental results demonstrate that the proposed method is effective in the prediction of DTI, and can provide assistance for new drug research and development.
Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27396369','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27396369"><span>A novel method for automated tracking and quantification of adult zebrafish behaviour during anxiety.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nema, Shubham; Hasan, Whidul; Bhargava, Anamika; Bhargava, Yogesh</p> <p>2016-09-15</p> <p>Behavioural neuroscience relies on software-driven methods for behavioural assessment, but the field lacks cost-effective, robust, open-source software for behavioural analysis. Here we propose a novel method which we call ZebraTrack. It includes a cost-effective imaging setup for distraction-free behavioural acquisition, automated tracking using open-source ImageJ software and a workflow for extraction of behavioural endpoints. Our ImageJ algorithm is capable of providing control to users at key steps while maintaining automation in tracking without the need for the installation of external plugins. We have validated this method by testing novelty-induced anxiety behaviour in adult zebrafish. Our results, in agreement with established findings, showed that during state-anxiety, zebrafish showed reduced distance travelled, increased thigmotaxis and freezing events. Furthermore, we proposed a method to represent both spatial and temporal distribution of choice-based behaviour, which is currently not possible using simple videograms. The ZebraTrack method is simple and economical, yet robust enough to give results comparable with those obtained from costly proprietary software like Ethovision XT.
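Endpoints like those reported in the ZebraTrack entry above (distance travelled, thigmotaxis, freezing events) can be computed from any tracked (x, y) trajectory. The tank geometry, wall margin and freezing threshold below are made-up illustrative values, not ZebraTrack's parameters:

```python
def behaviour_endpoints(track, tank=(0.0, 0.0, 20.0, 20.0), wall_margin=2.0,
                        freeze_step=0.1):
    """From an (x, y) trajectory, compute: total distance travelled,
    thigmotaxis ratio (fraction of samples within `wall_margin` of a wall)
    and number of freezing events (runs of near-zero motion)."""
    x0, y0, x1, y1 = tank
    dist = 0.0
    near_wall = 0
    freezing_events = 0
    frozen = False
    for (ax, ay), (bx, by) in zip(track, track[1:]):
        step = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
        dist += step
        if min(bx - x0, x1 - bx, by - y0, y1 - by) < wall_margin:
            near_wall += 1
        if step < freeze_step:
            if not frozen:          # a new freezing bout starts
                freezing_events += 1
            frozen = True
        else:
            frozen = False
    n = len(track) - 1
    return dist, near_wall / n, freezing_events
```

Higher thigmotaxis and more freezing bouts would then be read as increased anxiety-like behaviour, consistent with the findings summarized above.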
We have developed and validated a novel cost-effective method for behavioural analysis of adult zebrafish using open-source ImageJ software. Copyright © 2016 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21067712','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21067712"><span>Texture feature extraction based on a uniformity estimation method for local brightness and structure in chest CT images.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Peng, Shao-Hu; Kim, Deok-Hwan; Lee, Seok-Lyong; Lim, Myung-Kwan</p> <p>2010-01-01</p> <p>Texture features are among the most important feature analysis methods in computer-aided diagnosis (CAD) systems for disease diagnosis. In this paper, we propose a Uniformity Estimation Method (UEM) for local brightness and structure to detect pathological changes in chest CT images. Based on the characteristics of the chest CT images, we extract texture features by proposing an extension of rotation invariant LBP (ELBP(riu4)) and the gradient orientation difference so as to represent a uniform pattern of the brightness and structure in the image. The utilization of the ELBP(riu4) and the gradient orientation difference allows us to extract rotation invariant texture features in multiple directions. Beyond this, we propose to employ the integral image technique to speed up the texture feature computation of the spatial gray level dependent method (SGLDM). Copyright © 2010 Elsevier Ltd.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JSV...418..144L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JSV...418..144L"><span>Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo</p> <p>2018-03-01</p> <p>In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in the conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is performed using the proposed methods containing 2 and 3 elementary random variables.
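The classical (non-dimension-reduced) spectral representation that the entry above builds on can be sketched directly: draw random phases and superpose cosines weighted by the one-sided spectrum. The flat spectrum and discretization below are a made-up test case, not the paper's wind-field model:

```python
import math
import random

def simulate_stationary(spectrum, omegas, n_steps, dt, seed=0):
    """Spectral representation of a zero-mean stationary Gaussian process:
    X(t) = sum_k sqrt(2 * S(w_k) * dw) * cos(w_k * t + phi_k), with phases
    phi_k drawn uniformly on [0, 2*pi). `omegas` is an equally spaced grid
    over the one-sided spectrum S."""
    rng = random.Random(seed)
    dw = omegas[1] - omegas[0]
    amps = [math.sqrt(2.0 * spectrum(w) * dw) for w in omegas]
    phis = [rng.uniform(0.0, 2.0 * math.pi) for _ in omegas]
    return [sum(a * math.cos(w * k * dt + p)
                for a, w, p in zip(amps, omegas, phis))
            for k in range(n_steps)]
```

The sample's variance approaches the integral of the spectrum, which is the basic consistency check for this construction; the dimension-reduction schemes above replace the many random phases with a few elementary random variables.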
Numerical simulation reveals the usefulness of the dimension-reduction representation methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27109931','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27109931"><span>Using phrases and document metadata to improve topic modeling of clinical reports.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Speier, William; Ong, Michael K; Arnold, Corey W</p> <p>2016-06-01</p> <p>Probabilistic topic models provide an unsupervised method for analyzing unstructured text, and have the potential to be integrated into clinical automatic summarization systems. Clinical documents are accompanied by metadata in a patient's medical history and frequently contain multiword concepts that can be valuable for accurately interpreting the included text. While existing methods have attempted to address these problems individually, we present a unified model for free-text clinical documents that integrates contextual patient- and document-level data, and discovers multi-word concepts. In the proposed model, phrases are represented by chained n-grams and a Dirichlet hyper-parameter is weighted by both document-level and patient-level context. This method and three other Latent Dirichlet allocation models were fit to a large collection of clinical reports. Examples of resulting topics demonstrate the results of the new model and the quality of the representations are evaluated using empirical log likelihood. The proposed model was able to create informative prior probabilities based on patient and document information, and captured phrases that represented various clinical concepts. The representation using the proposed model had a significantly higher empirical log likelihood than the compared methods.
Integrating document metadata and capturing phrases in clinical text greatly improves the topic representation of clinical documents. The resulting clinically informative topics may effectively serve as the basis for an automatic summarization system for clinical reports. Copyright © 2016 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25133202','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25133202"><span>Malware analysis using visualized image matrices.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu</p> <p>2014-01-01</p> <p>This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. 
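The visualization idea in the malware entry above can be caricatured in a few lines: hash opcode trigrams to pixel positions and colors in a small RGB matrix, then compare matrices pixel-by-pixel. The trigram hashing and the 8x8 size are simplifications for this sketch, not the paper's exact scheme:

```python
import hashlib

def opcode_image(opcodes, size=8):
    """Render an opcode sequence as a size x size RGB 'image matrix': each
    opcode trigram is hashed to a pixel coordinate and a color."""
    img = [[(0, 0, 0) for _ in range(size)] for _ in range(size)]
    for i in range(len(opcodes) - 2):
        gram = ",".join(opcodes[i:i + 3]).encode()
        h = hashlib.md5(gram).digest()
        x, y = h[0] % size, h[1] % size      # pixel position from hash bytes
        img[y][x] = (h[2], h[3], h[4])       # pixel color from hash bytes
    return img

def image_similarity(img_a, img_b):
    """Fraction of pixel positions whose colors coincide."""
    total = match = 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            match += pa == pb
    return match / total
```

Samples from the same family share long opcode runs and therefore light up overlapping pixels, which is what makes such matrices usable for family classification.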
Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_7 --> <div id="page_8" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="141"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24663365','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24663365"><span>Improved full analytical polygon-based method using Fourier analysis of the three-dimensional affine transformation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pan, Yijie; Wang, Yongtian; Liu, Juan; Li, Xin; Jia, Jia</p> <p>2014-03-01</p> <p>Previous research [Appl.
Opt.52, A290 (2013)] has revealed that Fourier analysis of three-dimensional affine transformation theory can be used to improve the computation speed of the traditional polygon-based method. In this paper, we continue our research and propose an improved full analytical polygon-based method developed upon this theory. Vertex vectors of primitive and arbitrary triangles and the pseudo-inverse matrix were used to obtain an affine transformation matrix representing the spatial relationship between the two triangles. With this relationship and the primitive spectrum, we analytically obtained the spectrum of the arbitrary triangle. This algorithm discards low-level angular dependent computations. In order to add diffusive reflection to each arbitrary surface, we also propose a whole matrix computation approach that takes advantage of the affine transformation matrix and uses matrix multiplication to calculate shifting parameters of similar sub-polygons. The proposed method improves hologram computation speed for the conventional full analytical approach. Optical experimental results are demonstrated which prove that the proposed method can effectively reconstruct three-dimensional scenes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19910068931&hterms=millwater&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAuthor-Name%26N%3D0%26No%3D10%26Ntt%3Dmillwater','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19910068931&hterms=millwater&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAuthor-Name%26N%3D0%26No%3D10%26Ntt%3Dmillwater"><span>Probabilistic methods for rotordynamics analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. 
H.</p> <p>1991-01-01</p> <p>This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4597325','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4597325"><span>A heuristic approach to determine an appropriate number of topics in topic modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2015-01-01</p> <p>Background Topic modelling is an active research field in machine learning. While mainly used to build models from unstructured textual data, it offers an effective means of data mining where samples represent documents, and different biological endpoints or omics data represent words. Latent Dirichlet Allocation (LDA) is the most commonly used topic modelling method across a wide number of technical fields. However, model development can be arduous and tedious, and requires burdensome and systematic sensitivity studies in order to find the best set of model parameters. Often, time-consuming subjective evaluations are needed to compare models. Currently, research has yielded no easy way to choose the proper number of topics in a model beyond a major iterative approach. 
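The rate-of-perplexity-change (RPC) selector this topic-modelling record proposes can be sketched as follows. The candidate topic counts, the perplexity values, and the simple "first drop in RPC" selection rule are illustrative assumptions for demonstration, not the paper's exact change point criterion; in practice the perplexities would come from LDA models fitted at each candidate number of topics.

```python
# Rate of perplexity change (RPC) over candidate topic numbers.
# Perplexity values below are hypothetical.

def rpc(topic_counts, perplexities):
    """|delta perplexity| / |delta k| between consecutive candidates."""
    return [abs(perplexities[i] - perplexities[i - 1]) /
            (topic_counts[i] - topic_counts[i - 1])
            for i in range(1, len(topic_counts))]

def choose_num_topics(topic_counts, perplexities):
    """Pick the candidate just before RPC starts to fall sharply
    (a simplified stand-in for the paper's change point criterion)."""
    rates = rpc(topic_counts, perplexities)
    for i in range(1, len(rates)):
        if rates[i] < rates[i - 1]:
            return topic_counts[i]
    return topic_counts[-1]

counts = [5, 10, 15, 20, 25]
perp = [820.0, 610.0, 540.0, 525.0, 521.0]   # hypothetical perplexities
best = choose_num_topics(counts, perp)
```

Here the perplexity improvement flattens after 10 topics, so the sketch selects 10; the point of the heuristic is that this decision falls out of the RPC profile rather than from a full sensitivity study.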
Methods and results Based on analysis of variation of statistical perplexity during topic modelling, a heuristic approach is proposed in this study to estimate the most appropriate number of topics. Specifically, the rate of perplexity change (RPC) as a function of numbers of topics is proposed as a suitable selector. We test the stability and effectiveness of the proposed method for three markedly different types of ground-truth datasets: Salmonella next generation sequencing, pharmacological side effects, and textual abstracts on computational biology and bioinformatics (TCBB) from PubMed. Conclusion The proposed RPC-based method is demonstrated to choose the best number of topics in three numerical experiments of widely different data types, and for databases of very different sizes. The work required was markedly less arduous than if full systematic sensitivity studies had been carried out with number of topics as a parameter. We understand that additional investigation is needed to substantiate the method's theoretical basis, and to establish its generalizability in terms of dataset characteristics. PMID:26424364</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3408835','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3408835"><span>Pediatric Brain Extraction Using Learning-based Meta-algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Shi, Feng; Wang, Li; Dai, Yakang; Gilmore, John H.; Lin, Weili; Shen, Dinggang</p> <p>2012-01-01</p> <p>Magnetic resonance imaging of pediatric brain provides valuable information for early brain development studies. Automated brain extraction is challenging due to the small brain size and dynamic change of tissue contrast in the developing brains. 
In this paper, we propose a novel Learning Algorithm for Brain Extraction and Labeling (LABEL) especially for pediatric MR brain images. The idea is to perform multiple complementary brain extractions on a given testing image by using a meta-algorithm, including BET and BSE, where the parameters of each run of the meta-algorithm are effectively learned from the training data. Also, the representative subjects are selected as exemplars and used to guide brain extraction of new subjects in different age groups. We further develop a level-set based fusion method to combine multiple brain extractions together with a closed smooth surface for obtaining the final extraction. The proposed method has been extensively evaluated in subjects of three representative age groups, namely neonates (less than 2 months), infants (1–2 years), and children (5–18 years). Experimental results show that, with 45 subjects for training (15 neonates, 15 infants, and 15 children), the proposed method can produce more accurate brain extraction results on 246 testing subjects (75 neonates, 126 infants, and 45 children), i.e., an average Jaccard index of 0.953, compared to those by BET (0.918), BSE (0.902), ROBEX (0.901), GCUT (0.856), and other fusion methods such as Majority Voting (0.919) and STAPLE (0.941). Along with the largely improved computational efficiency, the proposed method demonstrates its suitability for automated brain extraction of pediatric MR images across a large age range. 
PMID:22634859</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28129146','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28129146"><span>A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio</p> <p>2017-11-01</p> <p>Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. 
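A generative sketch of the surface-EMG signal model just described: each sample is zero-mean Gaussian noise whose variance is itself a fresh draw from an inverse gamma distribution. The shape and scale parameters below are hypothetical, chosen only so that the expected variance works out to 1; the paper's marginal-likelihood estimation procedure is not reproduced.

```python
import random
import statistics

# Each EMG sample: Gaussian with mean 0 and a random variance drawn
# from InvGamma(alpha, beta). With alpha = 3, beta = 2 the expected
# variance is beta / (alpha - 1) = 1 (hypothetical parameter values).

def sample_emg(n, alpha=3.0, beta=2.0, seed=0):
    rng = random.Random(seed)
    signal = []
    for _ in range(n):
        # InvGamma(alpha, beta) via the reciprocal of a Gamma(alpha, 1/beta) draw
        variance = 1.0 / rng.gammavariate(alpha, 1.0 / beta)
        signal.append(rng.gauss(0.0, variance ** 0.5))
    return signal

emg = sample_emg(5000)
mean = statistics.fmean(emg)
mean_square = statistics.fmean(x * x for x in emg)   # estimates E[variance]
```

The mean square of the simulated signal estimates the expected variance, which is the quantity the model ties to muscle activity; heavier-than-Gaussian tails come from the variance mixing.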
Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26901203','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26901203"><span>A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia</p> <p>2016-02-18</p> <p>Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, not based on the equation solution but based on the geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. 
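The geometric core of the characteristic-line idea can be reduced to a minimal sketch: find the rotation that brings one line's direction vector onto another's, here via Rodrigues' formula. This is an illustrative reduction under simplifying assumptions (unit direction vectors, non-antiparallel lines), not the authors' full procedure, which also handles translations and a second characteristic line.

```python
# Rodrigues' formula: rotation R taking unit vector u onto unit vector v.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rotation_aligning(u, v):
    """3x3 matrix R with R u = v, for unit vectors u, v (u != -v)."""
    k = cross(u, v)                 # rotation axis, |k| = sin(angle)
    c = dot(u, v)                   # cos(angle)
    if c <= -1.0 + 1e-12:
        raise ValueError("antiparallel directions need a chosen axis")
    f = 1.0 / (1.0 + c)
    kx, ky, kz = k
    K = [[0.0, -kz, ky], [kz, 0.0, -kx], [-ky, kx, 0.0]]
    # R = I + K + f * K^2
    return [[(1.0 if i == j else 0.0) + K[i][j] +
             f * sum(K[i][m] * K[m][j] for m in range(3))
             for j in range(3)] for i in range(3)]

u = (1.0, 0.0, 0.0)                 # direction of one characteristic line
v = (0.0, 1.0, 0.0)                 # direction it should coincide with
R = rotation_aligning(u, v)
Ru = [sum(R[i][j] * u[j] for j in range(3)) for i in range(3)]
```

Because the rotation is built directly from the two directions, no point-cloud equation system is solved, which is the property that avoids the ill-conditioned matrices mentioned above.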
Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4801615','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4801615"><span>A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia</p> <p>2016-01-01</p> <p>Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, not based on the equation solution but based on the geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. 
Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016NatSR...618893B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016NatSR...618893B"><span>Change Point Detection in Correlation Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Barnett, Ian; Onnela, Jukka-Pekka</p> <p>2016-01-01</p> <p>Many systems of interacting elements can be conceptualized as networks, where network nodes represent the elements and network ties represent interactions between the elements. In systems where the underlying network evolves, it is useful to determine the points in time where the network structure changes significantly as these may correspond to functional change points. We propose a method for detecting change points in correlation networks that, unlike previous change point detection methods designed for time series data, requires minimal distributional assumptions. We investigate the difficulty of change point detection near the boundaries of the time series in correlation networks and study the power of our method and competing methods through simulation. 
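A deliberately simplified illustration of the idea behind correlation-based change point detection: compute the correlation in sliding windows and flag the largest jump between consecutive windows. The authors' test statistic is different and distribution-light; the two-regime synthetic data, window size, and "largest jump" rule below are assumptions for demonstration only.

```python
import random

# Two synthetic series whose correlation flips at t = 100, a windowed
# correlation profile, and the largest jump flagged as the change.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(1)
series_x, series_y = [], []
for t in range(200):
    x = rng.gauss(0.0, 1.0)
    noise = rng.gauss(0.0, 1.0)
    # strongly correlated before t = 100, independent afterwards
    y = 0.9 * x + 0.1 * noise if t < 100 else noise
    series_x.append(x)
    series_y.append(y)

window = 50
starts = range(0, len(series_x) - window + 1, window)
corrs = [pearson(series_x[s:s + window], series_y[s:s + window]) for s in starts]
jumps = [abs(corrs[i] - corrs[i - 1]) for i in range(1, len(corrs))]
change_window = jumps.index(max(jumps)) + 1   # index of first post-change window
```

With non-overlapping windows of 50 samples, the third window (index 2, covering t = 100..149) is the first one after the simulated change, and the correlation jump into it dominates.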
We also show the generalizable nature of the method by applying it to stock price data as well as fMRI data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19810020070','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19810020070"><span>Incidences from modifications of the computational methods of the psophic index</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Francois, J.</p> <p>1981-01-01</p> <p>In France, the level of annoyance in areas around airports is represented by the psophic index N. Various modifications were proposed in the method of calculating this index in order to improve it as an annoyance indicator. The quality of the modified N index as a prognosis index for annoyance caused by aircraft noise is evaluated.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ976577.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ976577.pdf"><span>Sequential Pattern Analysis: Method and Application in Exploring How Students Develop Concept Maps</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Chiu, Chiung-Hui; Lin, Chien-Liang</p> <p>2012-01-01</p> <p>Concept mapping is a technique that represents knowledge in graphs. It has been widely adopted in science education and cognitive psychology to aid learning and assessment. To realize the sequential manner in which students develop concept maps, most research relies upon human-dependent, qualitative approaches. 
This article proposes a method for…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10717E..0SB','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10717E..0SB"><span>The description of two-photon Rabi oscillations in the path integral approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Biryukov, A. A.; Degtyareva, Ya. V.; Shleenkov, M. A.</p> <p>2018-04-01</p> <p>The probability of quantum transitions of a molecule between its states under the action of an electromagnetic field is represented as an integral over trajectories of a real alternating functional. A method is proposed for computing the integral using recurrence relations. The method is applied to describe two-photon Rabi oscillations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/21619','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/21619"><span>Nondestructive assessment of timber bridges using a vibration-based method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw</p> <p>2005-01-01</p> <p>This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. 
An analytical model based on simple beam theory was proposed to represent the relationship...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED571607.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED571607.pdf"><span>Method to Identify Deep Cases Based on Relationships between Nouns, Verbs, and Particles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Ide, Daisuke; Kimura, Masaomi</p> <p>2016-01-01</p> <p>Deep cases representing the significant meaning of nouns in sentences play a crucial role in semantic analysis. However, a case tends to be manually identified because it requires understanding the meaning and relationships of words. To address this problem, we propose a method to predict deep cases by analyzing the relationship between nouns,…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/47788','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/47788"><span>Decomposing biodiversity data using the Latent Dirichlet Allocation model, a probabilistic multivariate statistical method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome Chave</p> <p>2014-01-01</p> <p>We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. 
It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19800013455&hterms=sem&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dsem','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19800013455&hterms=sem&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Dsem"><span>The SEM description of interaction of a transient electromagnetic wave with an object</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Pearson, L. W.; Wilton, D. R.</p> <p>1980-01-01</p> <p>The singularity expansion method (SEM), proposed as a means for determining and representing the transient surface current density induced on a scatterer by a transient electromagnetic wave is described. The resulting mathematical description of the transient surface current on the object is discussed. The data required to represent the electromagnetic scattering properties of a given object are examined. Experimental methods which were developed for the determination of the SEM description are discussed. 
The feasibility of characterizing the surface current induced on aircraft flying in proximity to a lightning stroke by way of SEM is examined.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/6391400','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/6391400"><span>Evaluation of epididymal function through specific protein on spermatozoa.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Del Río, A G; De Sánchez, L Z; Sirena, A</p> <p>1984-01-01</p> <p>Investigations were focused on the characterization of specific epididymal proteins on the human spermatozoa as a representative parameter for epididymal function. An easy and attainable method, suitable for investigators and clinical use, is proposed in this article.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4118208','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4118208"><span>Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2014-01-01</p> <p>Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. 
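The traditional mono-exponential back-extrapolation referred to above can be sketched as follows: fit ln C(t) = ln C0 - k t to early indocyanine green concentration samples by least squares, extrapolate to t = 0, and divide the injected dose by C0. The dose, sampling times, and concentrations below are wholly hypothetical, and the paper's optimal (physiologically based) variant is not reproduced.

```python
import math

# Mono-exponential back-extrapolation for plasma volume:
# PV = dose / C0, where C0 is the fitted concentration at t = 0.

def back_extrapolate(times, concs):
    """Least-squares line through (t, ln C); returns (C0, decay rate k)."""
    logs = [math.log(c) for c in concs]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs)) /
             sum((t - mt) ** 2 for t in times))
    intercept = ml - slope * mt
    return math.exp(intercept), -slope

dose_mg = 25.0
times_min = [2.0, 3.0, 4.0, 5.0, 6.0]
# synthetic decay with C0 = 10 mg/L and k = 0.25 /min (hypothetical)
concs_mg_per_l = [10.0 * math.exp(-0.25 * t) for t in times_min]
c0, k = back_extrapolate(times_min, concs_mg_per_l)
plasma_volume_l = dose_mg / c0
```

On these exactly exponential synthetic data the fit recovers C0 = 10 mg/L and k = 0.25/min, giving a plasma volume of 2.5 L; the record above argues that on real kinetics this mono-exponential assumption biases the estimate low.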
The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011SPIE.8285E..65H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011SPIE.8285E..65H"><span>A humming retrieval system based on music fingerprint</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Han, Xingkai; Cao, Baiyu</p> <p>2011-10-01</p> <p>In this paper, we proposed an improved music information retrieval method utilizing the music fingerprint. The goal of this method is to represent the music with compressed musical information. Based on the selected MIDI files, which are generated automatically as our music target database, we evaluate the accuracy, effectiveness, and efficiency of this method. 
In this research we not only extract the feature sequence, which can represent the file effectively, from the query and melody database, but also make it possible to retrieve the results in an innovative way. We investigate the influence of noise on the performance of our system. As the experimental results show, the retrieval accuracy reaches up to 91% in the noise-free case.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005OptEn..44e7002H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005OptEn..44e7002H"><span>Component-based subspace linear discriminant analysis method for face recognition with one training sample</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.</p> <p>2005-05-01</p> <p>Many face recognition algorithms/systems have been developed in the last decade and excellent performances have also been reported when there is a sufficient number of representative training samples. In many real-life applications such as passport identification, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms will degrade dramatically or may not even be implemented. We propose a component-based linear discriminant analysis (LDA) method to solve the one training sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also consider the face detection localization error while training. 
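The component-bunch construction just described (one local region plus copies of it shifted in each of four directions, turning a single training image into several low-dimensional samples) can be sketched as follows; the region size and one-pixel shift are illustrative choices, not the paper's parameters.

```python
# From one image, build a "component bunch": the original local region
# plus four copies shifted up, down, left and right by one pixel.

def crop(image, top, left, h, w):
    return [row[left:left + w] for row in image[top:top + h]]

def component_bunch(image, top, left, h, w, shift=1):
    offsets = [(0, 0), (-shift, 0), (shift, 0), (0, -shift), (0, shift)]
    return [crop(image, top + dt, left + dl, h, w) for dt, dl in offsets]

# a hypothetical 6x6 "image" with distinct pixel values 10*row + col
img = [[10 * r + c for c in range(6)] for r in range(6)]
bunch = component_bunch(img, top=2, left=2, h=2, w=2)
```

Each bunch member is a 2x2 patch; training on all five simulates small localization errors of the face detector, which is the robustness argument made above.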
After that, we propose a subspace LDA method, which is tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experiment results show that our proposed subspace LDA is efficient and overcomes the limitations in existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to draw the recognition decision. A FERET database is used for evaluating the proposed method and results are encouraging.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4327050','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4327050"><span>Weighted Geometric Dilution of Precision Calculations with Matrix Multiplication</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Chen, Chien-Sheng</p> <p>2015-01-01</p> <p>To enhance the performance of location estimation in wireless positioning systems, the geometric dilution of precision (GDOP) is widely used as a criterion for selecting measurement units. Since GDOP represents the geometric effect on the relationship between measurement error and positioning determination error, the smallest GDOP of the measurement unit subset is usually chosen for positioning. The conventional GDOP calculation using matrix inversion method requires many operations. Because more and more measurement units can be chosen nowadays, an efficient calculation should be designed to decrease the complexity. Since the performance of each measurement unit is different, the weighted GDOP (WGDOP), instead of GDOP, is used to select the measurement units to improve the accuracy of location. 
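For reference, the conventional matrix-inversion form of WGDOP that this record sets out to avoid is sqrt(trace((H^T W H)^-1)), where H is the geometry matrix of line-of-sight directions and W the diagonal weight matrix. The sketch below computes it directly with a small Gauss-Jordan inverse; the 2-D geometry and unit weights are hypothetical, and the paper's closed-form multiplication-only solution is not reproduced.

```python
# Baseline weighted GDOP: WGDOP = sqrt(trace((H^T W H)^{-1})).

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

def inverse(m):
    """Gauss-Jordan inverse of a small square matrix."""
    n = len(m)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def wgdop(H, weights):
    W = [[weights[i] if i == j else 0.0 for j in range(len(weights))]
         for i in range(len(weights))]
    M = matmul(matmul(transpose(H), W), H)      # H^T W H
    Minv = inverse(M)
    return sum(Minv[i][i] for i in range(len(Minv))) ** 0.5

# four symmetric 2-D unit line-of-sight directions (hypothetical geometry)
H = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
g = wgdop(H, [1.0, 1.0, 1.0, 1.0])
```

The elimination step is what the closed-form method removes: for this symmetric four-measurement geometry H^T W H is diagonal, and the WGDOP comes out to 1.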
To calculate WGDOP effectively and efficiently, the closed-form solution for WGDOP calculation is proposed when more than four measurements are available. In this paper, an efficient WGDOP calculation method applying matrix multiplication that is easy for hardware implementation is proposed. In addition, the proposed method can also be used when exactly four measurements are available. Even when using the all-in-view method for positioning, the proposed method still can reduce the computational overhead. The proposed WGDOP methods with less computation are compatible with global positioning system (GPS), wireless sensor networks (WSN) and cellular communication systems. PMID:25569755</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_8 --> <div id="page_9" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="161"> <li> <p><a target="_blank" rel="noopener noreferrer" 
onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17051794','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17051794"><span>Characterization of background concentrations of contaminants using a mixture of normal distributions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Qian, Song S; Lyons, Regan E</p> <p>2006-10-01</p> <p>We present a Bayesian approach for characterizing background contaminant concentration distributions using data from sites that may have been contaminated. Our method, focused on estimation, resolves several technical problems of the existing methods sanctioned by the U.S. Environmental Protection Agency (USEPA) (a hypothesis testing based method), resulting in a simple and quick procedure for estimating background contaminant concentrations. The proposed Bayesian method is applied to two data sets from a federal facility regulated under the Resource Conservation and Recovery Act. The results are compared to background distributions identified using existing methods recommended by the USEPA. The two data sets represent low and moderate levels of censorship in the data. Although an unbiased estimator is elusive, we show that the proposed Bayesian estimation method will have a smaller bias than the EPA recommended method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19068433','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19068433"><span>Learning situation models in a smart home.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Brdiczka, Oliver; Crowley, James L; Reignier, Patrick</p> <p>2009-02-01</p> <p>This paper addresses the problem of learning situation models for providing context-aware services. 
Context for modeling human behavior in a smart environment is represented by a situation model describing the environment, its users, and their activities. A framework for acquiring and evolving different layers of a situation model in a smart environment is proposed. Different learning methods are presented as part of this framework: role detection per entity, unsupervised extraction of situations from multimodal data, supervised learning of situation representations, and evolution of a predefined situation model with feedback. The situation model serves as the frame and support for the different methods, making it possible to remain within an intuitive declarative framework. The proposed methods have been integrated into a complete system for a smart home environment. The implementation is detailed, and two evaluations are conducted in the smart home environment. The obtained results validate the proposed approach.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9414E..1EN','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9414E..1EN"><span>A content-based image retrieval method for optical colonoscopy images based on image recognition techniques</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro</p> <p>2015-03-01</p> <p>This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation for colons and rectums to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment.
However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearances of colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearances of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26979127','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26979127"><span>Estimating repetitive spatiotemporal patterns from resting-state brain activity data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Takeda, Yusuke; Hiroe, Nobuo; Yamashita, Okito; Sato, Masa-Aki</p> <p>2016-06-01</p> <p>Repetitive spatiotemporal patterns in spontaneous brain activities have been widely examined in non-human studies. These studies have reported that such patterns reflect past experiences embedded in neural circuits. 
In human magnetoencephalography (MEG) and electroencephalography (EEG) studies, however, spatiotemporal patterns in resting-state brain activities have not been extensively examined. This is because estimating spatiotemporal patterns from resting-state MEG/EEG data is difficult due to their unknown onsets. Here, we propose a method to estimate repetitive spatiotemporal patterns from resting-state brain activity data, including MEG/EEG. Without the information of onsets, the proposed method can estimate several spatiotemporal patterns, even if they are overlapping. We verified the performance of the method by detailed simulation tests. Furthermore, we examined whether the proposed method could estimate the visual evoked magnetic fields (VEFs) without using stimulus onset information. The proposed method successfully detected the stimulus onsets and estimated the VEFs, implying the applicability of this method to real MEG data. The proposed method was applied to resting-state functional magnetic resonance imaging (fMRI) data and MEG data. The results revealed informative spatiotemporal patterns representing consecutive brain activities that dynamically change with time. Using this method, it is possible to reveal discrete events spontaneously occurring in our brains, such as memory retrieval. Copyright © 2016 The Authors. Published by Elsevier Inc. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4714801','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4714801"><span>MATRIX DISCRIMINANT ANALYSIS WITH APPLICATION TO COLORIMETRIC SENSOR ARRAY DATA</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Suslick, Kenneth S.</p> <p>2014-01-01</p> <p>With the rapid development of nano-technology, a “colorimetric sensor array” (CSA), referred to as an optical electronic nose, has been developed for the identification of toxicants. Unlike traditional sensors, which rely on a single chemical interaction, a CSA can measure multiple chemical interactions by using chemo-responsive dyes. The color changes of the chemo-responsive dyes are recorded before and after exposure to toxicants and serve as a template for classification. The color changes are digitized in the form of a matrix with rows representing dye effects and columns representing the spectrum of colors. Thus, matrix-classification methods are highly desirable. In this article, we develop a novel classification method, matrix discriminant analysis (MDA), which is a generalization of linear discriminant analysis (LDA) for data in matrix form. By incorporating the intrinsic matrix structure of the data in discriminant analysis, the proposed method can improve the CSA’s sensitivity and, more importantly, its specificity. A penalized MDA method, PMDA, is also introduced to further incorporate a sparsity structure in the discriminant function. Numerical studies suggest that the proposed MDA and PMDA methods outperform LDA and other competing discriminant methods for matrix predictors. The asymptotic consistency of MDA is also established.
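The matrix-variate discriminant idea above can be illustrated with a short NumPy sketch: a hypothetical two-class matrix LDA that assumes a Kronecker-separable covariance estimated by a single flip-flop pass. This is one plausible reading of MDA, not the paper's exact estimator; the function names and the toy 4×3 "dye × color-channel" data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def kron_cov(Xc):
    """One flip-flop pass for a Kronecker-separable covariance (X ~ N(M, U kron V))."""
    n, p, q = Xc.shape
    V = np.eye(q)
    U = sum(X @ np.linalg.inv(V) @ X.T for X in Xc) / (n * q)
    V = sum(X.T @ np.linalg.inv(U) @ X for X in Xc) / (n * p)
    return U, V

def fit_matrix_lda(X, y):
    """Two-class matrix LDA: per-class mean matrices plus a shared Kronecker covariance."""
    means = {c: X[y == c].mean(axis=0) for c in (0, 1)}
    Xc = np.concatenate([X[y == c] - means[c] for c in (0, 1)])
    U, V = kron_cov(Xc)
    Ui, Vi = np.linalg.inv(U), np.linalg.inv(V)

    def predict(Xnew):
        # Mahalanobis-like distance tr(U^-1 (X-M) V^-1 (X-M)^T) to each class mean
        d = np.stack([
            [np.trace(Ui @ (Xn - means[c]) @ Vi @ (Xn - means[c]).T) for Xn in Xnew]
            for c in (0, 1)
        ])
        return d.argmin(axis=0)

    return predict

# toy "dye x color-channel" response matrices for two analyte classes
n = 60
X = np.concatenate([rng.normal(0.0, 0.5, (n, 4, 3)),
                    rng.normal(1.5, 0.5, (n, 4, 3))])
y = np.array([0] * n + [1] * n)
predict = fit_matrix_lda(X, y)
acc = (predict(X) == y).mean()
```

Exploiting the matrix structure this way keeps the covariance estimate at p×p plus q×q parameters instead of pq×pq, which is the motivation the abstract gives for working with matrix predictors directly.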
R code and data are available online as supplementary material. PMID:26783371</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010IEITI..93..569K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010IEITI..93..569K"><span>Reasoning on the Self-Organizing Incremental Associative Memory for Online Robot Path Planning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kawewong, Aram; Honda, Yutaro; Tsuboyama, Manabu; Hasegawa, Osamu</p> <p></p> <p>Robot path-planning is one of the important issues in robotic navigation. This paper presents a novel robot path-planning approach based on associative memory using Self-Organizing Incremental Neural Networks (SOINN). In the proposed method, an environment is first autonomously divided into a set of path-fragments by junctions. Each fragment is represented by a sequence of preliminarily generated common patterns (CPs). In an online manner, a robot regards the current path as the associative path-fragments, each connected by junctions. A reasoning technique is additionally proposed for decision making at each junction to speed up the exploration time. Distinct from other methods, our method does not ignore the important information about the regions between junctions (path-fragments). The resulting number of path-fragments is also smaller than that of other methods. Evaluation is done via Webots physical 3D simulations and real robot experiments, where only distance sensors are available. Results show that our method can represent the environment effectively; it enables the robot to solve the goal-oriented navigation problem in only one episode, fewer than most Reinforcement Learning (RL) based methods require. The running time is proved finite and scales well with the environment.
The resulting number of path-fragments matches the environment well.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4647694','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4647694"><span>Matching a Distribution by Matching Quantiles Estimation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Sgouropoulos, Nikolaos; Yao, Qiwei; Yastremiz, Claudia</p> <p>2015-01-01</p> <p>Motivated by the problem of selecting representative portfolios for backtesting counterparty credit risks, we propose a matching quantiles estimation (MQE) method for matching a target distribution by that of a linear combination of a set of random variables. An iterative procedure based on ordinary least-squares (OLS) estimation is proposed to compute the MQE. MQE can be easily modified by adding a LASSO penalty term if a sparse representation is desired, or by restricting the matching to a certain range of quantiles so as to match part of the target distribution. The convergence of the algorithm and the asymptotic properties of the estimation, both with and without LASSO, are established. A measure and an associated statistical test are proposed to assess the goodness-of-match. The finite sample properties are illustrated by simulation. An application in selecting a counterparty representative portfolio with a real dataset is reported. The proposed MQE also finds applications in portfolio tracking, which demonstrates the usefulness of combining MQE with LASSO.
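The iterative-OLS idea behind MQE can be sketched in a few lines of NumPy: pair the order statistics of the target sample with the rows of X ordered by the current fitted values, then re-estimate by least squares, and repeat. The names and toy data below are invented; the paper's algorithm additionally includes a convergence criterion and the optional LASSO penalty.

```python
import numpy as np

rng = np.random.default_rng(1)

def matching_quantiles(X, y, iters=50):
    """Iterative OLS for matching-quantiles estimation: regress the sorted
    target sample on the rows of X ordered by the current fitted values."""
    y_sorted = np.sort(y)
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # arbitrary initial pairing
    for _ in range(iters):
        order = np.argsort(X @ w)
        w = np.linalg.lstsq(X[order], y_sorted, rcond=None)[0]
    return w

# target sample whose *distribution* (not pairing) is achievable by X
n, p = 500, 3
X = rng.normal(size=(n, p))
w_true = np.array([1.0, -0.5, 2.0])
y = rng.permutation(X @ w_true + rng.normal(0.0, 0.1, n))  # shuffled on purpose
w = matching_quantiles(X, y)
gap = np.abs(np.quantile(X @ w, [0.1, 0.5, 0.9]) - np.quantile(y, [0.1, 0.5, 0.9])).max()
```

Note that the target is deliberately shuffled: plain OLS on the shuffled pairs would fail, while quantile matching only needs the two distributions to agree, which is exactly the backtesting use case described above.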
PMID:26692592</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006SPIE.6077E..0XE','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006SPIE.6077E..0XE"><span>Spatial detection of tv channel logos as outliers from the content</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ekin, Ahmet; Braspenning, Ralph</p> <p>2006-01-01</p> <p>This paper proposes a purely image-based TV channel logo detection algorithm that can detect logos independently of their motion and transparency features. The proposed algorithm can robustly detect any type of logo, such as transparent and animated ones, without requiring any temporal constraints, whereas known methods have to wait for the occurrence of large motion in the scene and assume stationary logos. The algorithm models logo pixels as outliers from the actual scene content that is represented by multiple 3-D histograms in the YCbCr space. We use four scene histograms corresponding to each of the four corners because the content characteristics change from one image corner to another. A further novelty of the proposed algorithm is that we define image corners and the areas where we compute the scene histograms by a cinematic technique called the Golden Section Rule that is used by professionals.
The robustness of the proposed algorithm is demonstrated over a dataset of representative TV content.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JEI....26f1611B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JEI....26f1611B"><span>Region growing using superpixels with learned shape prior</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Borovec, Jiří; Kybic, Jan; Sugimoto, Akihiro</p> <p>2017-11-01</p> <p>Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed method differs from classical region growing in three important aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speed-up. Second, our method uses learned statistical shape properties that encourage plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily or iteratively using graph cuts. 
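The superpixel-level growing step can be sketched as follows. This toy version keeps only the greedy aggregation rule; the learned ray-feature shape prior and the graph-cut refinement described above are omitted, and the function name, threshold, and "superpixel" data are invented for illustration.

```python
import numpy as np

def grow_region(means, adj, seed, tau=0.2):
    """Greedy superpixel region growing: repeatedly absorb the most similar
    neighbouring superpixel while it stays within `tau` of the region mean."""
    region = {seed}
    while True:
        frontier = {j for i in region for j in adj[i]} - region
        if not frontier:
            break
        mu = np.mean([means[i] for i in region])
        best = min(frontier, key=lambda j: abs(means[j] - mu))
        if abs(means[best] - mu) > tau:        # local similarity rule fails: stop
            break
        region.add(best)
    return region

# six superpixels on a chain; 0-2 are bright (one object), 3-5 dark (background)
means = [0.90, 0.85, 0.92, 0.20, 0.15, 0.10]
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
region = grow_region(means, adj, seed=0)
```

Working on superpixels rather than pixels is what gives the speed-up mentioned above: the loop touches a handful of graph nodes instead of every pixel.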
We demonstrate the performance of the proposed method and compare it with alternative approaches on the task of segmenting individual eggs in microscopy images of Drosophila ovaries.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010TJSAI..25..174T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010TJSAI..25..174T"><span>Text Summarization Model based on Facility Location Problem</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Takamura, Hiroya; Okumura, Manabu</p> <p></p> <p>We propose a novel multi-document generic summarization model based on the budgeted median problem, which is a facility location problem. The summarization method based on our model is an extractive method, which selects sentences from the given document cluster and generates a summary. Each sentence in the document cluster will be assigned to one of the selected sentences, where the former sentence is supposed to be represented by the latter. Our method selects sentences to generate a summary that yields a good sentence assignment and hence covers the whole content of the document cluster. An advantage of this method is that it can incorporate asymmetric relations between sentences such as textual entailment.
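The facility-location objective above (each sentence assigned to its best representative among the selected ones) can be illustrated with a small greedy sketch. The greedy selection is a stand-in for solving the budgeted median problem exactly, and the toy similarity matrix (deliberately asymmetric, to mimic entailment-like relations) is invented.

```python
import numpy as np

def summarize(sim, budget):
    """Greedy extraction for a facility-location summary objective: pick the
    sentence that most increases the total 'best representative' similarity."""
    n = sim.shape[0]
    selected, covered = [], np.zeros(n)   # covered[i] = sim of i to its best pick
    for _ in range(budget):
        gains = [(np.maximum(covered, sim[j]).sum() - covered.sum(), j)
                 for j in range(n) if j not in selected]
        gain, j = max(gains)
        if gain <= 0:
            break
        selected.append(j)
        covered = np.maximum(covered, sim[j])
    return selected

# toy (asymmetric) sentence-similarity matrix: 0-2 one topic, 3-4 another;
# sentence 3 "covers" 4 better than 4 covers 3
sim = np.array([
    [1.0, 0.8, 0.7, 0.1, 0.1],
    [0.8, 1.0, 0.6, 0.1, 0.1],
    [0.7, 0.6, 1.0, 0.2, 0.1],
    [0.1, 0.1, 0.2, 1.0, 0.95],
    [0.1, 0.1, 0.1, 0.6, 1.0],
])
summary = summarize(sim, budget=2)
```

With a budget of two, the sketch picks one sentence per topic, which is exactly the "cover the whole content of the cluster" behaviour the model aims for.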
Through experiments, we showed that the proposed method yields good summaries on the dataset of DUC'04.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3795997','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3795997"><span>Joint Segmentation of Anatomical and Functional Images: Applications in Quantification of Lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT Images</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bagci, Ulas; Udupa, Jayaram K.; Mendhiratta, Neil; Foster, Brent; Xu, Ziyue; Yao, Jianhua; Chen, Xinjian; Mollura, Daniel J.</p> <p>2013-01-01</p> <p>We present a novel method for the joint segmentation of anatomical and functional images. Our proposed methodology unifies the domains of anatomical and functional images, represents them in a product lattice, and performs simultaneous delineation of regions based on random walk image segmentation. Furthermore, we also propose a simple yet effective object/background seed localization method to make the proposed segmentation process fully automatic. Our study uses PET, PET-CT, MRI-PET, and fused MRI-PET-CT scans (77 studies in all) from 56 patients who had various lesions in different body regions. We validated the effectiveness of the proposed method on different PET phantoms as well as on clinical images with respect to the ground truth segmentation provided by clinicians. 
Experimental results indicate that the presented method is superior to threshold and Bayesian methods commonly used in PET image segmentation, is more accurate and robust than other PET-CT segmentation methods recently published in the literature, and is general in the sense that it simultaneously segments multiple scans in real time with the high accuracy needed in routine clinical use. PMID:23837967</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16187409','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16187409"><span>Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C</p> <p>2005-10-01</p> <p>In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables, the small number of samples, and the non-linearity of the problem. It is difficult to get satisfying results by using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM-RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method for selecting the parameters of this algorithm implemented with Gaussian-kernel SVMs: rather than following the common practice of selecting the apparently best parameters, a genetic algorithm is used to search for a pair of optimal parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets: hereditary breast cancer and acute leukaemia.
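The recursive-elimination loop at the core of SVM-RFE can be sketched as follows, with a ridge-regression classifier standing in for the Gaussian-kernel SVM and the genetic-algorithm parameter search omitted. All names and the synthetic "expression" data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def rfe_rank(X, y, lam=1e-2):
    """Recursive feature elimination: repeatedly fit a linear classifier
    (ridge stand-in for the SVM) and drop the feature with smallest |weight|."""
    active = list(range(X.shape[1]))
    ranking = []                       # filled from least to most informative
    while active:
        A = X[:, active]
        w = np.linalg.solve(A.T @ A + lam * np.eye(len(active)), A.T @ y)
        drop = int(np.argmin(np.abs(w)))
        ranking.append(active.pop(drop))
    return ranking[::-1]               # most informative genes first

# toy "expression" data: only genes 0 and 1 carry the class signal
n = 200
y = np.where(rng.random(n) < 0.5, -1.0, 1.0)
X = rng.normal(size=(n, 6))
X[:, 0] += 1.5 * y
X[:, 1] -= 1.0 * y
ranking = rfe_rank(X, y)
```

Re-fitting after every elimination is what distinguishes RFE from a one-shot filter: a feature's weight is always judged relative to the genes still in play.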
The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010EJASP2010..156O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010EJASP2010..156O"><span>A Robust Image Watermarking in the Joint Time-Frequency Domain</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Öztürk, Mahmut; Akan, Aydın; Çekiç, Yalçın</p> <p>2010-12-01</p> <p>With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques are proposed as a solution to copyright protection of digital media files. In this paper, a new, robust, and high-capacity watermarking method that is based on spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET) calculated by the Gabor expansion to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in the joint SF domain. Hence, by combining the advantages of spatial and spectral domain watermarking methods, a robust, invisible, secure, and high-capacity watermarking method is presented. A correlation-based detector is also proposed to detect and extract any possible watermarks on an image. The proposed watermarking method was tested on some commonly used test images under different signal processing attacks like additive noise, Wiener and Median filtering, JPEG compression, rotation, and cropping. 
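The embed-and-correlate logic of such a watermarking scheme can be illustrated with a stripped-down spread-spectrum sketch. For brevity the DET/Gabor transform is replaced by the identity (i.e., embedding directly in the pixel domain), and the key values, strength, and toy image are invented; the paper embeds in selected joint SF-domain coefficients instead.

```python
import numpy as np

rng = np.random.default_rng(3)

def pattern(key, shape):
    """Pseudo-random +/-1 watermark pattern derived from a secret key."""
    return np.sign(np.random.default_rng(key).standard_normal(shape))

def embed(img, key, strength=12.0):
    # the paper modulates selected Gabor/DET coefficients; the transform is
    # the identity here to keep the sketch short
    return img + strength * pattern(key, img.shape)

def detect(img, key):
    """Correlation detector: mean product of the zero-mean image and the pattern."""
    x = img - img.mean()
    return float((x * pattern(key, img.shape)).mean())

img = rng.uniform(0.0, 255.0, (64, 64))
marked = embed(img, key=42)
attacked = marked + rng.normal(0.0, 5.0, img.shape)   # additive-noise attack
score_true = detect(attacked, key=42)    # correct key: response near `strength`
score_false = detect(attacked, key=7)    # wrong key: response near zero
```

The detector works because the image content is nearly uncorrelated with the key's pattern, so only the embedded term survives the averaging; the same thresholded-correlation idea carries over when the embedding is done on transform coefficients.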
Simulation results show that our method is robust against all of the attacks.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1955d0107L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1955d0107L"><span>Blind compressed sensing image reconstruction based on alternating direction method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Qinan; Guo, Shuxu</p> <p>2018-04-01</p> <p>In order to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is obtained by the alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, which suppresses noise and improves the quality of the reconstructed image. This method ensures that the blind compressed sensing model has a unique solution and can recover the original image signal from a complex environment with strong adaptability.
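The alternating loop at the heart of such a model can be sketched for the factorization core Y ≈ DS (the random-projection sensing step is dropped for brevity): a soft-thresholded gradient step updates the sparse codes, and a least-squares step updates the dictionary. All names, sizes, and the λ choice are invented; the paper's alternating-direction scheme differs in its details.

```python
import numpy as np

rng = np.random.default_rng(4)

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def blind_factorize(Y, k, iters=200, lam=0.05):
    """Alternating minimization for Y ~ D S with sparse S and unknown D."""
    m, n = Y.shape
    D = rng.normal(size=(m, k))
    D /= np.linalg.norm(D, axis=0)
    S = np.zeros((k, n))
    for _ in range(iters):
        if S.any():
            D = Y @ np.linalg.pinv(S)            # least-squares dictionary update
            norms = np.linalg.norm(D, axis=0)
            D /= np.where(norms > 1e-12, norms, 1.0)
        L = np.linalg.norm(D, 2) ** 2            # step size from the Lipschitz constant
        S = soft(S - D.T @ (D @ S - Y) / L, lam / L)
    return D, S

# synthetic data: 8-atom dictionary, 2-sparse codes per column
D0 = rng.normal(size=(16, 8))
D0 /= np.linalg.norm(D0, axis=0)
S0 = np.zeros((8, 200))
for j in range(200):
    S0[rng.choice(8, 2, replace=False), j] = rng.normal(size=2)
Y = D0 @ S0
D, S = blind_factorize(Y, k=8)
rel_err = np.linalg.norm(D @ S - Y) / np.linalg.norm(Y)
```

Normalizing the dictionary columns after each update removes the scale ambiguity between D and S, one of the ingredients needed for the uniqueness claim mentioned in the abstract.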
The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under the condition of under-sampling.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21499274-hybrid-method-jm-ecs-combining-matrix-exterior-complex-scaling-methods-scattering-calculations','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21499274-hybrid-method-jm-ecs-combining-matrix-exterior-complex-scaling-methods-scattering-calculations"><span>Hybrid method (JM-ECS) combining the J-matrix and exterior complex scaling methods for scattering calculations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Vanroose, W.; Broeckhove, J.; Arickx, F.</p> <p></p> <p>The paper proposes a hybrid method for calculating scattering processes. It combines the J-matrix method with exterior complex scaling and an absorbing boundary condition. The wave function is represented as a finite sum of oscillator eigenstates in the inner region, and it is discretized on a grid in the outer region. The method is validated for a one- and a two-dimensional model with partial wave equations and a calculation of p-shell nuclear scattering with semirealistic interactions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1988QuEle..18.1372A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1988QuEle..18.1372A"><span>INTERNATIONAL CONFERENCE ON SEMICONDUCTOR INJECTION LASERS SELCO-87: Method for calculation of electrical and optical properties of laser active media</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Aleksandrov, D. 
G.; Filipov, F. I.</p> <p>1988-11-01</p> <p>A method is proposed for calculation of the electron band structure of multicomponent semiconductor solid solutions. Use is made of virtual atomic orbitals formed from real orbitals. The method essentially represents an approximation of a multicomponent solid solution by a binary one. The matrix elements of the Hamiltonian are obtained using the methods of linear combinations of atomic and bound orbitals. Some approximations used in these methods are described.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PhDT........24Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PhDT........24Y"><span>Network selection, Information filtering and Scalable computation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ye, Changqing</p> <p></p> <p>This dissertation explores two application scenarios of sparsity-pursuit methods on large-scale data sets. The first scenario is classification and regression in analyzing high-dimensional structured data, where predictors correspond to nodes of a given directed graph. This arises, for instance, in the identification of disease genes for Parkinson's disease from a network of candidate genes. In such a situation, the directed graph describes dependencies among the genes, where the directions of edges represent certain causal effects. Key to high-dimensional structured classification and regression is how to utilize dependencies among predictors as specified by the directions of the graph. In this dissertation, we develop a novel method that fully takes into account such dependencies formulated through certain nonlinear constraints. We apply the proposed method to two applications: feature selection in large-margin binary classification and in linear regression.
We implement the proposed method through difference convex programming for the cost function and constraints. Finally, theoretical and numerical analyses suggest that the proposed method achieves the desired objectives. An application to disease gene identification is presented. The second application scenario is personalized information filtering, which extracts the information specifically relevant to a user, predicting his/her preference over a large number of items based on the opinions of like-minded users or on item content. This problem is cast into the framework of regression and classification, where we introduce novel partial latent models to integrate additional user-specific and content-specific predictors, for higher predictive accuracy. In particular, we factorize a user-over-item preference matrix into a product of two matrices, each representing a user's preference and an item preference by users. Then we propose a likelihood method to seek the sparsest latent factorization, from a class of over-complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a ``decomposition and combination'' strategy, to break large-scale optimization into many small subproblems to solve in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods scale to a dataset consisting of three billion observations on a single machine with sufficient memory, with good timings.
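The user-over-item factorization step can be sketched with a plain alternating-least-squares loop over the observed entries; the sparsity-promoting likelihood and the parallel "decomposition and combination" machinery are omitted, and all names and the toy rating matrix are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def als(R, mask, k=2, lam=0.1, iters=30):
    """Alternating least squares for R ~ U V^T using only observed entries."""
    m, n = R.shape
    U = rng.normal(0.0, 0.1, (m, k))
    V = rng.normal(0.0, 0.1, (n, k))
    for _ in range(iters):
        for i in range(m):                      # update each user's factor
            J = mask[i]
            A = V[J].T @ V[J] + lam * np.eye(k)
            U[i] = np.linalg.solve(A, V[J].T @ R[i, J])
        for j in range(n):                      # update each item's factor
            I = mask[:, j]
            A = U[I].T @ U[I] + lam * np.eye(k)
            V[j] = np.linalg.solve(A, U[I].T @ R[I, j])
    return U, V

# toy preference matrix: rank-2 ground truth, ~60% of entries observed
m, n, k = 20, 15, 2
U0 = rng.normal(size=(m, k))
V0 = rng.normal(size=(n, k))
R = U0 @ V0.T
mask = rng.random((m, n)) < 0.6
U, V = als(R, mask, k=k)
err = np.abs((U @ V.T - R)[~mask]).mean()      # error on held-out entries
```

Each inner update is a tiny k×k solve, which is why this family of methods decomposes so naturally into the small parallel subproblems the dissertation describes.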
Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20427164','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20427164"><span>Fusion of fuzzy statistical distributions for classification of thyroid ultrasound patterns.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Iakovidis, Dimitris K; Keramidas, Eystratios G; Maroulis, Dimitris</p> <p>2010-09-01</p> <p>This paper proposes a novel approach for thyroid ultrasound pattern representation. Considering that texture and echogenicity are correlated with thyroid malignancy, the proposed approach encodes these sonographic features via a noise-resistant representation. This representation is suitable for the discrimination of nodules of high malignancy risk from normal thyroid parenchyma. The material used in this study includes a total of 250 thyroid ultrasound patterns obtained from 75 patients in Greece. The patterns are represented by fused vectors of fuzzy features. Ultrasound texture is represented by fuzzy local binary patterns, whereas echogenicity is represented by fuzzy intensity histograms. The encoded thyroid ultrasound patterns are discriminated by support vector classifiers. The proposed approach was comprehensively evaluated using receiver operating characteristics (ROCs). The results show that the proposed fusion scheme outperforms previous thyroid ultrasound pattern representation methods proposed in the literature. The best classification accuracy was obtained with a polynomial kernel support vector machine, and reached 97.5% as estimated by the area under the ROC curve. 
The fusion of fuzzy local binary patterns and fuzzy grey-level histogram features is more effective than state-of-the-art approaches for the representation of thyroid ultrasound patterns and can be effectively utilized for the detection of nodules of high malignancy risk in the context of an intelligent medical system. Copyright (c) 2010 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19068438','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19068438"><span>Local synchronization of a complex network model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yu, Wenwu; Cao, Jinde; Chen, Guanrong; Lü, Jinhu; Han, Jian; Wei, Wei</p> <p>2009-02-01</p> <p>This paper introduces a novel complex network model to evaluate the reputation of virtual organizations. By using Lyapunov function and linear matrix inequality (LMI) approaches, the local synchronization of the proposed model is further investigated. Here, local synchronization refers to inner synchronization within a group, not synchronization between different groups. Moreover, several sufficient conditions are derived to ensure the local synchronization of the proposed network model.
Finally, several representative examples are given to show the effectiveness of the proposed methods and theories.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26270915','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26270915"><span>Multilinear Graph Embedding: Representation and Regularization for Images.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Yi-Lei; Hsu, Chiou-Ting</p> <p>2014-02-01</p> <p>Given a set of images, finding a compact and discriminative representation is still a major challenge, especially when multiple latent factors are hidden in the data-generation process. Although multilinear models are widely used to parameterize multifactor images, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but captures local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization, MKGE, to bring manifold learning techniques into multilinear models. Our method theoretically links linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. In our experiments on face and gait recognition, the superior performance demonstrates that MGE represents multifactor images better than classic methods, including HOSVD and its variants.
In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_9 --> <div id="page_10" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="181"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17825008','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17825008"><span>A segmentation/clustering model for the analysis of array CGH data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Picard, F; Robin, S; Lebarbier, E; Daudin, J-J</p> <p>2007-09-01</p> <p>Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances.
A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/20979','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/20979"><span>Nondestructive assessment of single-span timber bridges using a vibration- based method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Xiping Wang; James P. Wacker; Angus M. Morison; John W. Forsman; John R. Erickson; Robert J. Ross</p> <p>2005-01-01</p> <p>This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. 
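The forced-vibration technique above recovers the bridge's natural frequency, which simple beam theory ties directly to stiffness. As a hedged sketch (assuming an Euler-Bernoulli simply supported beam; the material and section values below are illustrative, not data from the study):

```python
import math

def fundamental_frequency(E, I, rho_A, L):
    """First natural frequency (Hz) of a simply supported beam via
    Euler-Bernoulli theory: omega_1 = (pi/L)^2 * sqrt(E*I / (rho*A))."""
    omega1 = (math.pi / L) ** 2 * math.sqrt(E * I / rho_A)
    return omega1 / (2.0 * math.pi)

# Illustrative values only (not from the bridge study):
E = 11e9        # modulus of elasticity, Pa (typical softwood glulam)
I = 0.05        # second moment of area, m^4
rho_A = 900.0   # mass per unit length, kg/m
L = 10.0        # span, m

print(round(fundamental_frequency(E, I, rho_A, L), 2))
```

Inverting this relationship is what lets a measured frequency serve as a proxy for overall superstructure stiffness.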
An analytical model based on simple beam theory was proposed to represent the relationship...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Time+AND+series+AND+design+AND+approach&pg=3&id=EJ927514','ERIC'); return false;" href="https://eric.ed.gov/?q=Time+AND+series+AND+design+AND+approach&pg=3&id=EJ927514"><span>A Generalized Least Squares Regression Approach for Computing Effect Sizes in Single-Case Research: Application Examples</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.</p> <p>2011-01-01</p> <p>A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. 
The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4131459','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4131459"><span>Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C.; Tang, Shou Jiang</p> <p>2014-01-01</p> <p>Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total frame number), automated detection of frames with an abnormality can save physician’s time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from the ones without any abnormality since most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures using an advanced image texture analysis method. The method uses a “texton histogram” of an image block as features. The histogram captures the distribution of different “textons” representing various textures in an endoscopy image. The textons are representative response vectors of an application of a combination of Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns on the image. 
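The texton-histogram feature above can be illustrated in a few lines. This sketch substitutes a basic 8-neighbour LBP for the authors' full LM filter bank plus LBP combination, so it demonstrates the histogram idea rather than their implementation:

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour local binary pattern codes for a 2-D grayscale
    image: each interior pixel gets an 8-bit code, one bit per neighbour
    whose value is >= the centre pixel."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def texton_histogram(img, n_bins=256):
    """Normalized histogram of LBP codes -- a crude stand-in for a texton
    histogram over filter-bank responses."""
    codes = lbp_codes(img.astype(np.float32))
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

# Illustrative image block (random, not endoscopy data).
block = np.random.default_rng(0).integers(0, 256, size=(64, 64))
h = texton_histogram(block)
print(h.shape, round(float(h.sum()), 6))
```

A classifier then compares such per-block histograms to separate abnormal from normal texture.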
Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images. PMID:25132723</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JMOp...65..367K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JMOp...65..367K"><span>Face recognition based on symmetrical virtual image and original training image</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao</p> <p>2018-02-01</p> <p>In face representation-based classification methods, we are able to obtain high recognition rate if a face has enough available training samples. However, in practical applications, we only have limited training samples to use. In order to obtain enough training samples, many methods simultaneously use the original training samples and corresponding virtual samples to strengthen the ability of representing the test sample. One is directly using the original training samples and corresponding mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the integration of the original training and mirror samples might not well represent the test samples. To tackle the above-mentioned problem, in this paper, we propose a novel method to obtain a kind of virtual samples which are generated by averaging the original training samples and corresponding mirror samples. Then, the original training samples and the virtual samples are integrated to recognize the test sample. 
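The virtual-sample construction described above is simple to state concretely. A minimal sketch, assuming grayscale face images stored as 2-D arrays (the tiny 4x4 "face" is illustrative only):

```python
import numpy as np

def virtual_samples(train_images):
    """For each training image, build the virtual sample described above:
    the pixel-wise average of the image and its left-right mirror."""
    return [(img + np.fliplr(img)) / 2.0 for img in train_images]

# Tiny illustrative "face" (4x4) -- values are arbitrary.
img = np.arange(16, dtype=float).reshape(4, 4)
v = virtual_samples([img])[0]
# A virtual sample built this way is exactly left-right symmetric.
print(np.allclose(v, np.fliplr(v)))
```

The original and virtual sets are then pooled to represent each test sample.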
Experimental results on five face databases show that the proposed method is able to partly overcome the challenges of the various poses, facial expressions and illuminations of the original face images.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19423602','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19423602"><span>In itinere strategic environmental assessment of an integrated provincial waste system.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Federico, Giovanna; Rizzo, Gianfranco; Traverso, Marzia</p> <p>2009-06-01</p> <p>This paper addresses the practical problem of analysing the performance of provincial waste systems in an integrated way, within the framework of Strategic Environmental Assessment (SEA). In particular, the in itinere phase of SEA is analysed herein. After separating out a proper group of ambits on which the waste system is expected to have relevant impacts, pertinent sets of single indicators are proposed. Through the adoption of such indicators the time trend of the system is investigated, and the suitability of each indicator is critically revised. The structure of the evaluation scheme, which is essentially based on the use of ambit issues and analytical indicators, calls for the application of the method of the Dashboard of Sustainability for the integrated evaluation of the whole system. The suitability of this method is shown throughout the paper, together with the possibility of a comparative analysis of different scenarios of interventions. Of course, the reliability of the proposed method strongly relies on the availability of a detailed set of territorial data.
The method appears to represent a useful tool for public administration in the process of optimizing the policy actions aimed at minimizing the increasing problem represented by waste production in urban areas.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27295691','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27295691"><span>A Novel Locally Linear KNN Method With Applications to Visual Recognition.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Qingfeng; Liu, Chengjun</p> <p>2017-09-01</p> <p>A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficients' truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. 
In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2864926','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2864926"><span>Multi Groups Cooperation based Symbiotic Evolution for TSK-type Neuro-Fuzzy Systems Design</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Cheng, Yi-Chang; Hsu, Yung-Chi</p> <p>2010-01-01</p> <p>In this paper, a TSK-type neuro-fuzzy system with multi groups cooperation based symbiotic evolution method (TNFS-MGCSE) is proposed. The TNFS-MGCSE is developed from symbiotic evolution. Symbiotic evolution differs from traditional GAs (genetic algorithms) in that each chromosome in symbiotic evolution represents a single rule of the fuzzy model. The MGCSE differs from traditional symbiotic evolution in that the population in MGCSE is divided into several groups. Each group, formed by a set of chromosomes, represents a fuzzy rule and cooperates with other groups to generate better chromosomes using the proposed cooperation based crossover strategy (CCS). In this paper, the proposed TNFS-MGCSE is evaluated on numerical examples (Mackey-Glass chaotic time series and sunspot number forecasting). In the simulations, the TNFS-MGCSE achieves excellent performance compared with other existing models.
PMID:21709856</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29630616','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29630616"><span>Forecasting new product diffusion using both patent citation and web search traffic.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Won Sang; Choi, Hyo Shin; Sohn, So Young</p> <p>2018-01-01</p> <p>Accurate demand forecasting for new technology products is a key factor in the success of a business. We propose a way to forecast a new product's diffusion through technology diffusion and interest diffusion. Technology diffusion and interest diffusion are measured by the volume of patent citations and web search traffic, respectively. We apply the proposed method to forecast the sales of hybrid cars and industrial robots in the US market. The results show that technology diffusion, as represented by patent citations, can explain long-term sales for hybrid cars and industrial robots. On the other hand, interest diffusion, as represented by web search traffic, can help to improve the predictability of market sales of hybrid cars in the short-term. However, interest diffusion has difficulty explaining the sales of industrial robots, owing to different market characteristics.
These findings indicate that our proposed model can explain the diffusion of consumer goods relatively well.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26221625','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26221625"><span>A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Amudha, P; Karthik, S; Sivakumari, S</p> <p>2015-01-01</p> <p>Intrusion detection has become a main part of network security due to the huge number of attacks which affects the computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed to integrate Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to predict the intrusion detection problem. The algorithms are combined together to find out better optimization results and the classification accuracies are obtained by 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and test its effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used.
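The 10-fold cross-validation protocol mentioned above can be sketched as follows. The nearest-centroid classifier and the synthetic two-class data are placeholders; the paper's MABC-EPSO hybrid is not reproduced here:

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cross_val_accuracy(X, y, k=10):
    """Average held-out accuracy over k folds, using a nearest-centroid
    placeholder classifier (a stand-in for the MABC-EPSO hybrid)."""
    folds = kfold_indices(len(X), k)
    accs = []
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        labels = np.array(sorted(centroids))
        pred = [labels[np.argmin([np.linalg.norm(x - centroids[c])
                                  for c in labels])] for x in X[test]]
        accs.append(float(np.mean(pred == y[test])))
    return float(np.mean(accs))

# Two well-separated synthetic classes -- illustrative data only.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(6, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
print(cross_val_accuracy(X, y) > 0.9)
```

Each sample serves as held-out data exactly once, which is what makes the averaged accuracy an honest performance estimate.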
The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4491731','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4491731"><span>A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Amudha, P.; Karthik, S.; Sivakumari, S.</p> <p>2015-01-01</p> <p>Intrusion detection has become a main part of network security due to the huge number of attacks which affects the computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed to integrate Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to predict the intrusion detection problem. The algorithms are combined together to find out better optimization results and the classification accuracies are obtained by 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and test its effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different. 
PMID:26221625</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.H13E1593A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.H13E1593A"><span>Smart Fluids in Hydrology: Use of Non-Newtonian Fluids for Pore Structure Characterization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abou Najm, M. R.; Atallah, N. M.; Selker, J. S.; Roques, C.; Stewart, R. D.; Rupp, D. E.; Saad, G.; El-Fadel, M.</p> <p>2015-12-01</p> <p>Classic porous media characterization relies on typical infiltration experiments with Newtonian fluids (i.e., water) to estimate hydraulic conductivity. However, such experiments are generally not able to discern important characteristics such as pore size distribution or pore structure. We show that introducing non-Newtonian fluids provides additional unique flow signatures that can be used for improved pore structure characterization while still representing the functional hydraulic behavior of real porous media. We present a new method for experimentally estimating the pore structure of porous media using a combination of Newtonian and non-Newtonian fluids. The proposed method transforms results of N infiltration experiments using water and N-1 non-Newtonian solutions into a system of equations that yields N representative radii (Ri) and their corresponding percent contribution to flow (wi). This method allows for estimating the soil retention curve using only saturated experiments. Experimental and numerical validation comparing the functional flow behavior of different soils to their modeled flow with N representative radii revealed the ability of the proposed method to represent the water retention and infiltration behavior of real soils. 
The experimental results showed the ability of such fluids to outsmart Newtonian fluids and infer pore size distribution and unsaturated behavior using simple saturated experiments. Specifically, we demonstrate using synthetic porous media that the use of different non-Newtonian fluids enables the definition of the radii and corresponding percent contribution to flow of multiple representative pores, thus improving the ability of pore-scale models to mimic the functional behavior of real porous media in terms of flow and porosity. The results advance the knowledge towards conceptualizing the complexity of porous media and can potentially impact applications in fields like irrigation efficiencies, vadose zone hydrology, soil-root-plant continuum, carbon sequestration into geologic formations, soil remediation, petroleum reservoir engineering, oil exploration and groundwater modeling.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23623604','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23623604"><span>Evaluation of multiple muscle loads through multi-objective optimization with prediction of subjective satisfaction level: illustration by an application to handrail position for standing.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chihara, Takanori; Seo, Akihiko</p> <p>2014-03-01</p> <p>Proposed here is an evaluation of multiple muscle loads and a procedure for determining optimum solutions to ergonomic design problems. The simultaneous muscle load evaluation is formulated as a multi-objective optimization problem, and optimum solutions are obtained for each participant. In addition, one optimum solution for all participants, which is defined as the compromise solution, is also obtained. 
Moreover, the proposed method provides both objective and subjective information to support the decision making of designers. The proposed method was applied to the problem of designing the handrail position for the sit-to-stand movement. The height and distance of the handrails were the design variables, and surface electromyograms of four muscles were measured. The optimization results suggest that the proposed evaluation represents the impressions of participants more completely than an independent use of muscle loads. In addition, the compromise solution is determined, and the benefits of the proposed method are examined. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28386705','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28386705"><span>A new concept and a comprehensive evaluation of SYSMEX UF-1000i  flow cytometer to identify culture-negative urine specimens in patients with UTI.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Monsen, T; Ryden, P</p> <p>2017-09-01</p> <p>Urinary tract infections (UTIs) are among the most common bacterial infections in men and urine culture is gold standard for diagnosis. Considering the high prevalence of culture-negative specimens, any method that identifies such specimens is of interest. The aim was to evaluate a new screening concept for flow cytometry analysis (FCA). The outcomes were evaluated against urine culture, uropathogen species and three conventional screening methods. A prospective, consecutive study examined 1,312 urine specimens, collected during January and February 2012. The specimens were analyzed using the Sysmex UF1000i FCA. 
Based on the FCA data, culture-negative specimens were identified in a new model by use of linear discriminant analysis (FCA-LDA). In total, 1,312 patients were included. In- and outpatients represented 19.6% and 79.4%, respectively; 68.3% of the specimens originated from women. Of the 610 culture-positive specimens, Escherichia coli represented 64%, enterococci 8% and Klebsiella spp. 7%. Screening with FCA-LDA at 95% sensitivity identified 42% (552/1312) as culture-negative specimens when UTI was defined according to European guidelines. The proposed screening method was either superior or similar in comparison to the three conventional screening methods. In conclusion, the new FCA-LDA screening method was superior or similar to the three conventional screening methods. We recommend the proposed screening method to be used in the clinic to exclude culture-negative specimens and to reduce workload, costs and turnaround time. In addition, the FCA data may add information that enhances handling and supports diagnosis of patients with suspected UTI pending urine culture [corrected].</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19750012772','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19750012772"><span>Imperial Valley's proposal to develop a guide for geothermal development within its county</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Pierson, D. E.</p> <p>1974-01-01</p> <p>A plan to develop the geothermal resources of the Imperial Valley of California is presented. The plan consists of development policies and includes text and graphics setting forth the objectives, principles, standards, and proposals. The plan allows developers to know the goals of the surrounding community and provides a method for decision making to be used by county representatives.
A summary impact statement for the geothermal development aspects is provided.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28693003','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28693003"><span>Information-Theoretic Performance Analysis of Sensor Networks via Markov Modeling of Time Series Data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A</p> <p>2018-06-01</p> <p>This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state.
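The information-contribution measure, a difference between the entropy of a plain Markov machine and that of a state-conditioned machine, can be illustrated with empirical counts. A hedged sketch on synthetic binary symbols (the conditioning scheme is a crude analogue, not the authors' exact x-Markov construction):

```python
import numpy as np
from collections import Counter

def markov_entropy(sym, cond=None):
    """Empirical entropy rate (bits) of a first-order Markov model of `sym`.
    If `cond` is given, transitions are additionally conditioned on it --
    a crude analogue of the x-Markov construction described above."""
    trans = Counter()
    for i in range(len(sym) - 1):
        ctx = (sym[i],) if cond is None else (sym[i], cond[i])
        trans[(ctx, sym[i + 1])] += 1
    ctx_counts = Counter()
    for (ctx, _), n in trans.items():
        ctx_counts[ctx] += n
    total = sum(ctx_counts.values())
    h = 0.0
    for (ctx, _), n in trans.items():
        p_trans = n / ctx_counts[ctx]
        h -= (ctx_counts[ctx] / total) * p_trans * np.log2(p_trans)
    return h

# Illustrative data: a slowly varying binary "network state" and a sensor
# whose symbols track that state 80% of the time.
rng = np.random.default_rng(0)
state = [0]
for _ in range(999):
    state.append(state[-1] if rng.random() < 0.95 else 1 - state[-1])
state = np.array(state)
sensor = np.where(rng.random(1000) < 0.8, state, 1 - state)
h_plain = markov_entropy(sensor)
h_cond = markov_entropy(sensor, cond=state)
# Conditioning on the network state cannot increase the empirical entropy;
# the drop approximates the sensor's information contribution.
print(h_plain >= h_cond)
```

The size of the entropy drop is what separates informative sensors from ones that merely echo noise.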
The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22363527','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22363527"><span>A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong</p> <p>2012-01-01</p> <p>Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. 
Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. 
The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JBO....22f6008W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JBO....22f6008W"><span>Optical coherence tomography angiography-based capillary velocimetry</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Ruikang K.; Zhang, Qinqin; Li, Yuandong; Song, Shaozhen</p> <p>2017-06-01</p> <p>Challenge persists in the field of optical coherence tomography (OCT) when it is required to quantify capillary blood flow within tissue beds in vivo. We propose a useful approach to statistically estimate the mean capillary flow velocity using a model-based statistical method of eigendecomposition (ED) analysis of the complex OCT signals obtained with the OCT angiography (OCTA) scanning protocol. ED-based analysis is achieved by the covariance matrix of the ensemble complex OCT signals, upon which the eigenvalues and eigenvectors that represent the subsets of the signal makeup are calculated. From this analysis, the signals due to moving particles can be isolated by employing an adaptive regression filter to remove the eigencomponents that represent static tissue signals. The mean frequency (MF) of moving particles can be estimated by the first lag-one autocorrelation of the corresponding eigenvectors. Three important parameters are introduced, including the blood flow signal power representing the presence of blood flow (i.e., OCTA signals), the MF indicating the mean velocity of blood flow, and the frequency bandwidth describing the temporal flow heterogeneity within a scanned tissue volume. 
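The ED-based analysis described above can be sketched numerically: eigendecompose the ensemble covariance, project out the dominant (static) eigencomponents, and read the mean frequency off the lag-one autocorrelation of the residual. All values below are illustrative synthetic data, not OCT measurements, and the regression filter is reduced to removing a fixed number of eigencomponents:

```python
import numpy as np

def mean_frequency(ensemble, n_static=1, dt=1e-4):
    """Sketch of the eigendecomposition (ED) idea: `ensemble` is an
    (n_samples, n_time) complex array. Project out the `n_static`
    dominant eigencomponents (static tissue), then estimate the mean
    frequency (Hz) from the lag-one autocorrelation of the residual."""
    cov = ensemble.conj().T @ ensemble / ensemble.shape[0]
    _, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    static = vecs[:, -n_static:]            # dominant eigenvectors
    resid = ensemble - (ensemble @ static) @ static.conj().T
    r1 = np.sum(resid[:, 1:] * resid[:, :-1].conj())  # lag-one autocorrelation
    return np.angle(r1) / (2 * np.pi * dt)

# Synthetic ensemble (illustrative): a strong static component plus a weak
# moving component oscillating at 312.5 Hz, sampled every 0.1 ms.
rng = np.random.default_rng(0)
t = np.arange(64) * 1e-4
a = rng.normal(size=(32, 1)) + 1j * rng.normal(size=(32, 1))          # static
b = 0.2 * (rng.normal(size=(32, 1)) + 1j * rng.normal(size=(32, 1)))  # moving
noise = 0.01 * (rng.normal(size=(32, 64)) + 1j * rng.normal(size=(32, 64)))
ensemble = a * np.ones_like(t) + b * np.exp(2j * np.pi * 312.5 * t) + noise
print(abs(mean_frequency(ensemble) - 312.5) < 20)
```

The recovered frequency tracks the moving component even though the static component carries far more power, which is the point of removing the dominant eigencomponents first.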
The proposed approach is tested using scattering phantoms, in which microfluidic channels are used to simulate the functional capillary vessels that are perfused with the scattering intralipid solution. The results indicate a linear relationship between the MF and mean flow velocity. In vivo animal experiments are also conducted by imaging mouse brain with distal middle cerebral artery ligation to test the capability of the method to image the changes in capillary flows in response to an ischemic insult, demonstrating the practical usefulness of the proposed method for providing important quantifiable information about capillary tissue beds in the investigations of neurological conditions in vivo.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26209474','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26209474"><span>A Practical Application of Value of Information and Prospective Payback of Research to Prioritize Evaluative Research.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Andronis, Lazaros; Billingham, Lucinda J; Bryan, Stirling; James, Nicholas D; Barton, Pelham M</p> <p>2016-04-01</p> <p>Efforts to ensure that funded research represents "value for money" have led to increasing calls for the use of analytic methods in research prioritization. A number of analytic approaches have been proposed to assist research funding decisions, the most prominent of which are value of information (VOI) and prospective payback of research (PPoR). Despite the increasing interest in the topic, there are insufficient VOI and PPoR applications on the same case study to contrast their methods and compare their outcomes. We undertook VOI and PPoR analyses to determine the value of conducting 2 proposed research programs. 
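As background for the VOI side of this comparison: the expected value of perfect information (EVPI) mentioned below is the gap between the expected net benefit of deciding after uncertainty is resolved and deciding now. A minimal Monte Carlo sketch (the net-benefit draws and function name are illustrative placeholders, not the authors' decision models):

```python
import numpy as np

def evpi(nb):
    """Expected value of perfect information from simulated net benefits.

    nb: array of shape (n_draws, n_decisions); nb[i, d] is the net benefit
    of decision d under parameter draw i.
    """
    ev_current = nb.mean(axis=0).max()   # best single decision under uncertainty
    ev_perfect = nb.max(axis=1).mean()   # best decision for each resolved draw
    return float(ev_perfect - ev_current)
```

For example, with draws `[[1, 0], [0, 2]]` the best fixed decision yields 1.0 in expectation, while deciding per draw yields 1.5, so EVPI is 0.5; EVPI is always non-negative.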
The application served as a vehicle for identifying differences and similarities between the methods, provided insight into the assumptions and practical requirements of undertaking prospective analyses for research prioritization, and highlighted areas for future research. VOI and PPoR were applied to case studies representing proposals for clinical trials in advanced non-small-cell lung cancer and prostate cancer. Decision models were built to synthesize the evidence available prior to the funding decision. VOI (expected value of perfect and sample information) and PPoR (PATHS model) analyses were undertaken using the developed models. VOI and PPoR results agreed in direction, suggesting that the proposed trials would be cost-effective investments. However, results differed in magnitude, largely due to the way each method conceptualizes the possible outcomes of further research and the implementation of research results in practice. Compared with VOI, PPoR is less complex but requires more assumptions. Although the approaches are not free from limitations, they can provide useful input for research funding decisions. © The Author(s) 2015.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA224178','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA224178"><span>Designing Intelligent Computer Aided Instruction Systems with Integrated Knowledge Representation Schemes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1990-06-01</p> <p>the form of structured objects was first pioneered by Marvin Minsky. In his seminal article "A Framework for Representing Knowledge" he introduced... Minsky felt that the existing methods of knowledge representation were too finely grained and he proposed that knowledge is more than just a...not work" in realistic, complex domains. (Minsky, 1981, pp.
95-128) According to Minsky "A frame is a data-structure for representing a stereotyped</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_10 --> <div id="page_11" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="201"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008JPhCS.135a2032D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008JPhCS.135a2032D"><span>On the use of the Reciprocity Gap Functional in inverse scattering with near-field data: An application to mammography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Delbary, Fabrice; Aramini, Riccardo; Bozza, Giovanni; Brignone, Massimo; Piana, Michele</p> <p>2008-11-01</p> <p>Microwave tomography is a non-invasive approach to the early
diagnosis of breast cancer. However, the problem of visualizing tumors from diffracted microwaves is a difficult nonlinear ill-posed inverse scattering problem. We propose a qualitative approach to the solution of such a problem, whereby the shape and location of cancerous tissues can be detected by means of a combination of the Reciprocity Gap Functional method and the Linear Sampling method. We validate this approach on synthetic near-field data produced by a finite element method for boundary integral equations, where the breast is mimicked by the axial view of two nested cylinders, the external one representing the skin and the internal one representing the fat tissue.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2898756','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2898756"><span>Conduct of a personal radiofrequency electromagnetic field measurement study: proposed study protocol</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2010-01-01</p> <p>Background The development of new wireless communication technologies that emit radio frequency electromagnetic fields (RF-EMF) is ongoing, but little is known about the RF-EMF exposure distribution in the general population. Previous attempts to measure personal exposure to RF-EMF have used different measurement protocols and analysis methods, making comparisons between exposure situations across different study populations very difficult. As a result, observed differences in exposure levels between study populations may not reflect real exposure differences but may be partly or wholly due to methodological differences. 
Methods The aim of this paper is to develop a study protocol for future personal RF-EMF exposure studies based on experience drawn from previous research. Using the current knowledge base, we propose procedures for the measurement of personal exposure to RF-EMF, data collection, data management and analysis, and methods for the selection and instruction of study participants. Results We have identified two basic types of personal RF-EMF measurement studies: population surveys and microenvironmental measurements. In the case of a population survey, the unit of observation is the individual and a randomly selected representative sample of the population is needed to obtain reliable results. For microenvironmental measurements, study participants are selected in order to represent typical behaviours in different microenvironments. These two study types require different methods and procedures. Conclusion Applying our proposed common core procedures in future personal measurement studies will allow direct comparisons of personal RF-EMF exposures in different populations and study areas. PMID:20487532</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPRS..129...41Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPRS..129...41Z"><span>A morphologically preserved multi-resolution TIN surface modeling and visualization method for virtual globes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei</p> <p>2017-07-01</p> <p>Virtual globes play an important role in representing three-dimensional models of the Earth. 
To extend the functioning of a virtual globe beyond that of a "geobrowser", the accuracy of the geospatial data during processing and representation should be of special concern for scientific analysis and evaluation. In this study, we propose a method for the processing of large-scale terrain data for virtual globe visualization and analysis. The proposed method aims to construct a morphologically preserved multi-resolution triangulated irregular network (TIN) pyramid for virtual globes to accurately represent the landscape surface and simultaneously satisfy the demands of applications at different scales. By introducing cartographic principles, the TIN model in each layer is controlled with a data quality standard to formalize its level of detail generation. A point-additive algorithm is used to iteratively construct the multi-resolution TIN pyramid. The extracted landscape features are also incorporated to constrain the TIN structure, thus preserving the basic morphological shapes of the terrain surface at different levels. During the iterative construction process, the TIN in each layer is seamlessly partitioned based on a virtual node structure, and tiled with a global quadtree structure. Finally, an adaptive tessellation approach is adopted to eliminate terrain cracks in the real-time out-of-core spherical terrain rendering. 
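The point-additive idea at the core of the pyramid construction can be illustrated with a deliberately simplified 1D analogue: starting from the endpoints, repeatedly insert the sample with the largest vertical error until each level's tolerance is met. This sketch omits the paper's triangulation, feature constraints, and quadtree tiling; the function name and tolerance scheme are assumptions.

```python
import numpy as np

def point_additive_pyramid(z, tolerances):
    """1D analogue of point-additive level-of-detail construction.

    z: terrain profile samples (1D array); tolerances: per-level maximum
    vertical error, given in any order and processed coarse to fine.
    Returns the knot indices retained at each level (coarse first).
    """
    n = len(z)
    knots = {0, n - 1}                    # endpoints are always kept
    levels = []
    for tol in sorted(tolerances, reverse=True):
        while True:
            ks = sorted(knots)
            approx = np.interp(np.arange(n), ks, z[ks])
            err = np.abs(z - approx)
            worst = int(np.argmax(err))
            if err[worst] <= tol:         # level meets its quality standard
                break
            knots.add(worst)              # insert the worst-fit point
        levels.append(sorted(knots))
    return levels
```

Because points are only ever added, coarser levels are subsets of finer ones, which is the property that lets one pyramid serve applications at different scales.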
The experiments undertaken in this study confirmed that the proposed method performs well in multi-resolution terrain representation, and produces high-quality underlying data that satisfy the demands of scientific analysis and evaluation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011MeScR..11...71B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011MeScR..11...71B"><span>Methodology for the Evaluation of the Algorithms for Text Line Segmentation Based on Extended Binary Classification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Brodic, D.</p> <p>2011-01-01</p> <p>Text line segmentation represents the key element in the optical character recognition process. Hence, testing of text line segmentation algorithms has substantial relevance. All previously proposed testing methods deal mainly with a text database as a template. They are used for testing as well as for the evaluation of the text segmentation algorithm. In this manuscript, a methodology for the evaluation of text segmentation algorithms based on extended binary classification is proposed. It is based on various multiline text samples linked with text segmentation. Their results are distributed according to binary classification. The final result is obtained by comparative analysis of the cross-linked data. 
Its suitability for different types of scripts represents its main advantage.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5112352','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5112352"><span>A Self-Organizing Incremental Spatiotemporal Associative Memory Networks Model for Problems with Hidden State</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2016-01-01</p> <p>Identifying the hidden state is important for solving problems with hidden state. We prove that any deterministic partially observable Markov decision process (POMDP) can be represented by a minimal, looping hidden state transition model and propose a heuristic state transition model constructing algorithm. A new spatiotemporal associative memory network (STAMN) is proposed to realize the minimal, looping hidden state transition model. STAMN utilizes neuroactivity decay to realize short-term memory, connection weights between different nodes to represent long-term memory, and presynaptic potentials and a synchronized activation mechanism to complete identification and recall simultaneously. Finally, we give empirical illustrations of the STAMN and compare the performance of the STAMN model with that of other methods. 
PMID:27891146</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015InvPr..31b5004D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015InvPr..31b5004D"><span>Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun</p> <p>2015-02-01</p> <p>The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary fast. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low rank matrix factorization of unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme for which each subproblem is convex and involves the efficient alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). 
Finally our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4983804','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4983804"><span>Automatic liver tumor segmentation on computed tomography for patient treatment planning and monitoring</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Moghbel, Mehrdad; Mashohor, Syamsiah; Mahmud, Rozi; Saripan, M. Iqbal Bin</p> <p>2016-01-01</p> <p>Segmentation of liver tumors from Computed Tomography (CT) and tumor burden analysis play an important role in the choice of therapeutic strategies for liver diseases and treatment monitoring. In this paper, a new segmentation method for liver tumors from contrast-enhanced CT imaging is proposed. As manual segmentation of tumors for liver treatment planning is both labor intensive and time-consuming, a highly accurate automatic tumor segmentation is desired. The proposed framework is fully automatic requiring no user interaction. The proposed segmentation evaluated on real-world clinical data from patients is based on a hybrid method integrating cuckoo optimization and fuzzy c-means algorithm with random walkers algorithm. The accuracy of the proposed method was validated using a clinical liver dataset containing one of the highest numbers of tumors utilized for liver tumor segmentation containing 127 tumors in total with further validation of the results by a consultant radiologist. 
The proposed method was able to achieve one of the highest accuracies reported in the literature for liver tumor segmentation compared to other segmentation methods with a mean overlap error of 22.78 % and dice similarity coefficient of 0.75 in 3Dircadb dataset and a mean overlap error of 15.61 % and dice similarity coefficient of 0.81 in MIDAS dataset. The proposed method was able to outperform most other tumor segmentation methods reported in the literature while representing an overlap error improvement of 6 % compared to one of the best performing automatic methods in the literature. The proposed framework was able to provide consistently accurate results considering the number of tumors and the variations in tumor contrast enhancements and tumor appearances while the tumor burden was estimated with a mean error of 0.84 % in 3Dircadb dataset. PMID:27540353</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1242675','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1242675"><span>Preliminary test results in support of integrated EPP and SMT design methods development</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Wang, Yanli; Jetter, Robert I.; Sham, T. -L.</p> <p>2016-02-09</p> <p>The proposed integrated Elastic Perfectly-Plastic (EPP) and Simplified Model Test (SMT) methodology consists of incorporating a SMT data-based approach for creep-fatigue damage evaluation into the EPP methodology to avoid using the creep-fatigue interaction diagram (the D diagram) and to minimize over-conservatism while properly accounting for localized defects and stress risers. To support the implementation of the proposed code rules and to verify their applicability, a series of thermomechanical tests have been initiated. 
One test concept, the Simplified Model Test (SMT), takes into account the stress and strain redistribution in real structures by including representative follow-up characteristics in the test specimen. The second test concept is the two-bar thermal ratcheting test, with cyclic loading at high temperatures using specimens representing key features of potential component designs. This report summarizes the previous SMT results on Alloy 617, SS316H and SS304H and presents recent developments in the SMT approach on Alloy 617. These SMT specimen data are also representative of component loading conditions and have been used as part of the verification of the proposed integrated EPP and SMT design methods development. The previous two-bar thermal ratcheting test results on Alloy 617 and SS316H are also summarized, and the new results from two-bar thermal ratcheting tests on SS316H at a lower temperature range are reported.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29709314','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29709314"><span>Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jiang, Zhixing; Zhang, David; Lu, Guangming</p> <p>2018-04-19</p> <p>Radial artery pulse diagnosis has been playing an important role in traditional Chinese medicine (TCM). For its non-invasiveness and convenience, pulse diagnosis has great significance in the disease analysis of modern medicine. Practitioners sense the pulse waveforms at patients' wrists to make diagnoses based on their non-objective personal experience. 
With research on pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep up with the development of modern medicine. In this paper, we propose a new method to extract features from the pulse waveform based on the discrete Fourier series (DFS). It regards the waveform as a signal that consists of a series of sub-components represented by sine and cosine (SC) signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample using a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with the fitting method using a Gaussian mixture function, the fitting errors of the proposed method are smaller, which indicates that our method can represent the original signal better. The classification performance of the proposed feature is superior to that of other features extracted from the waveform, such as the auto-regression model and the Gaussian mixture model. The coefficients of the optimized DFS function, which is used to fit the arterial pressure waveforms, achieve better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states. Copyright © 2018 Elsevier B.V. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4124712','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4124712"><span>Malware Analysis Using Visualized Image Matrices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Im, Eul Gyu</p> <p>2014-01-01</p> <p>This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively. 
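The abstract above describes two pieces: converting an opcode sequence into an RGB image matrix and measuring similarity between such matrices. The hashing layout below is an assumption made for illustration (the paper does not publish this exact mapping), and the similarity is a simple histogram intersection over normalized matrices:

```python
import hashlib
import numpy as np

def opcode_image_matrix(opcodes, size=8):
    """Illustrative sketch: map each opcode 3-gram to an RGB pixel
    contribution via hashing, accumulating into a size x size matrix.
    """
    img = np.zeros((size, size, 3))
    for i in range(len(opcodes) - 2):
        h = hashlib.md5(",".join(opcodes[i:i + 3]).encode()).digest()
        x, y = h[0] % size, h[1] % size            # pixel position from 3-gram
        img[x, y] += (h[2] + 1, h[3] + 1, h[4] + 1)  # RGB contribution (+1 keeps it non-zero)
    s = img.sum()
    return img / s if s else img                   # normalize to unit mass

def image_similarity(a, b):
    """Histogram-intersection similarity between normalized image matrices."""
    return float(np.minimum(a, b).sum())
```

With this normalization, a matrix's similarity to itself is 1 and similarity between different samples lies in [0, 1], so family classification can threshold or rank these scores.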
PMID:25133202</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009SPIE.7241E..0AK','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009SPIE.7241E..0AK"><span>Preferred color correction for digital LCD TVs</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kim, Kyoung Tae; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho</p> <p>2009-01-01</p> <p>Instead of colorimetric color reproduction, preferred color correction is applied for digital TVs to improve subjective image quality. The first step of preferred color correction is to survey the preferred color coordinates of memory colors. This can be achieved by off-line human visual tests. The next step is to extract pixels of memory colors representing skin, grass and sky. For the detected pixels, colors are shifted towards the desired coordinates identified in advance. This correction process may result in undesirable contours on the boundaries between the corrected and uncorrected areas. For digital TV applications, the process of extraction and correction should be applied in every frame of the moving images. This paper presents a preferred color correction method in LCH color space. Values of chroma and hue are corrected independently. Undesirable contours on the boundaries of correction are minimized. The proposed method changes the coordinates of memory color pixels towards the target color coordinates. The amount of correction is determined based on the averaged coordinate of the extracted pixels. The proposed method maintains the relative color difference within memory color areas. Performance of the proposed method is evaluated using the paired comparison. 
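The correction rule described in this abstract can be sketched as a uniform shift in LCH space: the offset between the memory-color area's averaged coordinate and the preferred target is applied to every detected pixel, which preserves the relative color differences within the area. The uniform-shift rule, mask input, and function name below are assumptions consistent with the abstract, not the published algorithm:

```python
import numpy as np

def shift_memory_colors(lch, mask, target, strength=1.0):
    """Shift detected memory-color pixels (e.g. skin, grass, sky) toward a
    preferred target coordinate in LCH space.

    lch: (H, W, 3) pixel coordinates; mask: (H, W) boolean map of detected
    memory-color pixels; target: preferred (L, C, H) coordinate.
    """
    out = lch.astype(float).copy()
    mean = out[mask].mean(axis=0)                 # averaged area coordinate
    out[mask] += strength * (np.asarray(target, dtype=float) - mean)
    return out
```

Because every masked pixel receives the same offset, differences between pixels inside the area are unchanged; in practice the `strength` would be feathered near the mask boundary to avoid the contour artifacts the abstract mentions.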
Results of experiments indicate that the proposed method can reproduce perceptually pleasing images to viewers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29469889','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29469889"><span>Optimized star sensors laboratory calibration method using a regularization neural network.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen</p> <p>2018-02-10</p> <p>High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer structure neural network is designed to represent the mapping of the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the net structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. 
The proposed method can satisfy the precision requirement for large FOV star sensors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26040489','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26040489"><span>UNCLES: method for the identification of genes differentially consistently co-expressed in a specific subset of datasets.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Abu-Jamous, Basel; Fa, Rui; Roberts, David J; Nandi, Asoke K</p> <p>2015-06-04</p> <p>Collective analysis of the increasingly emerging gene expression datasets is required. The recently proposed binarisation of consensus partition matrices (Bi-CoPaM) method can combine clustering results from multiple datasets to identify the subsets of genes which are consistently co-expressed in all of the provided datasets in a tuneable manner. However, results validation and parameter setting are issues that complicate the design of such methods. Moreover, although it is a common practice to test methods by application to synthetic datasets, the mathematical models used to synthesise such datasets are usually based on approximations which may not always be sufficiently representative of real datasets. Here, we propose an unsupervised method for the unification of clustering results from multiple datasets using external specifications (UNCLES). This method has the ability to identify the subsets of genes consistently co-expressed in a subset of datasets while being poorly co-expressed in another subset of datasets, and to identify the subsets of genes consistently co-expressed in all given datasets. We also propose the M-N scatter plots validation technique and adopt it to set the parameters of UNCLES, such as the number of clusters, automatically. 
Additionally, we propose an approach for the synthesis of gene expression datasets using real data profiles in a way which combines the ground-truth-knowledge of synthetic data and the realistic expression values of real data, and therefore overcomes the problem of faithfulness of synthetic expression data modelling. By application to those datasets, we validate UNCLES while comparing it with other conventional clustering methods, and of particular relevance, biclustering methods. We further validate UNCLES by application to a set of 14 real genome-wide yeast datasets as it produces focused clusters that conform well to known biological facts. Furthermore, in-silico-based hypotheses regarding the function of a few previously unknown genes in those focused clusters are drawn. The UNCLES method, the M-N scatter plots technique, and the expression data synthesis approach will have wide application for the comprehensive analysis of genomic and other sources of multiple complex biological datasets. Moreover, the derived in-silico-based biological hypotheses represent subjects for future functional studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25342111','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25342111"><span>Listeria booriae sp. nov. and Listeria newyorkensis sp. nov., from food processing environments in the USA.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Weller, Daniel; Andrus, Alexis; Wiedmann, Martin; den Bakker, Henk C</p> <p>2015-01-01</p> <p>Sampling of seafood and dairy processing facilities in the north-eastern USA produced 18 isolates of Listeria spp. that could not be identified at the species-level using traditional phenotypic and genotypic identification methods. 
Results of phenotypic and genotypic analyses suggested that the isolates represent two novel species with an average nucleotide blast identity of less than 92% with previously described species of the genus Listeria. Phylogenetic analyses based on whole genome sequences, 16S rRNA gene and sigB gene sequences confirmed that the isolates represented by type strain FSL M6-0635(T) and FSL A5-0209 cluster phylogenetically with Listeria cornellensis. Phylogenetic analyses also showed that the isolates represented by type strain FSL A5-0281(T) cluster phylogenetically with Listeria riparia. The name Listeria booriae sp. nov. is proposed for the species represented by type strain FSL A5-0281(T) ( =DSM 28860(T) =LMG 28311(T)), and the name Listeria newyorkensis sp. nov. is proposed for the species represented by type strain FSL M6-0635(T) ( =DSM 28861(T) =LMG 28310(T)). Phenotypic and genotypic analyses suggest that neither species is pathogenic. © 2015 IUMS.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/FR-2010-09-20/pdf/2010-23417.pdf','FEDREG'); return false;" href="https://www.gpo.gov/fdsys/pkg/FR-2010-09-20/pdf/2010-23417.pdf"><span>75 FR 57277 - Agency Information Collection Activities: Proposed Collection: Comment Request</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.gpo.gov/fdsys/browse/collection.action?collectionCode=FR">Federal Register 2010, 2011, 2012, 2013, 2014</a></p> <p></p> <p>2010-09-20</p> <p>... patients served, pharmaceuticals prescribed, pricing, and other sources of support to provide AIDS... formulary, eligibility criteria for enrollment, and cost-saving strategies including coordinating with Medicaid). 
The quarterly report represents the best method for HRSA to determine how ADAP grants are being...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20389670','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20389670"><span>A Parallel Product-Convolution approach for representing the depth varying Point Spread Functions in 3D widefield microscopy based on principal component analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Arigovindan, Muthuvel; Shaevitz, Joshua; McGowan, John; Sedat, John W; Agard, David A</p> <p>2010-03-29</p> <p>We address the problem of computational representation of image formation in 3D widefield fluorescence microscopy with depth varying spherical aberrations. We first represent 3D depth-dependent point spread functions (PSFs) as a weighted sum of basis functions that are obtained by principal component analysis (PCA) of experimental data. This representation is then used to derive an approximating structure that compactly expresses the depth variant response as a sum of few depth invariant convolutions pre-multiplied by a set of 1D depth functions, where the convolving functions are the PCA-derived basis functions. The model offers an efficient and convenient trade-off between complexity and accuracy. For a given number of approximating PSFs, the proposed method results in a much better accuracy than the strata based approximation scheme that is currently used in the literature. 
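The weighted-sum PSF representation described in this entry (depth-invariant basis functions from PCA, multiplied by 1D depth weights) can be illustrated with a toy example. This is a minimal sketch under assumed 1D Gaussian PSFs whose width grows with depth, not the authors' microscopy data or code:

```python
import numpy as np

def pca_psf_basis(psfs, k):
    """PCA of a stack of depth-sampled PSFs (rows = depths).

    Returns the mean PSF, k depth-invariant basis PSFs, and the
    per-depth weights c_k(z) of each basis function."""
    mean = psfs.mean(axis=0)
    centered = psfs - mean
    # Rows of Vt are the principal components (basis PSFs)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:k]
    weights = centered @ basis.T   # 1D depth functions, one column per basis PSF
    return mean, basis, weights

def reconstruct(mean, basis, weights):
    """Approximate the depth-varying PSF stack from the PCA model."""
    return mean + weights @ basis

# Toy depth-varying PSFs: 1D Gaussians whose width increases with depth
x = np.linspace(-1.0, 1.0, 64)
depths = np.linspace(0.05, 0.3, 10)
psfs = np.stack([np.exp(-x**2 / (2.0 * s**2)) for s in depths])

mean, basis, w = pca_psf_basis(psfs, k=3)
approx = reconstruct(mean, basis, w)
err = np.linalg.norm(psfs - approx) / np.linalg.norm(psfs)
```

With this model, a depth-variant blur can then be approximated as a sum of a few depth-invariant convolutions with the basis PSFs, each premultiplied by its 1D depth weight, mirroring the structure described in the abstract.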
In addition to yielding better accuracy, the proposed methods automatically eliminate the noise in the measured PSFs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008JPRS...63..593D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008JPRS...63..593D"><span>Efficiently computing and deriving topological relation matrices between complex regions with broad boundaries</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Du, Shihong; Guo, Luo; Wang, Qiao; Qin, Qimin</p> <p></p> <p>The extended 9-intersection matrix is used to formalize topological relations between uncertain regions, but it is designed to satisfy requirements at a conceptual level and treats complex regions with broad boundaries (CBBRs) as a whole, without considering their hierarchical structures. In contrast to simple regions with broad boundaries, CBBRs have complex hierarchical structures. Therefore, it is necessary to take this hierarchical structure into account and to represent the topological relations between all regions in CBBRs as a relation matrix, rather than using the extended 9-intersection matrix to determine topological relations. In this study, a tree model is first used to represent the intrinsic configuration of CBBRs hierarchically. Then, reasoning tables are presented for deriving topological relations between child, parent and sibling regions from the relations between two given regions in CBBRs. Finally, based on this reasoning, efficient methods are proposed to compute and derive the topological relation matrix.
The proposed methods can be incorporated into spatial databases to facilitate geometric-oriented applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018E3SWC..3302077E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018E3SWC..3302077E"><span>Numerical verification of composite rods theory on multi-story buildings analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>El-Din Mansour, Alaa; Filatov, Vladimir; Gandzhuntsev, Michael; Ryasny, Nikita</p> <p>2018-03-01</p> <p>The article proposes a verification of the composite rods theory for the structural analysis of skeletons of high-rise buildings. A test design model is formed in which the horizontal elements are represented by a multilayer cantilever beam working in transverse bending, with slabs connected by moment-non-transferring connections, while multilayer columns represent the vertical elements. These connections generate a shearing action that can be approximated by a shear-force function, which significantly reduces the overall degree of static indeterminacy of the structural model. The behavior of the multilayer rods is described by a system of differential equations, which is solved numerically by the method of successive approximations. The proposed methodology is intended for preliminary calculations aimed at determining the rigidity characteristics of the structure.
It can also serve for a qualitative assessment of results obtained by other methods when calculations are performed for verification purposes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JSV...333.2776N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JSV...333.2776N"><span>A novel sensitivity-based method for damage detection of structures under unknown periodic excitations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Naseralavi, S. S.; Salajegheh, E.; Fadaee, M. J.; Salajegheh, J.</p> <p>2014-06-01</p> <p>This paper presents a technique for damage detection in structures under unknown periodic excitations using the transient displacement response. The method is capable of identifying the damage parameters without finding the input excitations. We first define the concept of displacement space as a linear space in which each point represents the displacements of a structure under an excitation and initial condition. Roughly speaking, the method is based on the fact that structural displacements under free and forced vibrations are associated with two parallel subspaces in the displacement space. Considering this novel geometrical viewpoint, an equation called kernel parallelization equation (KPE) is derived for damage detection under unknown periodic excitations and a sensitivity-based algorithm for solving KPE is proposed accordingly.
The method is evaluated via three case studies under periodic excitations, which confirm the efficiency of the proposed method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5271418','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5271418"><span>Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.</p> <p>2017-01-01</p> <p>This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. 
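The "conventional MR fingerprinting reconstruction" to which the ML framework is compared is, in essence, a dictionary match: each voxel's measured signal evolution is matched to the precomputed dictionary atom with the largest normalized correlation. The following minimal sketch uses a hypothetical mono-exponential dictionary for illustration; real MRF dictionaries come from Bloch simulations over (T1, T2) grids:

```python
import numpy as np

def dictionary_match(signal, dictionary, params):
    """Template matching: return the parameter of the dictionary atom
    with the largest normalized inner product with the signal."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    idx = np.argmax(np.abs(d @ s))
    return params[idx]

# Hypothetical "fingerprints": exponential decays parameterized by a rate
t = np.linspace(0.0, 1.0, 50)
rates = np.linspace(0.5, 5.0, 100)
dictionary = np.exp(-np.outer(rates, t))   # one atom per candidate rate

true_rate = rates[42]
rng = np.random.default_rng(0)
noisy = np.exp(-true_rate * t) + 0.01 * rng.normal(size=t.size)
est = dictionary_match(noisy, dictionary, rates)
```

The abstract's analytical result can be read against this sketch: one iteration of the proposed ML algorithm, initialized with a gridding reconstruction, reduces to exactly this kind of match.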
PMID:26915119</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_9");'>9</a></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li class="active"><span>11</span></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_11 --> <div id="page_12" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li><a href="#" onclick='return showDiv("page_11");'>11</a></li> <li class="active"><span>12</span></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="221"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26915119','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26915119"><span>Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L</p> <p>2016-08-01</p> <p>This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. 
Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JSV...425..170F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JSV...425..170F"><span>Singular boundary method for wave propagation analysis in periodic structures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fu, Zhuojia; Chen, Wen; Wen, Pihua; Zhang, Chuanzeng</p> <p>2018-07-01</p> <p>A strong-form boundary collocation method, the singular boundary method (SBM), is developed in this paper for the wave propagation analysis at low and moderate wavenumbers in periodic structures. 
The SBM has several advantages: it is mathematically simple, easy to program, and meshless, and it applies the concept of origin intensity factors to eliminate the singularity of the fundamental solutions and to avoid the numerical evaluation of the singular integrals required in the boundary element method. Due to the periodic behavior of the structures, the SBM coefficient matrix can be represented as a block Toeplitz matrix. By employing three different fast Toeplitz-matrix solvers, the computational time and storage requirements of the proposed SBM analysis are significantly reduced. To demonstrate the effectiveness of the proposed SBM formulation for wave propagation analysis in periodic structures, several benchmark examples are presented and discussed. The SBM results are compared with analytical solutions, reference results and the COMSOL software.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4934327','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4934327"><span>Multi-Target State Extraction for the SMC-PHD Filter</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Si, Weijian; Wang, Liwei; Qu, Zhiyu</p> <p>2016-01-01</p> <p>The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large number of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper.
By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, state estimation for the detected and undetected targets is performed separately: the former estimates are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via a clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18566488','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18566488"><span>Script-independent text line segmentation in freestyle handwritten documents.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan; Li, Yi</p> <p>2008-08-01</p> <p>Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate.
Unlike connected component based methods ( [1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MSSP..101..435L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MSSP..101..435L"><span>Train axle bearing fault detection using a feature selection scheme based multi-scale morphological filter</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin</p> <p>2018-02-01</p> <p>This paper presents a novel signal processing scheme, feature selection based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings with different conditions and the features which can reflect fault characteristics more effectively and representatively are selected using the max-relevance and min-redundancy principle. Then, a filtering scale selection approach for MMF based on feature selection and grey relational analysis is proposed. The feature selection based MMF method is tested on diagnosis of artificially created damages of rolling bearings of railway trains. Experimental results show that the proposed method has a superior performance in extracting fault features of defective train axle bearings. 
In addition, comparisons are performed with the kurtosis criterion based MMF and the spectral kurtosis criterion based MMF. The proposed feature selection based MMF method outperforms these two methods in detection of train axle bearing faults.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014IJGS...43...32B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014IJGS...43...32B"><span>Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi</p> <p>2014-01-01</p> <p>This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived based on a model of group hunting of animals such as lions, wolves, and dolphins when looking for a prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle. Thus, the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated through modeling and control problems, and the results have been compared with other optimization techniques. 
The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique as it can extract the accurate TSK fuzzy model with an appropriate number of rules.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhyA..491..406C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhyA..491..406C"><span>A numerical solution for a variable-order reaction-diffusion model by using fractional derivatives with non-local and non-singular kernel</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Torres, L.; Escobar-Jiménez, R. F.</p> <p>2018-02-01</p> <p>A reaction-diffusion system can be represented by the Gray-Scott model. The reaction-diffusion dynamic is described by a pair of time and space dependent Partial Differential Equations (PDEs). In this paper, a generalization of the Gray-Scott model by using variable-order fractional differential equations is proposed. The variable-orders were set as smooth functions bounded in (0 , 1 ] and, specifically, the Liouville-Caputo and the Atangana-Baleanu-Caputo fractional derivatives were used to express the time differentiation. In order to find a numerical solution of the proposed model, the finite difference method together with the Adams method were applied. 
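For reference, the classical integer-order Gray-Scott dynamics that the variable-order fractional model generalizes can be stepped with simple finite differences. This is a minimal illustrative sketch with commonly used parameter values and periodic boundaries, not the paper's fractional Liouville-Caputo or Atangana-Baleanu-Caputo scheme:

```python
import numpy as np

def laplacian(Z):
    """5-point Laplacian with periodic boundary conditions."""
    return (np.roll(Z, 1, axis=0) + np.roll(Z, -1, axis=0) +
            np.roll(Z, 1, axis=1) + np.roll(Z, -1, axis=1) - 4.0 * Z)

def gray_scott_step(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott reaction-diffusion system."""
    UVV = U * V * V
    U = U + dt * (Du * laplacian(U) - UVV + F * (1.0 - U))
    V = V + dt * (Dv * laplacian(V) + UVV - (F + k) * V)
    return U, V

n = 64
U = np.ones((n, n))
V = np.zeros((n, n))
U[28:36, 28:36] = 0.50   # seed a local perturbation
V[28:36, 28:36] = 0.25
for _ in range(200):
    U, V = gray_scott_step(U, V)
```

Replacing the constant time-derivative order in the update above with a depth- or time-dependent fractional order is what the variable-order formulation does; the numerical machinery (finite differences in space, a multistep method in time) is analogous.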
The simulation results showed the chaotic behavior of the proposed model when different variable-orders are applied.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004TJSAI..19..540K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004TJSAI..19..540K"><span>An Inter-Personal Information Sharing Model Based on Personalized Recommendations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kamei, Koji; Funakoshi, Kaname; Akahani, Jun-Ichi; Satoh, Tetsuji</p> <p></p> <p>In this paper, we propose an inter-personal information sharing model among individuals based on personalized recommendations. In the proposed model, we define an information resource as shared between people when both of them consider it important --- not merely when they both possess it. In other words, the model defines the importance of information resources based on personalized recommendations from identifiable acquaintances. The proposed method is based on a collaborative filtering system that focuses on evaluations from identifiable acquaintances. It utilizes both user evaluations for documents and their contents. Specifically, each user profile is represented as a matrix of credibility assigned to the other users' evaluations in each domain of interest. We extended the content-based collaborative filtering method to distinguish other users to whom the documents should be recommended. We also applied a concept-based vector space model to represent the domain of interests instead of the previous method which represented them by a term-based vector space model. We introduce a personalized concept-base compiled from each user's information repository to improve the information retrieval in the user's environment.
Furthermore, the concept-spaces change from user to user since they reflect the personalities of the users. Because of different concept-spaces, the similarity between a document and a user's interest varies for each user. As a result, a user receives recommendations from other users who have different view points, achieving inter-personal information sharing based on personalized recommendations. This paper also describes an experimental simulation of our information sharing model. In our laboratory, five participants accumulated a personal repository of e-mails and web pages from which they built their own concept-base. Then we estimated the user profiles according to personalized concept-bases and sets of documents which others evaluated. We simulated inter-personal recommendation based on the user profiles and evaluated the performance of the recommendation method by comparing the recommended documents to the result of the content-based collaborative filtering.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29743379','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29743379"><span>A fast simulation method for radiation maps using interpolation in a virtual environment.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Meng-Kun; Liu, Yong-Kuo; Peng, Min-Jun; Xie, Chun-Li; Yang, Li-Qun</p> <p>2018-05-10</p> <p>In nuclear decommissioning, virtual simulation technology is a useful tool to achieve an effective work process by using virtual environments to represent the physical and logical scheme of a real decommissioning project. This technology is cost-saving and time-saving, with the capacity to develop various decommissioning scenarios and reduce the risk of retrofitting. 
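One simple way to build a radiation map for such a virtual environment is to interpolate sparse dose-rate measurements onto a grid. The following minimal sketch uses inverse-distance weighting with hypothetical measurement points and dose values; it is an illustrative stand-in, not the authors' UNITY/MATLAB implementation:

```python
import numpy as np

def idw_map(points, doses, grid_x, grid_y, power=2.0, eps=1e-12):
    """Inverse-distance-weighted dose-rate map from sparse measurements.

    points: (m, 2) measurement coordinates; doses: (m,) dose rates.
    Returns the interpolated map on the grid_x x grid_y lattice."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # Squared distances from every grid node to every measurement point
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    w = 1.0 / (d2 ** (power / 2.0) + eps)
    vals = (w * doses).sum(axis=1) / w.sum(axis=1)
    return vals.reshape(gx.shape)

# Hypothetical measurements around a single source at (0.5, 0.5)
pts = np.array([[0.2, 0.2], [0.8, 0.2], [0.5, 0.5], [0.2, 0.8], [0.8, 0.8]])
dose = np.array([1.0, 1.0, 9.0, 1.0, 1.0])   # highest reading at the source
grid = np.linspace(0.0, 1.0, 21)
rad_map = idw_map(pts, dose, grid, grid)
```

The interpolated map honors the measured values at the sample points and stays within the measured range, which is what makes interpolation cheap relative to point-kernel or Monte Carlo transport at every grid node.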
In this paper, we propose a fast simulation method for a known radiation source; the resulting radiation map is used in the virtual simulation as the basis for assessing the exposure of a virtual human. The method has a unique advantage over point-kernel and Monte Carlo methods because it generates the radiation map by interpolation in a virtual environment. The simulation of the radiation map, including both the calculation and the visualisation, was realised using UNITY and MATLAB. The feasibility of the proposed method was tested on a hypothetical case, and the results obtained are discussed in this paper.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009AIPC.1174...83U','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009AIPC.1174...83U"><span>Competitive Facility Location with Random Demands</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke</p> <p>2009-10-01</p> <p>This paper proposes a new location problem of competitive facilities, e.g. shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated as a stochastic programming problem, and for finding its solution, three deterministic programming problems are considered: the expectation maximizing problem, the probability maximizing problem, and the satisfying level maximizing problem. After showing that one of their optimal solutions can be found by solving 0-1 programming problems, their solution method is proposed by improving the tabu search algorithm with strategic vibration.
The efficiency of the solution method is shown by applying it to numerical examples of facility location problems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018EntIS..12..196K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018EntIS..12..196K"><span>A proposed configurable approach for recommendation systems via data mining techniques</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Khedr, Ayman E.; Idrees, Amira M.; Hegazy, Abd El-Fatah; El-Shewy, Samir</p> <p>2018-02-01</p> <p>This study presents a configurable approach for recommendations which determines the suitable recommendation method for each field based on the characteristics of its data. The method includes determining the suitable technique for selecting a representative sample of the provided data, then selecting the suitable feature-weighting measure to assign each feature a correct weight based on its effect on the recommendations, and finally selecting the suitable algorithm to provide the required recommendations. The proposed configurable approach can be applied to different domains.
The experiments revealed that the approach is able to provide recommendations with an error rate of only 0.89%.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4883387','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4883387"><span>Stability-Aware Geographic Routing in Energy Harvesting Wireless Sensor Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Hieu, Tran Dinh; Dung, Le The; Kim, Byung-Seo</p> <p>2016-01-01</p> <p>A new generation of wireless sensor networks that harvest energy from environmental sources such as solar, vibration, and thermoelectric to power sensor nodes is emerging to solve the problem of energy limitation. Based on the photo-voltaic model, this research proposes a stability-aware geographic routing for reliable data transmissions in energy-harvesting wireless sensor networks (EH-WSNs) to provide a reliable route selection method and potentially achieve an unlimited network lifetime. Specifically, the influence of link quality, represented by the estimated packet reception rate, on network performance is investigated. Simulation results show that the proposed method outperforms an energy-harvesting-aware method in terms of energy consumption, the average number of hops, and the packet delivery ratio.
PMID:27187414</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006EJASP2007..274N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006EJASP2007..274N"><span>gpICA: A Novel Nonlinear ICA Algorithm Using Geometric Linearization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nguyen, Thang Viet; Patra, Jagdish Chandra; Emmanuel, Sabu</p> <p>2006-12-01</p> <p>A new geometric approach for nonlinear independent component analysis (ICA) is presented in this paper. Nonlinear environment is modeled by the popular post nonlinear (PNL) scheme. To eliminate the nonlinearity in the observed signals, a novel linearizing method named as geometric post nonlinear ICA (gpICA) is introduced. Thereafter, a basic linear ICA is applied on these linearized signals to estimate the unknown sources. The proposed method is motivated by the fact that in a multidimensional space, a nonlinear mixture is represented by a nonlinear surface while a linear mixture is represented by a plane, a special form of the surface. Therefore, by geometrically transforming the surface representing a nonlinear mixture into a plane, the mixture can be linearized. 
Through simulations on different data sets, the superior performance of the gpICA algorithm has been shown with respect to other algorithms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JIEID..99..165B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JIEID..99..165B"><span>Underground Mining Method Selection Using WPM and PROMETHEE</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Balusa, Bhanu Chander; Singam, Jayanthu</p> <p>2018-04-01</p> <p>The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry. This is achieved by using two multi-attribute decision making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). In this paper, the analytic hierarchy process is used to calculate the weights of the attributes (i.e. the parameters used in this paper). Mining method selection depends on physical, mechanical, economical and technical parameters. WPM and PROMETHEE have the ability to consider the relationships between the parameters and the mining methods. The proposed techniques give higher accuracy and faster computation capability when compared with other decision making techniques. The proposed techniques are presented to determine the effective mining method for a bauxite mine. The results of these techniques are compared with the methods used in earlier research works.
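The weighted product method used in this entry scores each alternative as a product of normalized criterion values raised to the criterion weights, with cost criteria given negative exponents. A minimal sketch with a hypothetical decision matrix (the method names, criteria, and numbers below are illustrative, not the paper's data):

```python
import numpy as np

def wpm_scores(matrix, weights, benefit):
    """Weighted Product Method: score_i = prod_j x_ij ** (+/- w_j).

    matrix: (alternatives x criteria) decision matrix.
    weights: criterion weights (e.g. from AHP), summing to 1.
    benefit: boolean per criterion; True = maximize, False = cost."""
    m = matrix / matrix.max(axis=0)          # simple ratio normalization
    exponents = np.where(benefit, weights, -weights)
    return np.prod(m ** exponents, axis=1)

# Hypothetical: 3 candidate mining methods x 3 criteria
# (ore recovery % [benefit], dilution % [cost], operating cost [cost])
X = np.array([[85.0, 10.0, 40.0],
              [70.0,  5.0, 25.0],
              [90.0, 20.0, 60.0]])
w = np.array([0.5, 0.2, 0.3])                # AHP-style weights
scores = wpm_scores(X, w, np.array([True, False, False]))
best = int(np.argmax(scores))                # index of the preferred method
```

Here the second alternative wins because its low dilution and cost outweigh its lower recovery under these assumed weights; PROMETHEE would instead rank alternatives by pairwise outranking flows.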
The results show that the conventional cut and fill method is the most suitable mining method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23681992','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23681992"><span>Discovering motion primitives for unsupervised grouping and one-shot learning of human actions, gestures, and expressions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Yang; Saleemi, Imran; Shah, Mubarak</p> <p>2013-07-01</p> <p>This paper proposes a novel representation of articulated human actions, gestures, and facial expressions. The main goals of the proposed approach are: 1) to enable recognition using very few examples, i.e., one- or k-shot learning, and 2) meaningful organization of unlabeled datasets by unsupervised clustering. The proposed representation is obtained by automatically discovering high-level subactions or motion primitives through hierarchical clustering of observed optical flow in four-dimensional, spatial, and motion flow space. The proposed method, which is completely unsupervised, provides a meaningful representation conducive to visual interpretation and textual labeling, in contrast to state-of-the-art representations like bag of video words. Each primitive action depicts an atomic subaction, such as directional motion of a limb or the torso, and is represented by a mixture of four-dimensional Gaussian distributions. For one-shot and k-shot learning, the primitives discovered in a test video are labeled using KL divergence; the resulting sequence can then be represented as a string and matched against similar strings of training videos. The same sequence can also be collapsed into a histogram of primitives or be used to learn a Hidden Markov model to represent classes.
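The KL-divergence labeling step can be sketched as follows; each primitive is summarized here by a single 4-D Gaussian rather than the mixture used in the paper, and the primitives themselves are toy stand-ins:

```python
import numpy as np

# Hedged sketch of labeling an observed flow distribution with the nearest
# learned primitive, using the closed-form KL divergence between Gaussians.

def kl_gauss(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) in closed form."""
    k = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def label(mu, cov, primitives):
    """Index of the primitive (mu_i, cov_i) minimizing KL to the observation."""
    return min(range(len(primitives)),
               key=lambda i: kl_gauss(mu, cov, *primitives[i]))

# two toy 4-D primitives: roughly "move right" vs "move up" (illustrative only)
I = np.eye(4)
prims = [(np.array([0., 0., 1., 0.]), I), (np.array([0., 0., 0., 1.]), I)]
obs_mu = np.array([0., 0., 0.9, 0.1])
nearest = label(obs_mu, I, prims)
```

The sequence of such labels over a video can then be treated as a string for matching, as the abstract describes.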
We have performed extensive experiments on recognition by one and k-shot learning as well as unsupervised action clustering on six human actions and gesture datasets, a composite dataset, and a database of facial expressions. These experiments confirm the validity and discriminative nature of the proposed representation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27365093','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27365093"><span>Medication dispensing errors in Palestinian community pharmacy practice: a formal consensus using the Delphi technique.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shawahna, Ramzi; Haddad, Aseel; Khawaja, Baraa; Raie, Rand; Zaneen, Sireen; Edais, Tasneem</p> <p>2016-10-01</p> <p>Background Medication dispensing errors (MDEs) are frequent in community pharmacy practice. A definition of MDEs and scenarios representing MDE situations in Palestinian community pharmacy practice were not previously approached using formal consensus techniques. Objective This study was conducted to achieve consensus on a definition of MDEs and a wide range of scenarios that should or should not be considered as MDEs in Palestinian community pharmacy practice by a panel of community pharmacists. Setting Community pharmacy practice in Palestine. Method This was a descriptive study using the Delphi technique. A panel of fifty community pharmacists was recruited from different geographical locations of the West Bank of Palestine. A three round Delphi technique was followed to achieve consensus on a proposed definition of MDEs and 83 different scenarios representing potential MDEs using a nine-point scale. 
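One common way to turn nine-point panel ratings into include/exclude/equivocal decisions is a RAND/UCLA-style median rule; this is a hedged stand-in for the paper's actual consensus criteria, with made-up scenario names and ratings:

```python
import statistics

# Hedged sketch of a nine-point-scale consensus rule (median >= 7 -> include,
# median <= 3 -> exclude, otherwise equivocal). The paper's exact Delphi
# criteria may differ; the scenarios and ratings below are hypothetical.

def classify(ratings):
    med = statistics.median(ratings)
    if med >= 7:
        return "include"
    if med <= 3:
        return "exclude"
    return "equivocal"

panel = {
    "dispensed wrong strength": [9, 8, 9, 7, 8],
    "generic substitution with consent": [2, 3, 2, 1, 3],
    "label printed in one language only": [5, 6, 4, 5, 7],
}
decisions = {scenario: classify(r) for scenario, r in panel.items()}
```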
Main outcome measure Agreement or disagreement of a panel of community pharmacists on a proposed definition of MDEs and a series of scenarios representing potential MDEs. Results In the first Delphi round, views of key contact community pharmacists on MDEs were explored and situations representing potential MDEs were collected. In the second Delphi round, consensus was achieved to accept the proposed definition and to include 49 (59%) of the 83 proposed scenarios as MDEs. In the third Delphi round, consensus was achieved to include a further 13 (15.7%) scenarios as MDEs and to exclude 9 (10.8%); the remaining 12 (14.5%) scenarios were considered equivocal based on the opinions of the panelists. Conclusion Consensus on a definition of MDEs and scenarios representing MDE situations in Palestinian community pharmacy practice was achieved using a formal consensus technique. The use of consensual definitions and scenarios representing MDE situations in community pharmacy practice might minimize methodological variations and their significant effects on the number and rate of MDEs reported in different studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5728560','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5728560"><span>MADM-based smart parking guidance algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Li, Bo; Pei, Yijian; Wu, Hao; Huang, Dijiang</p> <p>2017-01-01</p> <p>In smart parking environments, how to choose suitable parking facilities with various attributes to satisfy certain criteria is an important decision issue.
Based on multiple attribute decision making (MADM) theory, this study proposes a smart parking guidance algorithm that considers three representative decision factors (i.e., walk duration, parking fee, and the number of vacant parking spaces) and various driver preferences. In this paper, the expected number of vacant parking spaces is regarded as an important attribute reflecting how difficult it is to find an available parking space, and a queueing theory-based method is proposed to estimate this expected number for candidate parking facilities with different capacities, arrival rates, and service rates. The effectiveness of the MADM-based parking guidance algorithm was investigated and compared with a blind search-based approach in comprehensive scenarios with various distributions of parking facilities, traffic intensities, and user preferences. Experimental results show that the proposed MADM-based algorithm is effective in choosing suitable parking resources that satisfy users’ preferences. Furthermore, the newly proposed Markov chain-based availability attribute is more effective at representing the availability of parking spaces than the arrival rate-based availability attribute proposed in existing research.
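The queueing-theoretic availability attribute might be sketched as an M/M/C/C (Erlang loss) model: C spaces, Poisson arrivals, exponential parking durations. The paper's exact formulation may differ, and the rates below are illustrative:

```python
from math import factorial

# Hedged sketch: model a parking facility as an M/M/C/C (Erlang loss) system
# and report the expected number of vacant spaces under its stationary
# distribution p_n ~ (lam/mu)^n / n!.

def expected_vacancies(C, lam, mu):
    """Expected vacant spaces for C spaces, arrival rate lam, service rate mu."""
    a = lam / mu                                   # offered load
    weights = [a**n / factorial(n) for n in range(C + 1)]
    Z = sum(weights)
    p = [w / Z for w in weights]                   # stationary distribution
    occupied = sum(n * p[n] for n in range(C + 1))
    return C - occupied

# a 50-space facility, 20 arrivals/hour, mean stay 2 hours (mu = 0.5/hour)
v = expected_vacancies(50, 20.0, 0.5)
```

Candidate facilities with different capacities, arrival rates, and service rates can then be compared on this attribute alongside walk duration and fee.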
PMID:29236698</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27314023','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27314023"><span>Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming</p> <p>2016-01-01</p> <p>We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which is significantly better than previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is obviously better than the SVM-based method. 
The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26243831','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26243831"><span>Mining gene link information for survival pathway hunting.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jing, Gao-Jian; Zhang, Zirui; Wang, Hong-Qiang; Zheng, Hong-Mei</p> <p>2015-08-01</p> <p>This study proposes a gene link-based method for survival time-related pathway hunting. In this method, the authors incorporate gene link information to estimate how a pathway is associated with cancer patient's survival time. Specifically, a gene link-based Cox proportional hazard model (Link-Cox) is established, in which two linked genes are considered together to represent a link variable and the association of the link with survival time is assessed using Cox proportional hazard model. On the basis of the Link-Cox model, the authors formulate a new statistic for measuring the association of a pathway with survival time of cancer patients, referred to as pathway survival score (PSS), by summarising survival significance over all the gene links in the pathway, and devise a permutation test to test the significance of an observed PSS. To evaluate the proposed method, the authors applied it to simulation data and two publicly available real-world gene expression data sets. 
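The permutation test for an observed PSS can be sketched generically; a simple mean-difference score on made-up data stands in for the actual PSS statistic:

```python
import random

# Generic one-sided permutation test of the kind used above to assess an
# observed pathway survival score (PSS). The score function here is a
# stand-in, not the Link-Cox-based PSS itself.

def perm_pvalue(score_fn, data, labels, n_perm=2000, seed=0):
    """Fraction of label shuffles whose score matches or beats the observed one."""
    rng = random.Random(seed)
    observed = score_fn(data, labels)
    hits = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)              # break the data/label association
        if score_fn(data, shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)       # add-one correction

def mean_diff(xs, ys):                     # toy "association" score
    a = [x for x, y in zip(xs, ys) if y]
    b = [x for x, y in zip(xs, ys) if not y]
    return sum(a) / len(a) - sum(b) / len(b)

xs = [5.1, 4.8, 5.3, 1.0, 1.2, 0.9]       # strongly separated toy data
ys = [True, True, True, False, False, False]
p = perm_pvalue(mean_diff, xs, ys)         # small p: association is unlikely by chance
```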
Extensive comparisons with previous methods show the effectiveness and efficiency of the proposed method for survival pathway hunting.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22581332-level-set-method-cupping-artifact-correction-cone-beam-ct','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22581332-level-set-method-cupping-artifact-correction-cone-beam-ct"><span>A level set method for cupping artifact correction in cone-beam CT</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Xie, Shipeng; Li, Haibo; Ge, Qi</p> <p>2015-08-15</p> <p>Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition.
The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_12 --> <div id="page_13" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="241"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9414E..3LN','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9414E..3LN"><span>Automated torso organ segmentation from 3D CT images using structured perceptron and dual decomposition</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku</p> <p>2015-03-01</p> <p>This paper presents a method for torso organ segmentation from
abdominal CT images using structured perceptron and dual decomposition. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images; however, their empirical parameters must be tuned to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model and binary features which represent the relationship between voxel intensities and organ labels. We also optimize the weights of the graphical model by structured perceptron and estimate the best organ label for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The DICE coefficients of left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ISPAr52W4..233O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ISPAr52W4..233O"><span>Feature Vector Construction Method for IRIS Recognition</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.</p> <p>2017-05-01</p> <p>One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. The procedure extracts iris texture information relevant to the subsequent comparison. A thorough investigation of feature vectors obtained from the iris showed that not all the vector elements are equally relevant.
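One simple way to act on the observation that not all vector elements are equally relevant is to mask unstable bits when comparing iris codes with a Hamming distance; the bit vectors and mask below are made up for illustration:

```python
# Hedged sketch: iris codes are typically compared with a Hamming distance,
# and bits flagged as fragile can be excluded via a mask.

def masked_hamming(code_a, code_b, mask):
    """Fraction of disagreeing bits, counting only positions where mask is 1."""
    used = [(a, b) for a, b, m in zip(code_a, code_b, mask) if m]
    if not used:
        raise ValueError("mask excludes every bit")
    return sum(a != b for a, b in used) / len(used)

a    = [1, 0, 1, 1, 0, 0, 1, 0]
b    = [1, 0, 0, 1, 0, 1, 1, 0]   # differs at positions 2 and 5
mask = [1, 1, 0, 1, 1, 0, 1, 1]   # positions 2 and 5 flagged as fragile
d = masked_hamming(a, b, mask)     # fragile disagreements are ignored
```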
There are two characteristics which determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which helps investigate each source of instability independently. Based on this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the methods considered as prior art in recognition accuracy on both datasets.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AcSpA.132..239S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AcSpA.132..239S"><span>A comparative study of progressive versus successive spectrophotometric resolution techniques applied for pharmaceutical ternary mixtures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Saleh, Sarah S.; Lotfy, Hayam M.; Hassan, Nagiba Y.; Salem, Hesham</p> <p>2014-11-01</p> <p>This work presents a comparative study of a novel progressive spectrophotometric resolution technique, namely the amplitude center method (ACM), versus the well-established successive spectrophotometric resolution techniques, namely successive derivative subtraction (SDS), successive derivative of ratio spectra (SDR) and mean centering of ratio spectra (MCR).
All the proposed spectrophotometric techniques consist of several consecutive steps utilizing ratio and/or derivative spectra. The novel amplitude center method (ACM) can be used for the determination of ternary mixtures using single divisor where the concentrations of the components are determined through progressive manipulation performed on the same ratio spectrum. Those methods were applied for the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitation and sensitivity. The obtained results were statistically compared with those obtained from the official BP methods, showing no significant difference with respect to accuracy and precision.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28987711','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28987711"><span>Optimization of digital image processing to determine quantum dots' height and density from atomic force microscopy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ruiz, J E; Paciornik, S; Pinto, L D; Ptak, F; Pires, M P; Souza, P L</p> <p>2018-01-01</p> <p>An optimized method of digital image processing to interpret quantum dots' height measurements obtained by atomic force microscopy is presented. The method was developed by combining well-known digital image processing techniques and particle recognition algorithms. 
The properties of quantum dot structures strongly depend on the dots' height, among other features. The determination of their height is sensitive to small variations in the digital image processing parameters, which can generate misleading results. Comparing the results obtained with two image processing techniques - a conventional method and the new method proposed herein - against the data obtained by determining the height of quantum dots one by one within a fixed area showed that the optimized method leads to more accurate results. Moreover, the log-normal distribution, which is often used to represent natural processes, shows a better fit to the quantum dots' height histogram obtained with the proposed method. Finally, the quantum dot heights obtained were used to calculate the predicted photoluminescence peak energies, which were compared with the experimental data. Again, a better match was observed when using the proposed method to evaluate the quantum dots' height. Copyright © 2017 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19890054460&hterms=rational+better&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Drational%2Bbetter','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19890054460&hterms=rational+better&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Drational%2Bbetter"><span>Towards a rational theory for CFD global stability</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Baker, A. J.; Iannelli, G. S.</p> <p>1989-01-01</p> <p>The fundamental notion of the consistent stability of semidiscrete analogues of evolution PDEs is explored. Lyapunov's direct method is used to develop CFD semidiscrete algorithms which yield the TVD constraint as a special case.
A general formula for supplying dissipation parameters for arbitrary multidimensional conservation law systems is proposed. The reliability of the method is demonstrated by the results of two numerical tests for representative Euler shocked flows.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10615E..4FV','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10615E..4FV"><span>A novel method of the image processing on irregular triangular meshes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta</p> <p>2018-04-01</p> <p>The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and least mean square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis: each triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows the use of a discrete cosine transform of the simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of the method's application are presented.
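The local (barycentric) coordinates mentioned above can be computed directly from a triangle's vertices; a point lies inside the triangle exactly when all three coordinates are non-negative:

```python
# Barycentric coordinates of a 2-D point with respect to a triangle; this is
# the standard construction, sketched here to illustrate the local coordinate
# systems the mesh-based method relies on.

def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)   # twice the signed area
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return u, v, 1.0 - u - v

tri = ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))
u, v, w = barycentric((1.0, 1.0), *tri)
inside = min(u, v, w) >= 0.0               # all coordinates non-negative
```

In a pixel loop, `inside` is the test that decides whether a raster position belongs to the current triangular element.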
It is shown that the advantage of the proposed method is that it combines the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The method described is proposed for image compression, pattern recognition, image quality improvement, image search and indexing. It may also be used as a part of video coding (intra-frame or inter-frame coding, motion detection).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=types+AND+company&pg=5&id=EJ1142471','ERIC'); return false;" href="https://eric.ed.gov/?q=types+AND+company&pg=5&id=EJ1142471"><span>The Design Process of Corporate Universities: A Stakeholder Approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Patrucco, Andrea Stefano; Pellizzoni, Elena; Buganza, Tommaso</p> <p>2017-01-01</p> <p>Purpose: Corporate universities (CUs) have been experiencing tremendous growth during the past years and can represent a real driving force for structured organizations. The paper aims to define the process of CU design shaped around company strategy.
For each step, the authors propose specific roles, activities and methods.…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.ars.usda.gov/research/publications/publication/?seqNo115=351539','TEKTRAN'); return false;" href="http://www.ars.usda.gov/research/publications/publication/?seqNo115=351539"><span>Hybrid finite volume-finite element model for the numerical analysis of furrow irrigation and fertigation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ars.usda.gov/research/publications/find-a-publication/">USDA-ARS?s Scientific Manuscript database</a></p> <p></p> <p></p> <p>Although slowly abandoned in developed countries, furrow irrigation systems continue to be a dominant irrigation method in developing countries. Numerical models represent powerful tools to assess irrigation and fertigation efficiency. While several models have been proposed in the past, the develop...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED046971.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED046971.pdf"><span>Studies of Horst's Procedure for Binary Data Analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Gray, William M.; Hofmann, Richard J.</p> <p></p> <p>Most responses to educational and psychological test items may be represented in binary form. However, such dichotomously scored items present special problems when an analysis of correlational interrelationships among the items is attempted. 
Two general methods of analyzing binary data are proposed by Horst to partial out the effects of…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2011-title48-vol5/pdf/CFR-2011-title48-vol5-sec715-303-70.pdf','CFR2011'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2011-title48-vol5/pdf/CFR-2011-title48-vol5-sec715-303-70.pdf"><span>48 CFR 715.303-70 - Responsibilities of USAID evaluation committees.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2011&page.go=Go">Code of Federal Regulations, 2011 CFR</a></p> <p></p> <p>2011-10-01</p> <p>... INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Source Selection 715..., a representative of the contracting office (who shall be a non-voting member of the committee), and... contracting officer will receive all proposals and provide to the chair a listing and copies of the technical...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26974795','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26974795"><span>Three-dimensional ray tracing in spherical and elliptical generalized Luneburg lenses for application in the human eye lens.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gómez-Correa, J E; Coello, V; Garza-Rivera, A; Puente, N P; Chávez-Cerda, S</p> <p>2016-03-10</p> <p>Ray tracing in spherical Luneburg lenses has always been represented in 2D. All propagation planes in a 3D spherical Luneburg lens generate the same ray tracing, due to its radial symmetry. A geometry without radial symmetry generates a different ray tracing. 
For this reason, a new method for 3D ray tracing through spherical and elliptical Luneburg lenses using 2D methods is proposed. The physics of the propagation is shown here, which allows us to make a ray tracing associated with a vortex beam. A 3D ray tracing in a composite modified Luneburg lens that represents the human eye lens is also presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4119692','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4119692"><span>Upper Limb Posture Estimation in Robotic and Virtual Reality-Based Rehabilitation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Cortés, Camilo; Ardanza, Aitor; Molina-Rueda, F.; Cuesta-Gómez, A.; Ruiz, Oscar E.</p> <p>2014-01-01</p> <p>New motor rehabilitation therapies include virtual reality (VR) and robotic technologies. In limb rehabilitation, limb posture is required to (1) provide a realistic representation of the limb in VR games and (2) assess the patient's improvement. When exoskeleton devices are used in the therapy, the measurements of their joint angles cannot be directly used to represent the posture of the patient's limb, since the human and exoskeleton kinematic models differ. In response to this shortcoming, we propose a method to estimate the posture of the human limb attached to the exoskeleton. We use the exoskeleton joint angle measurements and the constraints of the exoskeleton on the limb to estimate the human limb joint angles.
This paper presents (a) the mathematical formulation and solution to the problem, (b) the implementation of the proposed solution on a commercial exoskeleton system for upper limb rehabilitation, (c) its integration into a rehabilitation VR game platform, and (d) the quantitative assessment of the method during elbow and wrist analytic training. Results show that this method properly estimates the limb posture to (i) animate avatars that represent the patient in VR games and (ii) obtain kinematic data for patient assessment during elbow and wrist analytic rehabilitation. PMID:25110698</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4899318','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4899318"><span>Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming</p> <p>2015-01-01</p> <p>As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data without losing much information is an increasingly pressing issue. Recent studies have tried to address it with dictionary learning and sparse representation methods; however, their computational complexity is still high, which hampers the wider application of sparse representation methods to large-scale fMRI datasets. To effectively address this problem, this work proposes to represent resting state fMRI (rs-fMRI) signals of a whole brain via a statistical sampling based sparse representation.
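The sampling-then-dictionary idea can be sketched with stand-ins: a truncated SVD replaces the paper's dictionary learning and a least-squares fit replaces sparse coding, but the flow (learn from a small sample, then represent all signals) is the same:

```python
import numpy as np

# Hedged sketch of sampling-based signal representation: learn a basis from
# only 10% of the signals, then represent ALL signals over it. The synthetic
# signals below lie in an 8-dimensional subspace ("networks"), so a rank-8
# basis learned from the sample should reconstruct everything accurately.

rng = np.random.default_rng(0)
n_signals, t = 5000, 60
latent = rng.standard_normal((8, t))                 # 8 underlying "networks"
signals = rng.standard_normal((n_signals, 8)) @ latent

sample = signals[rng.choice(n_signals, size=500, replace=False)]  # 10% sample
_, _, Vt = np.linalg.svd(sample, full_matrices=False)
D = Vt[:8]                                           # dictionary atoms (8 x t)

codes = signals @ np.linalg.pinv(D)                  # represent ALL signals
recon = codes @ D
err = np.linalg.norm(signals - recon) / np.linalg.norm(signals)
```

Because the dictionary is learned from a fraction of the data, the expensive step scales with the sample size rather than the whole brain, which is the speed-up the abstract reports.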
First, we sampled the whole brain’s signals via different sampling methods; the sampled signals were then aggregated into an input data matrix to learn a dictionary; finally, this dictionary was used to sparsely represent the whole brain’s signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can reconstruct concurrent brain networks about ten times faster without losing much information. The experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JEI....25e3033L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JEI....25e3033L"><span>Blind motion image deblurring using nonconvex higher-order total variation model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo</p> <p>2016-09-01</p> <p>We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of blind motion image deblurring, which can effectively eliminate the staircase effect of the deblurred image; meanwhile, we employ an image sparse prior to improve the edge recovery quality. Second, to improve the accuracy of the estimated motion blur kernel, we use the L1 norm and H1 norm as the blur kernel regularization terms, considering the sparsity and smoothness of the motion blur kernel. 
Third, because the proposed model is computationally difficult to solve owing to its intrinsic nonconvexity, we propose a binary iterative strategy, which incorporates a reweighted minimization approximation scheme in the outer iteration and a split Bregman algorithm in the inner iteration. We also discuss the convergence of the proposed binary iterative strategy. Finally, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms previous representative methods in both quality of visual perception and quantitative measurement.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA178796','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA178796"><span>An Analysis of Cost Analysis Methods Used during Contract Evaluation and Source Selection in Government Contracting.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1986-12-01</p> <p>optimal value can be stated as: Marginal Productivity of Good A / Price of Good A = Marginal Productivity of Good B / Price of Good B. This...contractor proposed production costs could be used. II. CONTRACT PROPOSAL EVALUATION A. PRICE ANALYSIS Price analysis, in its broadest sense...enters the market with a supply function represented by line S2, then the new price will be reestablished at price OP2 and quantity OQ2. 
Price</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4478728','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4478728"><span>On the evaluation of segmentation editing tools</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.</p> <p>2014-01-01</p> <p>Abstract. Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. 
We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1339051-economic-analysis-optimal-sizing-behind-meter-battery-storage','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1339051-economic-analysis-optimal-sizing-behind-meter-battery-storage"><span>Economic Analysis and Optimal Sizing for behind-the-meter Battery Storage</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Wu, Di; Kintner-Meyer, Michael CW; Yang, Tao</p> <p></p> <p>This paper proposes methods to estimate the potential benefits and determine the optimal energy and power capacity for behind-the-meter battery storage systems (BSS). In the proposed method, a linear program is first formulated using only typical load profiles, energy/demand charge rates, and a set of battery parameters to determine the maximum savings in electric energy cost. The optimization formulation is then adapted to include battery cost as a function of its power and energy capacity in order to capture the trade-off between benefits and cost, and therefore to determine the most economic battery size. Using the proposed methods, economic analysis and optimal sizing have been performed for a few commercial buildings and utility rate structures that are representative of those found in the various regions of the Continental United States. The key factors that affect the economic benefits and optimal size have been identified. 
The proposed methods and case study results can not only help commercial and industrial customers or battery vendors evaluate and size storage systems for behind-the-meter applications, but also assist utilities and policy makers in designing electricity rates or subsidies to promote the development of energy storage.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012IJTPE.132....2K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012IJTPE.132....2K"><span>Evaluation of Foreign Investment in Power Plants using Real Options</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kato, Moritoshi; Zhou, Yicheng</p> <p></p> <p>This paper proposes new methods for evaluating foreign investment in power plants under market uncertainty using a real options approach. We suppose a thermal power plant project in a deregulated electricity market. One of our proposed methods calculates the cash flow generated by the project in a reference year using actual market data, to incorporate the periodic characteristics of energy prices into a yearly cash flow model. We build the stochastic yearly cash flow model with an initial value equal to the cash flow in the reference year, and a given trend and volatility. Then we calculate the real options value (ROV) of the project, which has abandonment options, using the yearly cash flow model. Our other proposed method evaluates foreign-currency/domestic-currency exchange rate risk by representing the ROV in the foreign currency as a yearly payoff and converting it to the ROV in the domestic currency using a stochastic exchange rate model. We analyze the effect of the heat rate and the operation and maintenance costs of the power plant on the ROV, and evaluate exchange rate risk through numerical examples. 
Our proposed method will be useful for the risk management of foreign investment in power plants.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5487008','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5487008"><span>Accelerated High-Dimensional MR Imaging with Sparse Sampling Using Low-Rank Tensors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>He, Jingfei; Liu, Qiegen; Christodoulou, Anthony G.; Ma, Chao; Lam, Fan</p> <p>2017-01-01</p> <p>High-dimensional MR imaging often requires long data acquisition time, thereby limiting its practical applications. This paper presents a low-rank tensor based method for accelerated high-dimensional MR imaging using sparse sampling. This method represents high-dimensional images as low-rank tensors (or partially separable functions) and uses this mathematical structure for sparse sampling of the data space and for image reconstruction from highly undersampled data. More specifically, the proposed method acquires two datasets with complementary sampling patterns, one for subspace estimation and the other for image reconstruction; image reconstruction from highly undersampled data is accomplished by fitting the measured data with a sparsity constraint on the core tensor and a group sparsity constraint on the spatial coefficients jointly using the alternating direction method of multipliers. The usefulness of the proposed method is demonstrated in MRI applications; it may also have applications beyond MRI. 
PMID:27093543</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3522947','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3522947"><span>Finger Vein Recognition Based on Local Directional Code</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang</p> <p>2012-01-01</p> <p>Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary-pattern-based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. 
PMID:23202194</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_13 --> <div id="page_14" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="261"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23202194','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23202194"><span>Finger vein recognition based on local directional code.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang</p> <p>2012-11-05</p> <p>Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. 
As an improperly segmented network may degrade the recognition accuracy, binary-pattern-based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EntIS...7..569A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EntIS...7..569A"><span>A method for determining customer requirement weights based on TFMF and TLR</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ai, Qingsong; Shu, Ting; Liu, Quan; Zhou, Zude; Xiao, Zheng</p> <p>2013-11-01</p> <p>'Customer requirements' (CRs) management plays an important role in enterprise systems (ESs) by processing customer-focused information. Quality function deployment (QFD) is one of the main CRs analysis methods. Because CR weights are crucial for the input of QFD, we developed a method for determining CR weights based on trapezoidal fuzzy membership function (TFMF) and 2-tuple linguistic representation (TLR). To improve the accuracy of CR weights, we propose to apply TFMF to describe CR weights so that they can be appropriately represented. Because fuzzy logic cannot aggregate information without loss, the TLR model is adopted as well. 
We first describe the basic concepts of TFMF and TLR and then introduce an approach to compute CR weights. Finally, an example is provided to explain and verify the proposed method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MSSP...70..161H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MSSP...70..161H"><span>Autocorrelation-based time synchronous averaging for condition monitoring of planetary gearboxes in wind turbines</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ha, Jong M.; Youn, Byeng D.; Oh, Hyunseok; Han, Bongtae; Jung, Yoongho; Park, Jungho</p> <p>2016-03-01</p> <p>We propose autocorrelation-based time synchronous averaging (ATSA) to cope with the challenges associated with the current practice of time synchronous averaging (TSA) for planet gears in planetary gearboxes of wind turbine (WT). An autocorrelation function that represents physical interactions between the ring, sun, and planet gears in the gearbox is utilized to define the optimal shape and range of the window function for TSA using actual kinetic responses. The proposed ATSA offers two distinctive features: (1) data-efficient TSA processing and (2) prevention of signal distortion during the TSA process. It is thus expected that an order analysis with the ATSA signals significantly improves the efficiency and accuracy in fault diagnostics of planet gears in planetary gearboxes. Two case studies are presented to demonstrate the effectiveness of the proposed method: an analytical signal from a simulation and a signal measured from a 2 kW WT testbed. 
It can be concluded from the results that the proposed method outperforms conventional TSA methods in condition monitoring of the planetary gearbox when the amount of available stationary data is limited.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29220317','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29220317"><span>No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Tsung-Jung; Liu, Kuan-Hsien</p> <p>2018-03-01</p> <p>A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) that predicts quality scores. Scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from the selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one; they achieve better performance than the original single-scale method. Because they draw features from five different domains at multiple image scales and use the outputs (scores) of the selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust to more than 24 image distortion types. They can also be used to evaluate images with authentic distortions. 
The extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018IJASS.tmp...26Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018IJASS.tmp...26Z"><span>Waypoints Following Guidance for Surface-to-Surface Missiles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhou, Hao; Khalil, Elsayed M.; Rahman, Tawfiqur; Chen, Wanchun</p> <p>2018-04-01</p> <p>The paper proposes a waypoint-following guidance law. In this method, an optimal trajectory is first generated and then represented through a set of waypoints that are distributed from the starting point up to the final target point using a polynomial. The guidance system then works by issuing the guidance commands needed to move from one waypoint to the next. Here the method is applied to a surface-to-surface missile. The results show that the method is feasible for on-board application.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19920051717&hterms=millwater&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAuthor-Name%26N%3D0%26No%3D10%26Ntt%3Dmillwater','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19920051717&hterms=millwater&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAuthor-Name%26N%3D0%26No%3D10%26Ntt%3Dmillwater"><span>Structural system reliability calculation using a probabilistic fault tree analysis method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Torng, T. Y.; Wu, Y.-T.; Millwater, H. 
R.</p> <p>1992-01-01</p> <p>The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require intensive computer calculations. A computer program has been developed to implement the PFTA.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9784E..2KZ','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9784E..2KZ"><span>Lung vessel segmentation in CT images using graph-cuts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.</p> <p>2016-03-01</p> <p>Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters that are based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as graph-cuts incorporate neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the number of voxels in high-resolution CT scans, the memory requirement and time consumption for building a graph structure are very high. 
In order to make the graph representation computationally tractable, those voxels that are considered clearly background are removed from the graph nodes, using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results of the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to the previous methods, with F1 scores of 0.76 and 0.69. In the VESSEL12 dataset, our method obtained competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28117742','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28117742"><span>Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Oh, Sang-Il; Kang, Hang-Bong</p> <p>2017-01-22</p> <p>To understand driving environments effectively, it is important for sensor-based intelligent vehicle systems to achieve accurate detection and classification of objects. Object detection is performed for the localization of objects, whereas object classification recognizes object classes from detected object regions. 
For accurate object detection and classification, fusing information from multiple sensors is a key component of the representation and perception processes. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers, such as 3D point clouds and image data using a convolutional neural network (CNN). The unary classifiers for the two sensors are the CNN with five layers, which use more than two pre-trained convolutional layers to consider local to global features as data representation. To represent data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on the object candidate regions generated using object proposal generation to realize color flattening and semantic grouping for charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on a KITTI benchmark dataset to detect and classify three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than the previous methods. Our proposed method extracted approximately 500 proposals on a 1226 × 370 image, whereas the original selective search method extracted approximately 10^6 × n proposals. 
We obtained classification performance with 77.72% mean average precision over the entirety of the classes in the moderate detection level of the KITTI benchmark dataset.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5298778','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5298778"><span>Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Oh, Sang-Il; Kang, Hang-Bong</p> <p>2017-01-01</p> <p>To understand driving environments effectively, it is important for sensor-based intelligent vehicle systems to achieve accurate detection and classification of objects. Object detection is performed for the localization of objects, whereas object classification recognizes object classes from detected object regions. For accurate object detection and classification, fusing information from multiple sensors is a key component of the representation and perception processes. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers, such as 3D point clouds and image data using a convolutional neural network (CNN). The unary classifiers for the two sensors are the CNN with five layers, which use more than two pre-trained convolutional layers to consider local to global features as data representation. 
To represent data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on the object candidate regions generated using object proposal generation to realize color flattening and semantic grouping for charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on a KITTI benchmark dataset to detect and classify three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than the previous methods. Our proposed method extracted approximately 500 proposals on a 1226 × 370 image, whereas the original selective search method extracted approximately 10^6 × n proposals. We obtained classification performance with 77.72% mean average precision over the entirety of the classes in the moderate detection level of the KITTI benchmark dataset. PMID:28117742</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPRS..136...26H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPRS..136...26H"><span>Recognition of building group patterns in topographic maps based on graph partitioning and random forest</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>He, Xianjin; Zhang, Xinchang; Xin, Qinchuan</p> <p>2018-02-01</p> <p>Recognition of building group patterns (i.e., the arrangement and form exhibited by a collection of buildings at a given mapping scale) is important to the understanding and modeling of geographic space and is hence essential to a wide range of downstream applications such as map generalization. Most of the existing methods develop rigid rules based on the topographic relationships between building pairs to identify building group patterns and thus their applications are often limited. 
This study proposes a method to identify a variety of building group patterns that allow for map generalization. The method first identifies building group patterns from potential building clusters based on a machine-learning algorithm and further partitions the building clusters with no recognized patterns based on the graph partitioning method. The proposed method is applied to the datasets of three cities that are representative of the complex urban environment in Southern China. Assessment of the results based on the reference data suggests that the proposed method is able to recognize both regular (e.g., the collinear, curvilinear, and rectangular patterns) and irregular (e.g., the L-shaped, H-shaped, and high-density patterns) building group patterns well, given that the correctness values are consistently nearly 90% and the completeness values are all above 91% for the three study areas. The proposed method shows promise in automated recognition of building group patterns that allows for map generalization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5890978','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5890978"><span>Forecasting new product diffusion using both patent citation and web search traffic</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lee, Won Sang; Choi, Hyo Shin</p> <p>2018-01-01</p> <p>Accurate demand forecasting for new technology products is a key factor in the success of a business. We propose a way of forecasting a new product’s diffusion through technology diffusion and interest diffusion. Technology diffusion and interest diffusion are measured by the volume of patent citations and web search traffic, respectively. 
We apply the proposed method to forecast the sales of hybrid cars and industrial robots in the US market. The results show that technology diffusion, as represented by patent citations, can explain long-term sales for hybrid cars and industrial robots. On the other hand, interest diffusion, as represented by web search traffic, can help to improve the predictability of market sales of hybrid cars in the short term. However, interest diffusion struggles to explain the sales of industrial robots owing to their different market characteristics. The findings indicate that our proposed model explains the diffusion of consumer goods relatively well. PMID:29630616</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10057E..0PZ','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10057E..0PZ"><span>Application of kernel method in fluorescence molecular tomography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhao, Yue; Baikejiang, Reheman; Li, Changqing</p> <p>2017-02-01</p> <p>Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can improve the efficiency of FMT reconstruction. We have developed a kernel method to introduce the anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For the finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. Then the fluorophore concentration at each node is represented by a kernel coefficient vector and the corresponding kernel function.
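A minimal sketch of the kernel representation just described, with toy dimensions and a Gaussian kernel over made-up anatomical features (the paper's actual kernel construction and mesh are not reproduced here): the concentration is written as x = Kα, the system matrix becomes AK, and α is recovered by least squares rather than the paper's iterative solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n mesh nodes, m detector measurements (both sizes hypothetical).
n, m = 50, 30
features = rng.normal(size=(n, 3))      # stand-in anatomical feature per node

# Gaussian kernel matrix built from the anatomical features.
d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * 0.5 ** 2))

A = rng.normal(size=(m, n))             # stand-in sensitivity matrix
x_true = K @ rng.normal(size=n)         # concentration expressible as K @ alpha
y = A @ x_true                          # noiseless measurements

# New system matrix A @ K; reconstruct kernel coefficients, then concentration.
alpha, *_ = np.linalg.lstsq(A @ K, y, rcond=None)
x_rec = K @ alpha
```

Because m < n the system is underdetermined, so x_rec matches the data (A x_rec = y) rather than x_true exactly; the kernel prior is what pulls the solution toward anatomically plausible images.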
In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process. We convert the FMT reconstruction problem into the kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can then be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance can be obtained directly from the anatomical image and is included in the forward modeling. One of the advantages is that we do not need to segment the anatomical image into targets and background.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29670564','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29670564"><span>Biometric and Emotion Identification: An ECG Compression Based Method.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Brás, Susana; Ferreira, Jacqueline H T; Soares, Sandra C; Pinho, Armando J</p> <p>2018-01-01</p> <p>We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles, indirectly representing the flow of blood inside the heart, and it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state.
The use of information-theoretic data models, associated with data compression algorithms, allowed us to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by altering the templates used to train the model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5893853','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5893853"><span>Biometric and Emotion Identification: An ECG Compression Based Method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Brás, Susana; Ferreira, Jacqueline H. T.; Soares, Sandra C.; Pinho, Armando J.</p> <p>2018-01-01</p> <p>We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles, indirectly representing the flow of blood inside the heart, and it is known to convey a key that allows biometric identification.
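The three steps above can be sketched with a general-purpose compressor; zlib stands in here for the information-theoretic context models the authors use, and the toy "records" are synthetic waveforms, not real ECGs:

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)

def quantize(signal, n_levels=8):
    """Step 1: map a real-valued signal to a byte string of symbols."""
    s = np.asarray(signal, dtype=float)
    lo, hi = s.min(), s.max()
    levels = np.clip(((s - lo) / (hi - lo + 1e-12) * n_levels).astype(int),
                     0, n_levels - 1)
    return bytes(levels.tolist())

def conditional_size(reference, x):
    """Step 2: size of x when compressed after the reference record."""
    return len(zlib.compress(reference + x)) - len(zlib.compress(reference))

def classify(x, database):
    """Step 3: 1-NN -- the label whose reference models x best."""
    return min(database, key=lambda label: conditional_size(database[label], x))

# Toy records: a periodic "rhythm" vs. an unstructured random walk.
t = np.linspace(0, 20, 2000)
rec_a = quantize(np.sin(2 * np.pi * 1.0 * t))
rec_b = quantize(np.cumsum(rng.normal(size=2000)))
db = {"person_A": rec_a, "person_B": rec_b}

# A probe re-recording; identical to rec_a here to keep the sketch deterministic.
probe = quantize(np.sin(2 * np.pi * 1.0 * t))
```

`classify(probe, db)` picks the reference after which the probe compresses most cheaply, which is the compression analogue of nearest-neighbour distance.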
Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed us to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by altering the templates used to train the model. PMID:29670564</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AIPC.1285...99U','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AIPC.1285...99U"><span>Competitive Facility Location with Fuzzy Random Demands</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke</p> <p>2010-10-01</p> <p>This paper proposes a new location problem for competitive facilities, e.g., shops, under uncertainty and vagueness in the demands for the facilities in a plane. By representing the demands for facilities as fuzzy random variables, the location problem can be formulated as a fuzzy random programming problem.
To solve the fuzzy random programming problem, the α-level sets of the fuzzy numbers are first used to transform it into a stochastic programming problem; secondly, by using their expectations and variances, it is reformulated as a deterministic programming problem. After showing that an optimal solution can be found by solving 0-1 programming problems, a solution method is proposed that improves the tabu search algorithm with strategic oscillation. The efficiency of the proposed method is demonstrated by applying it to numerical examples of facility location problems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9785E..2YD','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9785E..2YD"><span>Classification of pulmonary nodules in lung CT images using shape and texture features</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Dutta, Anirvan; Garg, Mandeep; Khandelwal, Niranjan; Kumar, Prafulla</p> <p>2016-03-01</p> <p>Differentiation of malignant and benign pulmonary nodules is important for the prognosis of lung cancer. In this paper, benign and malignant nodules are classified using a support vector machine. Several shape-based and texture-based features are used to represent the pulmonary nodules in the feature space. A semi-automated technique is used for nodule segmentation. Relevant features are selected for efficient representation of nodules in the feature space. The proposed scheme and the competing technique are evaluated on a data set of 542 nodules of the Lung Image Database Consortium and Image Database Resource Initiative. Nodules with a composite malignancy rank of "1" or "2" are considered benign and those with "4" or "5" malignant.
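The SVM classification step can be sketched as follows; the features and labels below are synthetic stand-ins for the paper's shape and texture descriptors, and a minimal Pegasos-style linear SVM replaces whatever SVM implementation the authors used:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in features: rows = nodules, columns = shape/texture descriptors.
# Benign (label -1) and malignant (+1) clusters are well separated by design.
X_benign = rng.normal(loc=-1.0, scale=0.4, size=(40, 5))
X_malig = rng.normal(loc=+1.0, scale=0.4, size=(40, 5))
X = np.vstack([X_benign, X_malig])
y = np.array([-1.0] * 40 + [1.0] * 40)

def svm_train(X, y, lam=0.01, epochs=50):
    """Minimal linear SVM trained with the Pegasos subgradient method."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            margin = y[i] * (X[i] @ w)
            w = (1 - eta * lam) * w          # regularization shrinkage
            if margin < 1:                   # hinge-loss subgradient step
                w += eta * y[i] * X[i]
    return w

w = svm_train(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
```

Real pipelines would use a kernel SVM with cross-validated hyperparameters and evaluate on held-out nodules rather than training accuracy.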
The area under the receiver operating characteristic curve is 0.9465 for the proposed method. The proposed method outperforms the competing technique.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24954810','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24954810"><span>Centralized PI control for high dimensional multivariable systems based on equivalent transfer function.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Luan, Xiaoli; Chen, Qiang; Liu, Fei</p> <p>2014-09-01</p> <p>This article presents a new scheme to design a full-matrix controller for high-dimensional multivariable processes based on the equivalent transfer function (ETF). Differing from existing ETF methods, the proposed ETF is derived directly by exploiting the relationship between the equivalent closed-loop transfer function and the inverse of the open-loop transfer function. Based on the obtained ETF, the full-matrix controller is designed utilizing existing PI tuning rules. The new proposed ETF model can more accurately represent the original processes. Furthermore, the full-matrix centralized controller design method proposed in this paper is applicable to high-dimensional multivariable systems with satisfactory performance. Comparison with other multivariable controllers shows that the designed ETF-based controller is superior with respect to design complexity and achieved performance. Copyright © 2014 ISA. Published by Elsevier Ltd.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28359960','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28359960"><span>Optimization of cell seeding in a 2D bio-scaffold system using computational models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong</p> <p>2017-05-01</p> <p>The cell expansion process is a crucial part of generating cells on a large scale in a bioreactor system. Hence, it is important to set operating conditions (e.g., initial cell seeding distribution, culture medium flow rate) to an optimal level. Often, the initial cell seeding distribution is overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models: the first model represents cell growth patterns, whereas the second model determines optimal initial cell seeding positions for adherent cell expansions. Cell growth simulation from the first model demonstrates that the model can represent various cell types with known probabilities. The second model involves a combination of combinatorial optimization, Monte Carlo methods, and concepts of the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results have shown that the recommended input configurations obtained from the proposed optimization method are indeed optimal. The results have also illustrated the effectiveness of the proposed optimization method.
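A minimal Monte Carlo sketch of the growth-model idea, assuming a simple divide-into-empty-neighbour rule on a 2D grid (the paper's actual growth probabilities and scaffold geometry are not specified here); it lets one compare the coverage reached from different seeding layouts:

```python
import numpy as np

rng = np.random.default_rng(3)

def grow(grid, steps, p_divide=0.5):
    """Monte Carlo growth: each occupied site may divide into an empty neighbour."""
    g = grid.copy()
    n = g.shape[0]
    for _ in range(steps):
        occupied = np.argwhere(g)
        for r, c in occupied[rng.permutation(len(occupied))]:
            if rng.random() < p_divide:
                empty = [(r + dr, c + dc)
                         for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= r + dr < n and 0 <= c + dc < n
                         and not g[r + dr, c + dc]]
                if empty:
                    rr, cc = empty[rng.integers(len(empty))]
                    g[rr, cc] = True
    return g

n = 20
clustered = np.zeros((n, n), bool)
clustered[9:11, 9:11] = True                          # 4 seeds together
spread = np.zeros((n, n), bool)
for r, c in [(5, 5), (5, 14), (14, 5), (14, 14)]:     # 4 seeds far apart
    spread[r, c] = True

cov_clustered = grow(clustered, steps=15).mean()
cov_spread = grow(spread, steps=15).mean()
```

Running many such simulations per candidate layout, and searching over layouts, is the combinatorial-optimization-plus-Monte-Carlo pattern the abstract describes.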
The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012TJSAI..27..245B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012TJSAI..27..245B"><span>Improving the Accuracy of Attribute Extraction using the Relatedness between Attribute Values</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bollegala, Danushka; Tani, Naoki; Ishizuka, Mitsuru</p> <p></p> <p>Extracting attribute-values related to entities from web texts is an important step in numerous web-related tasks such as information retrieval, information extraction, and entity disambiguation (namesake disambiguation). For example, for a search query that contains a personal name, we can not only return documents that contain that personal name, but if we have attribute-values such as the organization for which that person works, we can also suggest documents that contain information related to that organization, thereby improving the user's search experience. Despite numerous potential applications of attribute extraction, it remains a challenging task due to the inherent noise in web data -- often a single web page contains multiple entities and attributes. We propose a graph-based approach to select the correct attribute-values from a set of candidate attribute-values extracted for a particular entity. First, we build an undirected weighted graph in which attribute-values are represented by nodes, and the edge that connects two nodes in the graph represents the degree of relatedness between the corresponding attribute-values.
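The relatedness graph just described can be represented with weighted edges, and a maximum spanning tree extracted by running Kruskal's algorithm on the heaviest edges first; the attribute-values and weights below are invented for illustration, and this sketch ignores the paper's one-value-per-attribute-type constraint:

```python
# Edges: (relatedness weight, attribute-value, attribute-value). All hypothetical.
edges = [
    (0.9, "Google", "software engineer"),
    (0.8, "Google", "Mountain View"),
    (0.2, "Google", "guitarist"),
    (0.7, "software engineer", "Mountain View"),
    (0.1, "guitarist", "Mountain View"),
]

def max_spanning_tree(edges):
    """Kruskal's algorithm taking heaviest edges first = maximum spanning tree."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:            # find root with path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:                     # keep edge only if it joins two components
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

tree = max_spanning_tree(edges)
```

Here the tree keeps the 0.9, 0.8, and 0.2 edges and drops the 0.7 edge because it would close a cycle; in the paper, low-relatedness values left on the fringe of the tree are the ones rejected as incorrect.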
Next, we find the maximum spanning tree of this graph that connects exactly one attribute-value for each attribute-type. The proposed method outperforms previously proposed attribute extraction methods on a dataset that contains 5000 web pages.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SMaS...25j5029S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SMaS...25j5029S"><span>Periodic reference tracking control approach for smart material actuators with complex hysteretic characteristics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sun, Zhiyong; Hao, Lina; Song, Bo; Yang, Ruiguo; Cao, Ruimin; Cheng, Yu</p> <p>2016-10-01</p> <p>Micro/nano positioning technologies have been attractive for decades for their various applications in both industrial and scientific fields. The actuators employed in these technologies are typically smart material actuators, which possess inherent hysteresis that may cause systems to behave unexpectedly. Periodic reference tracking capability is fundamental for apparatuses such as the scanning probe microscope, which employs smart material actuators to generate periodic scanning motion. However, traditional controllers such as PID cannot guarantee accurate, fast periodic scanning motion. To tackle this problem and to allow practical implementation in digital devices, this paper proposes a novel control method named the discrete extended unparallel Prandtl-Ishlinskii model based internal model (d-EUPI-IM) control approach. To tackle modeling uncertainties, the robust d-EUPI-IM control approach is investigated, and the associated sufficient stabilizing conditions are derived.
The advantages of the proposed controller are: it is designed and represented in discrete form, and is thus practical for digital device implementation; the extended unparallel Prandtl-Ishlinskii model can precisely represent complex forward/inverse hysteretic characteristics, which reduces modeling uncertainties and benefits controller design; and the internal model principle based control module can be utilized as a natural oscillator for tackling the periodic reference tracking problem. The proposed controller was verified through comparative experiments on a piezoelectric actuator platform, and convincing results were achieved.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_14 --> <div id="page_15" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="281"> <li> <p><a target="_blank" rel="noopener noreferrer"
onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19129026','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19129026"><span>Two-Phase chief complaint mapping to the UMLS metathesaurus in Korean electronic medical records.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kang, Bo-Yeong; Kim, Dae-Won; Kim, Hong-Gee</p> <p>2009-01-01</p> <p>The task of automatically determining the concepts referred to in chief complaint (CC) data from electronic medical records (EMRs) is an essential component of many EMR applications aimed at biosurveillance for disease outbreaks. Previous approaches that have been used for this concept mapping have mainly relied on term-level matching, whereby the medical terms in the raw text and their synonyms are matched with concepts in a terminology database. These previous approaches, however, have shortcomings that limit their efficacy in CC concept mapping, where the concepts for CC data are often represented by associative terms rather than by synonyms. Therefore, herein we propose a concept mapping scheme based on a two-phase matching approach, especially for application to Korean CCs, which uses term-level complete matching in the first phase and concept-level matching based on concept learning in the second phase. The proposed concept-level matching learns all the terms (associative terms as well as synonyms) that represent a concept and predicts the most probable concept for a CC based on the learned terms.
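The two phases can be sketched as a dictionary lookup followed by an overlap score against learned term sets; the terminology entries, concept IDs, and learned terms below are hypothetical, and real CCs would be Korean rather than English:

```python
# Phase 1 table: exact CC strings mapped to (hypothetical) UMLS concept IDs.
terminology = {
    "chest pain": "C0008031",
    "fever": "C0015967",
}

# Terms learned per concept from training CCs: synonyms AND associative terms.
learned_terms = {
    "C0008031": {"chest", "pain", "pressure", "tightness"},
    "C0015967": {"fever", "febrile", "temperature", "chills"},
}

def map_cc(cc):
    """Two-phase mapping: complete match first, learned-term overlap second."""
    cc = cc.lower().strip()
    if cc in terminology:                     # phase 1: term-level complete match
        return terminology[cc]
    tokens = set(cc.split())                  # phase 2: concept-level matching
    scores = {c: len(tokens & terms) for c, terms in learned_terms.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

`map_cc("chest tightness")` has no entry in the terminology table, but phase 2 still maps it to the chest-pain concept because "chest" and "tightness" are learned associative terms; a real system would weight terms by learned probabilities rather than raw overlap.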
Experiments on 1204 CCs extracted from 15,618 discharge summaries of Korean EMRs showed that the proposed method gave significantly improved F-measure values compared to the baseline system, with improvements of up to 73.57%.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29685173','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29685173"><span>Dynamic characteristics of oxygen consumption.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ye, Lin; Argha, Ahmadreza; Yu, Hairong; Celler, Branko G; Nguyen, Hung T; Su, Steven</p> <p>2018-04-23</p> <p>Previous studies have indicated that oxygen uptake ([Formula: see text]) is one of the most accurate indices for assessing the cardiorespiratory response to exercise. In most existing studies, the response of [Formula: see text] is often roughly modelled as a first-order system due to inadequate stimulation and a low signal-to-noise ratio. To overcome this difficulty, this paper proposes a novel nonparametric kernel-based method for the dynamic modelling of the [Formula: see text] response to provide a more robust estimation. Twenty healthy non-athlete participants conducted treadmill exercises with monotonous stimulation (e.g., a single step function as input). During the exercise, [Formula: see text] was measured and recorded by a popular portable gas analyser ([Formula: see text], COSMED). Based on the recorded data, a kernel-based estimation method was proposed to perform the nonparametric modelling of [Formula: see text]. For the proposed method, a properly selected kernel can represent the prior modelling information to reduce the dependence on comprehensive stimulation. Furthermore, due to the special elastic net formed by the [Formula: see text] norm and the kernelised [Formula: see text] norm, the estimations are smooth and concise.
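The kernel-regularized FIR estimate with a stable spline (SS) kernel can be sketched as follows; the first-order SS kernel K[i, j] = λ^max(i, j), the step-input data, and the regularization constant are assumptions for illustration, not the authors' exact experimental setup:

```python
import numpy as np

rng = np.random.default_rng(4)

# True impulse response: exponential decay (a plausible VO2-like dynamic).
n_g = 30
g_true = 0.8 ** np.arange(n_g)

# Single step input (monotonous stimulation); output = convolution + noise.
N = 120
u = (np.arange(N) >= 10).astype(float)
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n_g)]
                for t in range(N)])                  # Toeplitz regressor matrix
y = Phi @ g_true + 0.01 * rng.normal(size=N)

# First-order stable-spline (TC) kernel encoding smooth, decaying responses.
lam, gamma = 0.9, 1.0
idx = np.arange(n_g)
K = lam ** np.maximum(idx[:, None], idx[None, :])

# Regularized estimate: g = K Phi^T (Phi K Phi^T + gamma I)^{-1} y.
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + gamma * np.eye(N), y)

rel_err = np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true)
```

In practice λ and γ are tuned from the data (e.g., by marginal likelihood), which is part of what makes such kernel methods robust under the weak excitation of a single step input.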
Additionally, the finite impulse response based nonparametric model estimated by the proposed method can optimally select the model order and achieves a better goodness-of-fit than classical methods. Several kernels were introduced for the kernel-based [Formula: see text] modelling method. The results clearly indicated that the stable spline (SS) kernel has the best performance for [Formula: see text] modelling. Particularly, based on the experimental data from 20 participants, the estimated response from the proposed method with the SS kernel was significantly better than the results from the benchmark method [i.e., the prediction error method (PEM)] ([Formula: see text] vs [Formula: see text]). The proposed nonparametric modelling method is an effective method for estimating the impulse response of the VO2-speed system. Furthermore, the identified average nonparametric model can dynamically predict the [Formula: see text] response with acceptable accuracy during treadmill exercise.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014SPIE.9024E..06A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014SPIE.9024E..06A"><span>Symbolic feature detection for image understanding</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Aslan, Sinem; Akgül, Ceyhun Burak; Sankur, Bülent</p> <p>2014-03-01</p> <p>In this study we propose a model-driven codebook generation method used to assign probability scores to pixels in order to represent the underlying local shapes they reside in.
In the first version of the symbol library, we limited ourselves to photometric and similarity transformations applied to eight prototypical shapes: flat plateau, ramp, valley, ridge, and circular and elliptic pit and hill, and used a randomized decision forest as the statistical classifier to compute the shape class ambiguity of each pixel. We achieved 90% accuracy in the identification of known objects from alternate views; however, we could not outperform texture, global, and local shape methods, only the color-based method, in the recognition of unknown objects. We present a progress plan to be accomplished as future work to improve the proposed approach further.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18048134','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18048134"><span>Linear reduction method for predictive and informative tag SNP selection.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>He, Jingwu; Westbrooks, Kelly; Zelikovsky, Alexander</p> <p>2005-01-01</p> <p>Constructing a complete human haplotype map is helpful when associating complex diseases with their related SNPs. Unfortunately, the number of SNPs is very large and it is costly to sequence many individuals. Therefore, it is desirable to reduce the number of SNPs that should be sequenced to a small number of informative representatives called tag SNPs. In this paper, we propose a new linear algebra-based method for selecting and using tag SNPs. We measure the quality of our tag SNP selection algorithm by comparing actual SNPs with SNPs predicted from selected linearly independent tag SNPs.
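The linear-reduction idea can be sketched as picking linearly independent columns as tags and predicting every SNP from them by least squares; the haplotype matrix below is synthetic, and a real pipeline would round predictions back to 0/1 alleles:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy haplotype matrix: rows = individuals, columns = SNPs (0/1 alleles).
# Some SNPs are exact linear combinations of others (high LD), so few tags suffice.
n_ind, n_tags = 30, 4
basis = rng.integers(0, 2, size=(n_ind, n_tags)).astype(float)
mix = rng.integers(0, 2, size=(n_tags, 8)).astype(float)
haplotypes = np.hstack([basis, basis @ mix])     # 12 SNPs, rank <= 4

# Greedy selection of linearly independent columns as tag SNPs.
tags = []
for j in range(haplotypes.shape[1]):
    if np.linalg.matrix_rank(haplotypes[:, tags + [j]]) > len(tags):
        tags.append(j)

# Predict every SNP from the tags by least squares (the linear reduction).
T = haplotypes[:, tags]
coef, *_ = np.linalg.lstsq(T, haplotypes, rcond=None)
predicted = T @ coef
max_err = np.abs(predicted - haplotypes).max()
```

Because every column lies in the span of the selected tags by construction, the reconstruction here is exact; on real data the prediction is approximate, which is where the reported ~2% error rate comes from.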
Our experiments show that, for sufficiently long haplotypes, knowing only 0.4% of all SNPs, the proposed linear reduction method predicts an unknown haplotype with an error rate below 2%, based on 10% of the population.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10608E..0JY','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10608E..0JY"><span>Infrared images target detection based on background modeling in the discrete cosine domain</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ye, Han; Pei, Jihong</p> <p>2018-02-01</p> <p>Background modeling is the critical technology for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Establishing a background model becomes difficult when the scene is a complex fluctuating sea surface. In this paper, the background stability and the separability between background and target are analyzed deeply in the discrete cosine transform (DCT) domain; on this basis, we propose a background modeling method. The proposed method models each frequency point as a single Gaussian model to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform other background modeling methods in the spatial domain.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17946253','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17946253"><span>Reconstructed phase spaces of intrinsic mode functions.
Application to postural stability analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Snoussi, Hichem; Amoud, Hassan; Doussot, Michel; Hewson, David; Duchêne, Jacques</p> <p>2006-01-01</p> <p>In this contribution, we propose an efficient nonlinear analysis method characterizing postural steadiness. The analyzed signal is the displacement of the centre of pressure (COP) collected from a force plate used for measuring postural sway. The proposed method consists of analyzing the nonlinear dynamics of the intrinsic mode functions (IMFs) of the COP signal. The nonlinear properties are assessed through the reconstructed phase spaces of the different IMFs. This study shows some specific geometries of the attractors of some intrinsic modes. Moreover, the volume spanned by the geometric attractors in the reconstructed phase space represents an efficient indicator of the postural stability of the subject. Experimental results corroborate the effectiveness of the method in blindly discriminating among young subjects, elderly subjects, and subjects presenting a risk of falling.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CompM.tmp..271A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CompM.tmp..271A"><span>Meshfree truncated hierarchical refinement for isogeometric analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Atri, H. R.; Shojaee, S.</p> <p>2018-05-01</p> <p>In this paper, the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods.
Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides an authentic meshfree approach to refine the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010SPIE.7626E..0HD','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010SPIE.7626E..0HD"><span>Improved estimation of parametric images of cerebral glucose metabolic rate from dynamic FDG-PET using volume-wise principle component analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dai, Xiaoqian; Tian, Jie; Chen, Zhe</p> <p>2010-03-01</p> <p>Parametric images can represent both the spatial distribution and quantification of the biological and physiological parameters of tracer kinetics. The linear least squares (LLS) method is a well-established linear regression method for generating parametric images by fitting compartment models with good computational efficiency.
However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs), which propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. In this method, dynamic PET data are first properly pre-transformed to standardize the noise variance, as PCA is a data-driven technique and cannot itself separate signals from noise. Secondly, volume-wise PCA is applied to the PET data. The signals can be mostly represented by the first few principal components (PCs), and the noise is left in the subsequent PCs. The noise-reduced data are then obtained using the first few PCs by applying 'inverse PCA'. The data should also be transformed back according to the pre-transformation method used in the first step to maintain the scale of the original data set. Finally, the obtained new data set is used to generate parametric images using the linear least squares (LLS) estimation method. Compared with other noise-removal methods, the proposed method can achieve high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulation and with a clinical dynamic FDG PET study.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017LaPhL..14e6201G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017LaPhL..14e6201G"><span>Spiking cortical model based non-local means method for despeckling multiframe optical coherence tomography data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gu, Yameng; Zhang, Xuming</p> <p>2017-05-01</p> <p>Optical coherence tomography (OCT) images are severely degraded by speckle noise.
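The PCA/inverse-PCA denoising pipeline described above can be sketched as follows (the pre-transformation step is omitted because the toy noise is already homoscedastic; all sizes and signals are made up):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy dynamic data: voxels x time frames, a few temporal patterns plus noise.
n_vox, n_t, n_sig = 500, 20, 3
patterns = rng.normal(size=(n_sig, n_t))         # underlying kinetics
weights = rng.normal(size=(n_vox, n_sig))        # per-voxel mixing
clean = weights @ patterns
noisy = clean + 0.5 * rng.normal(size=(n_vox, n_t))

# PCA via SVD of the mean-centred data; keep the first k principal components.
mean = noisy.mean(axis=0)
U, S, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
k = 3
denoised = (U[:, :k] * S[:k]) @ Vt[:k] + mean    # "inverse PCA" on k PCs

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

The denoised TTACs (rows of `denoised`) would then be fed to the LLS fit; the signal lives in a 3-dimensional temporal subspace here, so truncating to 3 PCs discards most of the noise while keeping the kinetics.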
Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010000433','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010000433"><span>An Arbitrary First Order Theory Can Be Represented by a Program: A Theorem</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hosheleva, Olga</p> <p>1997-01-01</p> <p>How can we represent knowledge inside a computer? For formalized knowledge, classical logic seems to be the most adequate tool. 
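The weighted-averaging core of the NLM restoration described above can be sketched as follows. The NMI features extracted from the SCM pulse outputs are abstracted here as generic per-pixel feature vectors; the Gaussian weighting kernel and the parameter h are common NLM choices, not necessarily the letter's exact formulation.

```python
import numpy as np

def nlm_weights(feature_ref, features, h):
    """Non-local-means style weights from Euclidean feature distances.

    feature_ref: feature vector of the pixel being restored.
    features: (n_pixels, n_features) features of the candidate pixels
    in the search window (spanning the considered and neighboring frames).
    """
    d2 = np.sum((features - feature_ref) ** 2, axis=1)
    w = np.exp(-d2 / (h * h))
    return w / w.sum()

def nlm_restore(values, feature_ref, features, h=0.5):
    """Weighted average of candidate pixel values in the search window."""
    return float(nlm_weights(feature_ref, features, h) @ values)

# Pixels whose features are close to the reference dominate the average,
# so the dissimilar (speckle-corrupted) value contributes almost nothing.
features = np.array([[0.0], [0.05], [2.0]])
values = np.array([1.0, 1.2, 9.0])
restored = nlm_restore(values, np.array([0.0]), features, h=0.5)
assert abs(restored - 1.1) < 0.5
```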
Classical logic is behind all formalisms of classical mathematics, and behind many formalisms used in Artificial Intelligence. There is only one serious problem with classical logic: due to Gödel's famous theorem, classical logic is algorithmically undecidable; as a result, when knowledge is represented in the form of logical statements, it is very difficult to check whether, based on these statements, a given query is true or not. To make knowledge representations more algorithmic, a special field of logic programming was invented. An important portion of logic programming is algorithmically decidable. To cover knowledge that cannot be represented in this portion, several extensions of the decidable fragments have been proposed. In the spirit of logic programming, these extensions are usually introduced in such a way that even if a general algorithm is not available, good heuristic methods exist. It is important to check whether the already proposed extensions are sufficient, or whether further extensions are necessary. In the present paper, we show that one particular extension, namely, logic programming with classical negation, introduced by M. Gelfond and V. Lifschitz, can represent (in some reasonable sense) an arbitrary first order logical theory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhyA..496..286Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhyA..496..286Z"><span>Overlapping communities from dense disjoint and high total degree clusters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Hongli; Gao, Yang; Zhang, Yue</p> <p>2018-04-01</p> <p>Community plays an important role in the fields of sociology and biology, and especially in domains of computer science, where systems are often represented as networks.
Community detection is therefore of great importance in these domains. A community is a dense subgraph of the whole graph, with more links among its members than from its members to outside nodes; nodes in the same community probably share common properties or play similar roles in the graph. Communities overlap when nodes in a graph belong to multiple communities. A vast variety of overlapping community detection methods have been proposed in the literature, and the local expansion method is one of the most successful techniques for dealing with large networks. This paper presents a density-based seeding method, in which dense disjoint local clusters are searched for and selected as seeds. The proposed method selects a seed by the total degree and density of local clusters, using only local structures of the network. Furthermore, this paper proposes a novel community refining phase that minimizes the conductance of each community, through which the quality of the identified communities is largely improved in linear time. Experimental results on synthetic networks show that the proposed seeding method outperforms other state-of-the-art seeding methods, and the proposed refining method largely enhances the quality of the identified communities.
Experimental results on real graphs with ground-truth communities show that the proposed approach outperforms other state-of-the-art overlapping community detection algorithms; in particular, it is more than two orders of magnitude faster than existing global algorithms while achieving higher quality, and it obtains a much more accurate community structure than current local algorithms without any a priori information.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS.971a2047I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS.971a2047I"><span>Semantic text relatedness on Al-Qur’an translation using modified path based method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Irwanto, Yudi; Arif Bijaksana, Moch; Adiwijaya</p> <p>2018-03-01</p> <p>Abdul Baquee Muhammad [1] built a corpus containing the AlQur’an domain, WordNet and a dictionary, initiating the development of knowledge about AlQur’an and about the relatedness between texts in AlQur’an. The path-based measurement method proposed by Liu, Zhou and Zheng [3] has never been used in the AlQur’an domain. In this research, the method is tested on an AlQur’an translation dataset to obtain similarity values and to measure their correlation. This study proposes using the degree value to modify the path-based method proposed in previous research. The degree value is the number of links owned by an lcs (lowest common subsumer) node in a taxonomy; the links owned by a node represent the semantic relationships that the node has in the taxonomy.
Using the degree value to modify the path-based method proposed in previous research is expected to increase the obtained correlation value. In experiments with the proposed method, fairly good results were obtained on 200 word pairs derived from the noun POS of SimLex-999: the correlation value is 93.3%, which indicates a very strong correlation. For POS other than nouns, however, the vocabulary in WordNet is incomplete, so many word pairs have a similarity value of zero and the correlation value is low.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920010162','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920010162"><span>Practical theories for service life prediction of critical aerospace structural components</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ko, William L.; Monaghan, Richard C.; Jackson, Raymond H.</p> <p>1992-01-01</p> <p>A new second-order theory was developed for predicting the service lives of aerospace structural components. The predictions based on this new theory were compared with those based on the Ko first-order theory and the classical theory of service life predictions. The new theory gives very accurate service life predictions. An equivalent constant-amplitude stress cycle method was proposed for representing the random load spectrum for crack growth calculations. This method predicts the most conservative service life.
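A minimal sketch of a path-based taxonomy similarity of the kind the study modifies: similarity decays with the path length through the lowest common subsumer (lcs). The dict-based toy taxonomy is purely illustrative, and the authors' exact degree-value weighting of the lcs node is not reproduced here.

```python
def path_similarity(taxonomy, a, b):
    """Path-based similarity sketch: sim = 1 / (path length via lcs + 1).

    taxonomy maps each node to its parent (a tree of hypernyms). The paper
    additionally weights the lcs by its degree (number of taxonomy links);
    that modification is the authors' and is omitted in this sketch.
    """
    def ancestors(node):
        chain = [node]
        while node in taxonomy:
            node = taxonomy[node]
            chain.append(node)
        return chain

    pa, pb = ancestors(a), ancestors(b)
    lcs = next(n for n in pa if n in pb)       # lowest common subsumer
    dist = pa.index(lcs) + pb.index(lcs)       # path length through lcs
    return 1.0 / (dist + 1)

# Toy taxonomy: child -> parent.
tax = {"cat": "feline", "feline": "animal", "dog": "canine", "canine": "animal"}
assert path_similarity(tax, "cat", "dog") == 1 / 5   # path cat-feline-animal-canine-dog
assert path_similarity(tax, "cat", "cat") == 1.0     # identical terms
```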
The proposed use of the minimum detectable crack size, instead of the proof-load-established crack size, as the initial crack size for crack growth calculations could give a more realistic service life.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4452086','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4452086"><span>A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia</p> <p>2015-01-01</p> <p>Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration.
We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP)—a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. 
PMID:26030738</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20487532','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20487532"><span>Conduct of a personal radiofrequency electromagnetic field measurement study: proposed study protocol.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Röösli, Martin; Frei, Patrizia; Bolte, John; Neubauer, Georg; Cardis, Elisabeth; Feychting, Maria; Gajsek, Peter; Heinrich, Sabine; Joseph, Wout; Mann, Simon; Martens, Luc; Mohler, Evelyn; Parslow, Roger C; Poulsen, Aslak Harbo; Radon, Katja; Schüz, Joachim; Thuroczy, György; Viel, Jean-François; Vrijheid, Martine</p> <p>2010-05-20</p> <p>The development of new wireless communication technologies that emit radio frequency electromagnetic fields (RF-EMF) is ongoing, but little is known about the RF-EMF exposure distribution in the general population. Previous attempts to measure personal exposure to RF-EMF have used different measurement protocols and analysis methods making comparisons between exposure situations across different study populations very difficult. As a result, observed differences in exposure levels between study populations may not reflect real exposure differences but may be in part, or wholly due to methodological differences. The aim of this paper is to develop a study protocol for future personal RF-EMF exposure studies based on experience drawn from previous research. Using the current knowledge base, we propose procedures for the measurement of personal exposure to RF-EMF, data collection, data management and analysis, and methods for the selection and instruction of study participants. We have identified two basic types of personal RF-EMF measurement studies: population surveys and microenvironmental measurements. 
In the case of a population survey, the unit of observation is the individual and a randomly selected representative sample of the population is needed to obtain reliable results. For microenvironmental measurements, study participants are selected in order to represent typical behaviours in different microenvironments. These two study types require different methods and procedures. Applying our proposed common core procedures in future personal measurement studies will allow direct comparisons of personal RF-EMF exposures in different populations and study areas.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3231206','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3231206"><span>Effective Fingerprint Quality Estimation for Diverse Capture Sensors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun</p> <p>2010-01-01</p> <p>Recognizing the quality of fingerprints in advance can be beneficial for improving the performance of fingerprint recognition systems. The representative features to assess the quality of fingerprint images from different types of capture sensors are known to vary. In this paper, an effective quality estimation system that can be adapted for different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality and consistency. The proposed system extracts basic features, and generates next level features which are applicable for various types of capture sensors. The system then uses the Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. 
The experimental results show that the proposed method can perform better than previous methods in terms of accuracy. Meanwhile, the proposed method has the ability to eliminate residue images from optical and capacitive sensors, and coarse images from thermal sensors. PMID:22163632</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4699322','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4699322"><span>Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.</p> <p>2015-01-01</p> <p>While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination.
This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10494E..59S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10494E..59S"><span>Body surface detection method for photoacoustic image data using cloth-simulation technique</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sekiguchi, H.; Yoshikawa, A.; Matsumoto, Y.; Asao, Y.; Yagi, T.; Togashi, K.; Toi, M.</p> <p>2018-02-01</p> <p>Photoacoustic tomography (PAT) is a novel modality that can visualize blood vessels without contrast agents. It clearly shows blood vessels near the body surface. However, these vessels obstruct the observation of deep blood vessels. As the existence range of each vessel is determined by the distance from the body surface, they can be separated if the position of the skin is known. However, skin tissue, which does not contain hemoglobin, does not appear in PAT results, therefore, manual estimation is required. As this task is very labor-intensive, its automation is highly desirable. 
Therefore, we developed a method to estimate the body surface using the cloth-simulation technique, a method commonly used to create computer graphics (CG) animations that has not yet been employed for medical image processing. In cloth simulations, the virtual cloth is represented by a two-dimensional array of mass nodes connected to each other by springs. Once the cloth is released from a position away from the body, each node begins to move downwards under the effect of gravity, spring, and other forces; some of the nodes hit the superficial vessels and stop. The cloth position in the stationary state represents the body surface. The body surface estimation, which required approximately 1 h with the manual method, is automated and takes only approximately 10 s with the proposed method. The proposed method could facilitate the practical use of PAT.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013SPIE.8791E..1NA','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013SPIE.8791E..1NA"><span>Optical-electronic system for express analysis of mineral raw materials dressability by color sorting method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alekhin, Artem A.; Gorbunova, Elena V.; Chertov, Aleksandr N.; Petuhova, Darya B.</p> <p>2013-04-01</p> <p>Owing to the depletion of solid mineral ore reserves and the increasing involvement of poor and refractory ores in production, minerals are continuously rising in price. At present, optical sorters from various firms are well represented on the enrichment-equipment market.
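The drop-and-collide loop of the cloth simulation described above can be sketched in one dimension: nodes fall under gravity, spring forces pull each node toward its neighbours, and nodes stop where they meet an obstacle (a superficial vessel). The step sizes, the stiffness constant, and the 1-D reduction are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def drop_cloth(obstacle_height, stiffness=0.5, n_iter=500):
    """1-D sketch of the cloth drop: each node falls until it rests on
    an obstacle, while spring smoothing keeps neighbouring nodes from
    separating, so the cloth drapes over gaps between vessels.
    """
    n = len(obstacle_height)
    y = np.full(n, float(max(obstacle_height)) + 1.0)  # start above the body
    for _ in range(n_iter):
        y -= 0.05                                      # gravity step
        # spring force: pull each interior node toward its neighbours' mean
        smoothed = y.copy()
        smoothed[1:-1] += stiffness * (0.5 * (y[:-2] + y[2:]) - y[1:-1])
        y = smoothed
        y = np.maximum(y, obstacle_height)             # collide and stop
    return y

# A single tall vessel: the cloth rests on it and drapes down each side.
obstacle = np.zeros(11)
obstacle[5] = 1.0
surface = drop_cloth(obstacle)
assert surface[5] >= 1.0
assert surface[0] < surface[5]
```

The stationary `surface` array plays the role of the estimated body surface; vessels deeper than it can then be separated by their distance below it.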
All these sorters differ essentially from each other in productivity, in the particle-size classes of the processed raw material, in the nuances of the decision algorithm, and in the color model (RGB, YUV, HSB, etc.) chosen to describe the color of the separated mineral samples. At the same time, there is no method for estimating the dressability of mineral raw materials without direct semi-industrial tests on an existing type of optical sorter, nor is there equipment realizing such a dressability estimation method. The lack of criteria for choosing one or another manufacturer (or type) of optical sorter should also be noted. A direct consequence of this situation is the "opacity" of the color sorting method and its rejection by potential customers. The proposed solution to these problems is to develop a dressability estimation method and to create an optical-electronic system for express analysis of mineral raw material dressability by the color sorting method. This paper describes the structural organization and operating principles of an experimental model of such an optical-electronic system for express analysis of mineral raw materials. It also presents results comparing the proposed optical-electronic system with a real color sorter.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26912276','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26912276"><span>Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Watanabe, Leandro; Myers, Chris J</p> <p>2016-08-19</p> <p>The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems.
Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. 
Results show that there are memory benefits using this approach with a modest cost in runtime.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li class="active"><span>15</span></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_15 --> <div id="page_16" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li class="active"><span>16</span></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="301"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28557607','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28557607"><span>A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua</p> <p>2017-07-01</p> <p>Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. 
A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MSSP...95..158Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MSSP...95..158Y"><span>Discriminative non-negative matrix factorization (DNMF) and its application to the fault diagnosis of diesel engine</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Yong-sheng; Ming, An-bo; Zhang, You-yun; Zhu, Yong-sheng</p> <p>2017-10-01</p> <p>Diesel engines, widely used in engineering, are very important for the running of equipments and their fault diagnosis have attracted much attention. In the past several decades, the image based fault diagnosis methods have provided efficient ways for the diesel engine fault diagnosis. 
By introducing class information into traditional non-negative matrix factorization (NMF), an improved NMF algorithm named discriminative NMF (DNMF) was developed, and a novel image-based fault diagnosis method was proposed by combining the DNMF with a KNN classifier. Experiments on the fault diagnosis of a diesel engine were used to validate the efficacy of the proposed method. It is shown that the fault conditions of a diesel engine can be efficiently classified by the proposed method using the coefficient matrix obtained by DNMF. Compared with the original NMF (ONMF) and principal component analysis (PCA), the DNMF can represent the class information more efficiently because the class characteristics of the basis matrices obtained by the DNMF are more visible than those in the basis matrices obtained by the ONMF and PCA.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10615E..0NZ','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10615E..0NZ"><span>Face recognition via sparse representation of SIFT feature on hexagonal-sampling image</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong</p> <p>2018-04-01</p> <p>This paper investigates a face recognition approach based on Scale Invariant Feature Transform (SIFT) features and sparse representation. The approach takes advantage of SIFT, a local feature, rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since hexagonal images have more inherent merits than square images for making the recognition process more efficient, we extract SIFT keypoints in hexagonal-sampling images.
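For context, here is a minimal sketch of the plain multiplicative-update NMF (Lee and Seung) that DNMF builds on. The discriminative class-information term, which is the paper's contribution, is not reproduced; only the shared non-negative factorization core is shown, on synthetic data.

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Multiplicative-update NMF: V ≈ W @ H with all entries non-negative.

    DNMF, as described in the abstract, adds a class-discriminative term
    to this objective; the plain updates below are the common core.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-9  # avoid division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Exactly rank-4 non-negative data factorizes to small relative error.
rng = np.random.default_rng(1)
V = rng.random((20, 4)) @ rng.random((4, 30))
W, H = nmf(V, rank=4)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
assert rel_err < 0.1
```

In the diagnosis pipeline the columns of `H` (the coefficient matrix) would be the per-sample features fed to the KNN classifier.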
Instead of matching SIFT features directly, first the sparse representation of each SIFT keypoint is computed according to the constructed dictionary; second, these sparse vectors are quantized according to the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Due to the use of local features, the proposed method achieves better results even when the number of training samples is small. In the experiments, the proposed method gave higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of hexagonal sampling in the proposed method is also verified.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JSV...399..308W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JSV...399..308W"><span>Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, H.; Jing, X. J.</p> <p>2017-07-01</p> <p>This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive thresholds are adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions.
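The quantize-and-histogram step of the Bag-of-Words pipeline above can be sketched as follows. Here raw descriptors are assigned to their nearest codewords by Euclidean distance, whereas the paper quantizes sparse representations of SIFT keypoints; the 2-D toy descriptors and codebook are purely illustrative.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors against a codebook and return the
    normalized bag-of-words histogram that represents one image."""
    # squared Euclidean distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                 # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                  # normalize to sum to 1

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
descriptors = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]])
hist = bow_histogram(descriptors, codebook)
assert np.allclose(hist, [1 / 3, 2 / 3])
```

The resulting fixed-length histograms are what the SVM classifier consumes, regardless of how many keypoints each face image produced.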
To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26357374','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26357374"><span>Drawing Road Networks with Mental Maps.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lin, Shih-Syun; Lin, Chao-Hung; Hu, Yan-Jhang; Lee, Tong-Yee</p> <p>2014-09-01</p> <p>Tourist and destination maps are thematic maps designed to represent specific themes in maps. The road network topologies in these maps are generally more important than the geometric accuracy of roads. A road network warping method is proposed to facilitate map generation and improve theme representation in maps. The basic idea is deforming a road network to meet a user-specified mental map while an optimization process is performed to propagate distortions originating from road network warping. To generate a map, the proposed method includes algorithms for estimating road significance and for deforming a road network according to various geometric and aesthetic constraints. The proposed method can produce an iconic mark of a theme from a road network and meet a user-specified mental map. 
Therefore, the resulting map can serve as a tourist or destination map that not only provides visual aids for route planning and navigation tasks, but also visually emphasizes the presentation of a theme for the purpose of advertising. In the experiments, the map generation demonstrations show that our method enables map generation systems to produce deformed tourist and destination maps efficiently.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017TDR.....8..146L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017TDR.....8..146L"><span>A Simple and Automatic Method for Locating Surgical Guide Hole</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Xun; Chen, Ming; Tang, Kai</p> <p>2017-12-01</p> <p>Restoration-driven surgical guides are widely used in implant surgery. This study aims to provide a simple and valid method of automatically locating the surgical guide hole, which can reduce the dependence on the operator's experience and improve the design efficiency and quality of the surgical guide. Little literature can be found on this topic, and this paper proposes a novel and simple method to solve the problem. In this paper, a local coordinate system for each objective tooth is geometrically constructed in a CAD system. This coordinate system well represents the dental anatomical features, and the center axis of the objective tooth (which coincides with the corresponding guide hole axis) can be quickly evaluated in this coordinate system, completing the location of the guide hole.
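The flavor of such a geometric construction can be conveyed with a small sketch (the landmark points are hypothetical and the paper's actual anatomical construction is not reproduced here): given three landmarks on the objective tooth, a right-handed orthonormal frame, standing in for the local coordinate system, follows from normalized difference vectors and cross products.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def local_frame(p0, p1, p2):
    """Right-handed orthonormal frame at landmark p0 (illustrative only):
    x along p0->p1, z normal to the landmark plane, y completing the frame."""
    x = normalize(tuple(b - a for a, b in zip(p0, p1)))
    t = tuple(b - a for a, b in zip(p0, p2))
    z = normalize(cross(x, t))
    y = cross(z, x)
    return x, y, z
```

The guide-hole axis would then be expressed as one axis of such a frame.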
The proposed method has been verified by comparison against two types of benchmarks: manual operation by a skilled doctor with over 15 years of experience (used in most hospitals) and an automatic approach using the popular commercial package Simplant (used in a few hospitals). Both the benchmarks and the proposed method are analyzed in terms of their stress distribution when chewing and biting. The stress distribution is visually shown and plotted as a graph. The results show that the proposed method yields a much better stress distribution than the manual operation and a slightly better one than Simplant, which will significantly reduce the risk of cervical margin collapse and extend the wear life of the restoration.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=pHYSICS&pg=6&id=EJ1162200','ERIC'); return false;" href="https://eric.ed.gov/?q=pHYSICS&pg=6&id=EJ1162200"><span>Teaching the Nature of Physics through Art: A New Art of Teaching</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Colletti, Leonardo</p> <p>2018-01-01</p> <p>Science and art are traditionally represented as two disciplines with completely divergent goals, methods, and audiences. It has been claimed that, if rightly addressed, science and art education could mutually support each other.
In this paper I propose the recurrent reference to certain famous paintings during the ordinary progress of physics…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/FR-2013-06-17/pdf/2013-14278.pdf','FEDREG'); return false;" href="https://www.gpo.gov/fdsys/pkg/FR-2013-06-17/pdf/2013-14278.pdf"><span>78 FR 36291 - Agency Information Collection Activities: Proposed Request and Comment Request</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.gpo.gov/fdsys/browse/collection.action?collectionCode=FR">Federal Register 2010, 2011, 2012, 2013, 2014</a></p> <p></p> <p>2013-06-17</p> <p>... of communication they want SSA to use when we send them benefit notices and other related... SSA to send notices and other communications in an alternative method besides the seven modalities we..., SSA conducts a face-to-face interview with the payee and completes Form SSA-624, Representative Payee...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=science+AND+research+AND+new+AND+ideas&pg=3&id=EJ1128683','ERIC'); return false;" href="https://eric.ed.gov/?q=science+AND+research+AND+new+AND+ideas&pg=3&id=EJ1128683"><span>Profiling of Participants in Chat Conversations Using Creativity-Based Heuristics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Chiru, Costin-Gabriel; Rebedea, Traian</p> <p>2017-01-01</p> <p>This article proposes a new fully automated method for identifying creativity that is manifested in a divergent task. The task is represented by chat conversations in small groups, each group having to debate on the same topics, with the purpose of better understanding the discussed concepts. 
The chat conversations were created by undergraduate…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=308975&Lab=NERL&keyword=time+AND+travel&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50','EPA-EIMS'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=308975&Lab=NERL&keyword=time+AND+travel&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50"><span>2D Time-lapse Seismic Tomography Using An Active Time Constraint (ATC) Approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://oaspub.epa.gov/eims/query.page">EPA Science Inventory</a></p> <p></p> <p></p> <p>We propose a 2D seismic time-lapse inversion approach to image the evolution of seismic velocities over time and space. The forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wave-paths are represented by Fresnel volumes rathe...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/7415','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/7415"><span>Bilinear modelling of cellulosic orthotropic nonlinear materials</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>E.P. Saliklis; T. 
J. Urbanik; B. Tokyay</p> <p>2003-01-01</p> <p>The proposed method of modelling orthotropic solids that have a nonlinear constitutive material relationship affords several advantages. The first advantage is the application of a simple bilinear stress-strain curve to represent the material response on two orthogonal axes as well as in shear, even for markedly nonlinear materials. The second advantage is that this...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED253204.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED253204.pdf"><span>Report on Approaches to Database Translation. Final Report.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Gallagher, Leonard; Salazar, Sandra</p> <p></p> <p>This report describes approaches to database translation (i.e., transferring data and data definitions from a source, either a database management system (DBMS) or a batch file, to a target DBMS), and recommends a method for representing the data structures of newly-proposed network and relational data models in a form suitable for database…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=amino+AND+acid&pg=2&id=EJ1087772','ERIC'); return false;" href="https://eric.ed.gov/?q=amino+AND+acid&pg=2&id=EJ1087772"><span>Computational Exploration of a Protein Receptor Binding Space with Student Proposed Peptide Ligands</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>King, Matthew D.; Phillips, Paul; Turner, Matthew W.; Katz, Michael; Lew, Sarah; Bradburn, Sarah; Andersen, Tim; McDougal, Owen M.</p> <p>2016-01-01</p> 
<p>Computational molecular docking is a fast and effective "in silico" method for the analysis of binding between a protein receptor model and a ligand. The visualization and manipulation of protein to ligand binding in three-dimensional space represents a powerful tool in the biochemistry curriculum to enhance student learning. The…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=202963&keyword=running&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50','EPA-EIMS'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=202963&keyword=running&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50"><span>Development of rapid methods for measuring stream ecosystem functions in the Appalachian coal mining region: preliminary results</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://oaspub.epa.gov/eims/query.page">EPA Science Inventory</a></p> <p></p> <p></p> <p>Headwater streams represent the majority of U.S. stream miles. As a consequence of being abundant and widespread, the alteration and loss of headwater streams may have impacts on downstream waterbodies. 
These streams are frequently the subject of proposed dredge and fill projects...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=semantic+AND+web&pg=6&id=EJ912274','ERIC'); return false;" href="https://eric.ed.gov/?q=semantic+AND+web&pg=6&id=EJ912274"><span>Automatic Generation of Tests from Domain and Multimedia Ontologies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos</p> <p>2011-01-01</p> <p>The aim of this article is to present an approach for generating tests in an automatic way. Although other methods have been already reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018WRCM...28..236B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018WRCM...28..236B"><span>Description of waves in inhomogeneous domains using Heun's equation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bednarik, M.; Cervenka, M.</p> <p>2018-04-01</p> <p>There are a number of model equations describing electromagnetic, acoustic or quantum waves in inhomogeneous domains and some of them are of the same type from the mathematical point of view. This isomorphism enables us to use a unified approach to solving the corresponding equations. In this paper, the inhomogeneity is represented by a trigonometric spatial distribution of a parameter determining the properties of an inhomogeneous domain. 
From the point of view of modeling, this trigonometric parameter function can be smoothly connected to neighboring constant-parameter regions. For this type of distribution, exact local solutions of the model equations are represented by the local Heun functions. The interval for which the solution is sought, however, includes two regular singular points; for this reason, a method is proposed which resolves this problem based only on the local Heun functions. Further, the transfer matrix for the considered inhomogeneous domain is determined by means of the proposed method. As an example of the applicability of the presented solutions, the transmission coefficient is calculated for a locally periodic structure given by an array of asymmetric barriers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5104482','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5104482"><span>Inferring Weighted Directed Association Network from Multivariate Time Series with a Synthetic Method of Partial Symbolic Transfer Entropy Spectrum and Granger Causality</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Hu, Yanzhu; Ai, Xinbo</p> <p>2016-01-01</p> <p>Complex network methodology is very useful for exploring complex systems. However, the relationships among variables in a complex system are usually not clear. Therefore, inferring association networks among variables from their observed data has been a popular research topic. We propose a synthetic method, named small-shuffle partial symbolic transfer entropy spectrum (SSPSTES), for inferring an association network from multivariate time series. The method synthesizes surrogate data, partial symbolic transfer entropy (PSTE) and Granger causality.
Proper threshold selection is crucial for common correlation identification methods, yet it is not easy for users. The proposed method not only identifies strong correlations without threshold selection but also supports correlation quantification, direction identification and temporal relation identification. The method can be divided into three layers, i.e. the data layer, the model layer and the network layer. In the model layer, the method identifies all the possible pair-wise correlations. In the network layer, we introduce a filter algorithm to remove the indirect weak correlations and retain the strong ones. Finally, we build a weighted adjacency matrix, the value of each entry representing the correlation level between pair-wise variables, and then obtain the weighted directed association network. Two numerical simulations, one from a linear system and one from a nonlinear system, illustrate the steps and performance of the proposed approach. Finally, the ability of the proposed method is confirmed in a real application. PMID:27832153</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AdSpR..61..879T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AdSpR..61..879T"><span>Shaping low-thrust trajectories with thrust-handling feature</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Taheri, Ehsan; Kolmanovsky, Ilya; Atkins, Ella</p> <p>2018-02-01</p> <p>Shape-based methods are becoming popular in low-thrust trajectory optimization due to their fast computation speeds. In existing shape-based methods, constraints are treated at the acceleration level but not at the thrust level. These two constraint types are not equivalent since spacecraft mass decreases over time as fuel is expended.
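The non-equivalence of acceleration-level and thrust-level constraints is easy to see in a toy sketch (made-up numbers, not the paper's formulation): since T = m(t)·a and m(t) shrinks as propellant is burned, an acceleration profile that looks fine on its own can still violate a thrust bound early in the mission, when the spacecraft is heaviest.

```python
def mass_at(t, m0, mdot):
    """Spacecraft mass under a constant propellant flow rate."""
    return m0 - mdot * t

def thrust_feasible(accels, times, m0, mdot, t_max):
    """Check a commanded acceleration profile against a thrust bound
    T = m(t) * a; an acceleration-only check would miss the early,
    heavy-mass violations."""
    return all(mass_at(t, m0, mdot) * a <= t_max
               for t, a in zip(times, accels))
```

The same constant acceleration can be infeasible under one thrust bound and feasible under a slightly larger one, purely because of the time-varying mass.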
This paper develops a shape-based method based on a Fourier series approximation that is capable of representing trajectories defined in spherical coordinates and that enforces thrust constraints. An objective function can be incorporated to minimize overall mission cost, i.e., achieve minimum ΔV. A representative mission from Earth to Mars is studied. The proposed Fourier series technique is demonstrated to be capable of generating feasible and near-optimal trajectories. These attributes can facilitate future low-thrust mission designs where different trajectory alternatives must be rapidly constructed and evaluated.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24212035','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24212035"><span>Novel and efficient tag SNPs selection algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling</p> <p>2014-01-01</p> <p>SNPs are the most abundant form of genetic variation amongst species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm can achieve better performance than the existing tag SNP selection algorithms; in most cases, this proposed algorithm is at least ten times faster than the existing methods.
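The abstract does not spell out the selection algorithm itself, but the flavor of tag SNP selection can be conveyed with a standard greedy set-cover sketch (an assumption, not the authors' algorithm): each candidate tag covers the SNPs it represents, e.g. those in high LD with it, and tags are picked to cover the most still-unrepresented SNPs.

```python
def greedy_tag_snps(coverage):
    """coverage: {snp: set of SNPs it represents (including itself)}.
    Greedy set cover: repeatedly pick the SNP that covers the most
    still-uncovered SNPs, until every SNP is represented."""
    uncovered = set().union(*coverage.values())
    tags = []
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        tags.append(best)
        uncovered -= coverage[best]
    return tags
```

Greedy set cover is a common baseline here; the paper's contribution is making this kind of selection fast on large haplotype blocks.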
In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, are also developed using the proposed algorithm as computation kernels.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28952956','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28952956"><span>Discriminative Structured Dictionary Learning on Grassmann Manifolds and Its Application on Image Restoration.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pan, Han; Jing, Zhongliang; Qiao, Lingfeng; Li, Minzhe</p> <p>2017-09-25</p> <p>Image restoration is a difficult and challenging problem in various imaging applications. However, despite the benefits of a single overcomplete dictionary, there are still several challenges in capturing the geometric structure of the image of interest. To more accurately represent the local structures of the underlying signals, we propose a new problem formulation for sparse representation with a block-orthogonal constraint. There are three contributions. First, a framework for discriminative structured dictionary learning is proposed, which leads to a smooth manifold structure and quotient search spaces. Second, an alternating minimization scheme is proposed after taking both the cost function and the constraints into account. This is achieved by iteratively alternating between updating the block structure of the dictionary defined on the Grassmann manifold and sparsifying the dictionary atoms automatically. Third, Riemannian conjugate gradient is used to track local subspaces efficiently with a convergence guarantee.
Extensive experiments on various datasets demonstrate that the proposed method outperforms the state-of-the-art methods on the removal of mixed Gaussian-impulse noise.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li class="active"><span>16</span></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_16 --> <div id="page_17" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li class="active"><span>17</span></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="321"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25331027','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25331027"><span>A framework for the social valuation of ecosystem services.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Felipe-Lucia, María R; Comín, Francisco A; Escalera-Reyes, Javier</p> <p>2015-05-01</p> <p>Methods to assess ecosystem services using ecological or economic approaches are 
considerably better defined than methods for the social approach. To identify why the social approach remains unclear, we reviewed current trends in the literature. We found two main reasons: (i) cultural ecosystem services are usually used to represent the whole social approach, and (ii) economic valuation based on social preferences is typically included in the social approach. Next, we proposed a framework for the social valuation of ecosystem services that provides alternatives to economic methods, enables comparison across studies, and supports decision-making in land planning and management. The framework includes the agreements that emerged from the review, such as considering spatial-temporal flows, including stakeholders from all social ranges, and using two complementary methods to value ecosystem services. Finally, we provided practical recommendations learned from the application of the proposed framework in a case study.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JChPh.141p4109O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JChPh.141p4109O"><span>Exact stochastic unraveling of an optical coherence dynamics by cumulant expansion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Olšina, Jan; Kramer, Tobias; Kreisbeck, Christoph; Mančal, Tomáš</p> <p>2014-10-01</p> <p>A numerically exact Monte Carlo scheme for the calculation of open quantum system dynamics is proposed and implemented. The method consists of a Monte Carlo summation of a perturbation expansion in terms of trajectories in Liouville phase-space with respect to the coupling between the excited states of the molecule. The trajectories are weighted by a complex decoherence factor based on the second-order cumulant expansion of the environmental evolution.
The method can be used with an arbitrary environment characterized by a general correlation function and arbitrary coupling strength. It is formally exact for harmonic environments, and it can be used at arbitrary temperature. Time evolution of an optically excited Frenkel exciton dimer representing a molecular exciton interacting with a charge transfer state is calculated by the proposed method. We calculate the evolution of the optical coherence elements of the density matrix and the linear absorption spectrum, and compare them with the predictions of standard simulation methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ISPAr42W4..111K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ISPAr42W4..111K"><span>Automatic Centerline Extraction of Coverd Roads by Surrounding Objects from High Resolution Satellite Images</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kamangir, H.; Momeni, M.; Satari, M.</p> <p>2017-09-01</p> <p>This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. The present paper addresses the automated extraction of roads covered with multiple natural and artificial objects, such as trees, vehicles, and the shadows of buildings or trees. In order to achieve precise road extraction, this method implements three stages: classification of the images with a maximum likelihood algorithm to categorize them into the classes of interest; a modification process on the classified images using connected components and morphological operators to extract the pixels of desired objects by removing the undesirable pixels of each class; and, finally, line extraction based on the RANSAC algorithm.
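The RANSAC line-extraction stage can be sketched as follows (a generic RANSAC line fitter, not the paper's tuned implementation; the tolerance and iteration count are assumptions): sample point pairs, count the points within a distance tolerance of the implied line, and keep the largest consensus set.

```python
import random

def fit_line_ransac(points, iters=200, tol=1.0, seed=0):
    """Fit a 2D line with RANSAC: sample point pairs, count inliers
    within tol of the line ax + by + c = 0, keep the consensus set."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue  # degenerate sample
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```

In the road-extraction setting the input points would be the remaining road-class pixels, and the consensus set approximates one road segment.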
In order to evaluate the performance of the proposed method, the generated results are compared with a ground-truth road map as a reference. Evaluation of the proposed method on representative test images shows completeness values ranging between 77% and 93%.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011IJE....98.1305Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011IJE....98.1305Z"><span>Uncertain decision tree inductive inference</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zarban, L.; Jafari, S.; Fakhrahmad, S. M.</p> <p>2011-10-01</p> <p>Induction is the process of reasoning in which general rules are formulated based on limited observations of recurring phenomenal patterns. Decision tree learning is one of the most widely used and practical inductive methods, which represents the results in a tree scheme. Various decision tree algorithms have already been proposed, such as CLS, ID3, Assistant, C4.5, REPTree and Random Tree. These algorithms suffer from some major shortcomings. In this article, after discussing the main limitations of the existing methods, we introduce a new decision tree induction algorithm, which overcomes all the problems existing in its counterparts. The new method uses bit strings and maintains the important information on them; logical operations on these bit strings make the induction process fast. The method also has several important features: it deals with inconsistencies in the data, avoids overfitting and handles uncertainty. We also illustrate more advantages and the new features of the proposed method.
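The bit-string idea can be illustrated with integer bitmasks (an illustrative sketch, not the article's algorithm; the helper names are invented): each boolean column becomes one integer, and the class counts needed to score a split reduce to bitwise ANDs plus popcounts instead of row scans.

```python
def to_mask(flags):
    """Pack a boolean column (one bit per sample) into an integer."""
    m = 0
    for i, f in enumerate(flags):
        if f:
            m |= 1 << i
    return m

def popcount(m):
    """Number of set bits, i.e. number of samples present."""
    return bin(m).count("1")

def split_counts(attr_mask, class_mask, n):
    """Class counts on both branches of a binary split over n samples,
    computed purely with bitwise operations."""
    full = (1 << n) - 1
    left, right = attr_mask, full & ~attr_mask
    neg_class = full & ~class_mask
    return ((popcount(left & class_mask), popcount(left & neg_class)),
            (popcount(right & class_mask), popcount(right & neg_class)))
```

From these counts an impurity measure such as information gain follows directly, which is where the speed advantage of bit strings shows up.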
The experimental results show the effectiveness of the method in comparison with other methods in the literature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1560804','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1560804"><span>A Method for Automated Detection of Usability Problems from Client User Interface Events</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.</p> <p>2005-01-01</p> <p>Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method.
PMID:16779121</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26079493','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26079493"><span>Interpolation of orientation distribution functions in diffusion weighted imaging using multi-tensor model.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Afzali, Maryam; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid</p> <p>2015-09-30</p> <p>Diffusion weighted imaging (DWI) is a non-invasive method for investigating the brain white matter structure and can be used to evaluate fiber bundles. However, due to practical constraints, DWI data acquired in clinics are low resolution. This paper proposes a method for interpolation of orientation distribution functions (ODFs). To this end, fuzzy clustering is applied to segment ODFs based on the principal diffusion directions (PDDs). Next, a cluster is modeled by a tensor so that an ODF is represented by a mixture of tensors. For interpolation, each tensor is rotated separately. The method is applied to synthetic and real DWI data of control and epileptic subjects. Both experiments illustrate the capability of the method to properly increase the spatial resolution of the data in the ODF field. The real dataset shows that the method is capable of reliably identifying differences between temporal lobe epilepsy (TLE) patients and normal subjects. The method is compared to existing methods. Comparison studies show that the proposed method generates smaller angular errors relative to the existing methods. Another advantage of the method is that it does not require an iterative algorithm to find the tensors. The proposed method is appropriate for increasing resolution in the ODF field and can be applied to clinical data to improve the evaluation of white matter fibers in the brain.
Copyright © 2015 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26120277','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26120277"><span>A Method to Represent Heterogeneous Materials for Rapid Prototyping: The Matryoshka Approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lei, Shuangyan; Frank, Matthew C; Anderson, Donald D; Brown, Thomas D</p> <p></p> <p>The purpose of this paper is to present a new method for representing heterogeneous materials using nested STL shells, based, in particular, on the density distributions of human bones. Nested STL shells, called Matryoshka models, are described, based on their namesake Russian nesting dolls. In this approach, polygonal models, such as STL shells, are "stacked" inside one another to represent different material regions. The Matryoshka model addresses the challenge of representing different densities and different types of bone when reverse engineering from medical images. The Matryoshka model is generated via an iterative process of thresholding the Hounsfield Unit (HU) data using computed tomography (CT), thereby delineating regions of progressively increasing bone density. These nested shells can represent regions starting with the medullary (bone marrow) canal, up through and including the outer surface of the bone. The Matryoshka approach introduced can be used to generate accurate models of heterogeneous materials in an automated fashion, avoiding the challenge of hand-creating an assembly model for input to multi-material additive or subtractive manufacturing. This paper presents a new method for describing heterogeneous materials: in this case, the density distribution in a human bone. 
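The iterative HU-thresholding step that produces the nested shells can be sketched as follows; the HU cutoff values and the toy 1-D "CT line" are illustrative, not clinically calibrated.

```python
import numpy as np

# Sketch of generating nested "Matryoshka" regions by thresholding Hounsfield
# Unit (HU) data at progressively higher density levels. Each higher-threshold
# mask is contained in the previous one, giving the nesting property.

def nested_masks(hu_volume, thresholds):
    """Return one boolean mask per threshold, from least to most dense."""
    return [hu_volume >= t for t in sorted(thresholds)]

# Toy 1-D profile through a long bone: dense cortex outside, marrow canal inside.
hu = np.array([1200, 800, 300, 100, 300, 800, 1200])
masks = nested_masks(hu, thresholds=[200, 700, 1100])
for t, m in zip([200, 700, 1100], masks):
    print(t, m.astype(int))
```

Each mask's boundary would then be meshed into an STL shell, and stacking the shells yields the assembly-free heterogeneous model described above.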
The authors show how the Matryoshka model can be used to plan harvesting locations for creating custom rapid allograft bone implants from donor bone. An implementation of a proposed harvesting method is demonstrated, followed by a case study using subtractive rapid prototyping to harvest a bone implant from a human tibia surrogate.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AJ....153..181S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AJ....153..181S"><span>The Proposal Auto-Categorizer and Manager for Time Allocation Review at the Space Telescope Science Institute</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Strolger, Louis-Gregory; Porter, Sophia; Lagerstrom, Jill; Weissman, Sarah; Reid, I. Neill; Garcia, Michael</p> <p>2017-04-01</p> <p>The Proposal Auto-Categorizer and Manager (PACMan) tool was written to respond to concerns about subjective flaws and potential biases in some aspects of the proposal review process for time allocation for the Hubble Space Telescope (HST), and to partially alleviate some of the anticipated additional workload from the James Webb Space Telescope (JWST) proposal review. PACMan is essentially a mixed-method Naive Bayesian spam filtering routine, with multiple pools representing scientific categories, that utilizes the Robinson method for combining token (or word) probabilities. PACMan was trained to make similar programmatic decisions in science category sorting, panelist selection, and proposal-to-panelists assignments to those made by individuals and committees in the Science Policies Group (SPG) at the Space Telescope Science Institute. Based on training from the previous cycle’s proposals, PACMan made the same science category assignments as the SPG for Cycle 24 proposals at an average rate of 87%.
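The Robinson method for combining per-token probabilities, named above, can be sketched as follows: it geometric-means the token "spamminess" and "haminess" and folds them into a single indicator in [0, 1]. The token probabilities below are made up for illustration.

```python
import math

# Sketch of the Robinson method for combining per-token probabilities, as used
# in Bayesian spam filters: P and Q are geometric-mean aggregates, and the
# returned indicator is 0.5 when the evidence is perfectly balanced.

def robinson_combine(probs):
    n = len(probs)
    P = 1.0 - math.prod(1.0 - p for p in probs) ** (1.0 / n)  # "spamminess"
    Q = 1.0 - math.prod(probs) ** (1.0 / n)                   # "haminess"
    return (1.0 + (P - Q) / (P + Q)) / 2.0                    # indicator in [0, 1]

print(round(robinson_combine([0.9, 0.8, 0.7]), 3))  # well above 0.5
print(round(robinson_combine([0.5, 0.5, 0.5]), 3))  # balanced evidence -> 0.5
```

In a multi-pool classifier like the one described, one such score per science-category pool would be computed and the highest-scoring category chosen.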
Tests for similar science categorizations, based on training using proposals from additional cycles, show that this accuracy can be further improved, to the >95% level. This tool will be used to augment or replace key functions in the Time Allocation Committee review processes in future HST and JWST cycles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23706527','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23706527"><span>A new wavelet transform to sparsely represent cortical current densities for EEG/MEG inverse problems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liao, Ke; Zhu, Min; Ding, Lei</p> <p>2013-08-01</p> <p>The present study investigated the use of transform sparseness of cortical current density on human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built by three EEG/MEG software packages and their structural compressibility was evaluated and compared by the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to two other available vertex-based wavelet methods.
The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both the high transform sparseness and the low coherence with measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure promises to significantly advance EEG/MEG inverse source imaging technologies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900007562','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900007562"><span>A finite element-boundary integral formulation for scattering by three-dimensional cavity-backed apertures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jin, Jian-Ming; Volakis, John L.</p> <p>1990-01-01</p> <p>A numerical technique is proposed for the electromagnetic characterization of the scattering by a three-dimensional cavity-backed aperture in an infinite ground plane. The technique combines the finite element and boundary integral methods to formulate a system of equations for the solution of the aperture fields and those inside the cavity. Specifically, the finite element method is employed to formulate the fields in the cavity region and the boundary integral approach is used in conjunction with the equivalence principle to represent the fields above the ground plane.
Unlike traditional approaches, the proposed technique does not require knowledge of the cavity's Green's function and is, therefore, applicable to arbitrarily shaped depressions and material fillings. Furthermore, the proposed formulation leads to a system having a partly full and partly sparse as well as symmetric and banded matrix which can be solved efficiently using special algorithms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19163564','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19163564"><span>Inhomogeneity compensation for MR brain image segmentation using a multi-stage FCM-based approach.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Szilágyi, László; Szilágyi, Sándor M; Dávid, László; Benyó, Zoltán</p> <p>2008-01-01</p> <p>Intensity inhomogeneity or intensity non-uniformity (INU) is an undesired phenomenon that represents the main obstacle for MR image segmentation and registration methods. Various techniques have been proposed to eliminate or compensate for the INU, most of which are embedded into clustering algorithms. This paper proposes a multiple stage fuzzy c-means (FCM) based algorithm for the estimation and compensation of the slowly varying additive or multiplicative noise, supported by a pre-filtering technique for Gaussian and impulse noise elimination. The slowly varying behavior of the bias or gain field is assured by a smoothing filter that performs context-dependent averaging, based on a morphological criterion. The experiments using 2-D synthetic phantoms and real MR images show that the proposed method provides accurate segmentation.
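The core fuzzy c-means iteration underlying this kind of approach can be sketched on 1-D intensities; the bias-field estimation and morphological smoothing stages of the multi-stage algorithm are omitted, and the data are toy values.

```python
import numpy as np

# Minimal 1-D fuzzy c-means sketch: alternate between the standard membership
# update u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)) and the weighted centroid
# update. Only the clustering core is shown, not the INU compensation stages.

def fcm(x, c=2, m=2.0, iters=50):
    centers = np.linspace(x.min(), x.max(), c)              # deterministic init
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12   # c x n distances
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        centers = (u ** m @ x) / np.sum(u ** m, axis=1)     # fuzzy centroids
    return centers, u

# Two intensity clusters, e.g. "tissue" vs. "background" voxel values.
x = np.array([0.10, 0.20, 0.15, 0.90, 1.00, 0.95])
centers, u = fcm(x)
print(np.round(np.sort(centers), 2))  # centers settle near 0.15 and 0.95
```

In the paper's multi-stage scheme, a bias/gain field estimated between such clustering passes would rescale `x` before the next pass.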
The produced segmentation and fuzzy membership values can serve as excellent support for 3-D registration and segmentation techniques.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JCoPh.361..377Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JCoPh.361..377Z"><span>An approach for maximizing the smallest eigenfrequency of structure vibration based on piecewise constant level set method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Zhengfang; Chen, Weifeng</p> <p>2018-05-01</p> <p>Maximization of the smallest eigenfrequency of the linearized elasticity system with area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to represent two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to the PCLS function takes a nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of the original material region until the area constraint is satisfied.
2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10134E..3FL','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10134E..3FL"><span>Computer-assisted quantification of the skull deformity for craniosynostosis from 3D head CT images using morphological descriptor and hierarchical classification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lee, Min Jin; Hong, Helen; Shim, Kyu Won; Kim, Yong Oock</p> <p>2017-03-01</p> <p>This paper proposes morphological descriptors representing the degree of skull deformity for craniosynostosis in head CT images and a hierarchical classifier model distinguishing between normal skulls and different types of craniosynostosis. First, to compare the deformity surface model with a mean normal surface model, mean normal surface models are generated for each age range and the mean normal surface model is deformed to the deformity surface model via multi-level three-stage registration. Second, four shape features including local distance and area ratio indices are extracted in each of the five cranial bones. Finally, a hierarchical SVM classifier is proposed to distinguish between normal and deformed skulls. As a result, the proposed method showed improved classification results compared to the traditional cranial index.
Our method can be used for the early diagnosis, surgical planning and postsurgical assessment of craniosynostosis as well as quantitative analysis of skull deformity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015MS%26E..103a2004S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015MS%26E..103a2004S"><span>Analysis of the influence of advanced materials for aerospace products R&D and manufacturing cost</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shen, A. W.; Guo, J. L.; Wang, Z. J.</p> <p>2015-12-01</p> <p>In this paper, we point out the deficiencies of traditional cost estimation models for aerospace product Research & Development (R&D) and manufacturing, based on an analysis of the wide use of advanced materials in aviation products. We then propose estimating formulas for cost factors that represent the influence of advanced materials on the labor cost rate and the manufacturing materials cost rate. Value ranges for common advanced materials, such as composite materials and titanium alloys, are presented for both the labor and materials aspects. Finally, we estimate the R&D and manufacturing cost of the F/A-18, F/A-22, B-1B and B-2 aircraft based on the common DAPCA IV model and the modified model proposed in this paper. The calculation results show that the proposed method, which accounts for advanced materials, greatly improves estimation precision.
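The idea of material-dependent cost factors can be sketched as a weighted adjustment of a baseline estimate; the multiplier values and material mix below are invented placeholders, not the paper's calibrated ranges or the DAPCA IV coefficients.

```python
# Hedged sketch of applying material-dependent cost factors to a baseline
# labor/materials cost estimate. The factor values are hypothetical.

MATERIAL_FACTORS = {
    "aluminum":  {"labor": 1.0, "material": 1.0},   # conventional baseline
    "titanium":  {"labor": 1.4, "material": 2.0},   # harder to machine, pricier stock
    "composite": {"labor": 1.8, "material": 2.5},   # layup-intensive
}

def adjusted_cost(base_labor, base_material, material_mix):
    """material_mix: {material_name: fraction of airframe weight}, summing to 1."""
    labor_f = sum(MATERIAL_FACTORS[m]["labor"] * f for m, f in material_mix.items())
    material_f = sum(MATERIAL_FACTORS[m]["material"] * f for m, f in material_mix.items())
    return base_labor * labor_f + base_material * material_f

conventional = adjusted_cost(100.0, 50.0, {"aluminum": 1.0})
advanced = adjusted_cost(100.0, 50.0, {"aluminum": 0.4, "titanium": 0.3, "composite": 0.3})
print(conventional, advanced)  # the advanced-material airframe costs more
```

Weight-fraction-weighted factors of this kind are one simple way to fold material choice into a parametric model whose baseline assumes a conventional aluminum airframe.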
These results indicate that the proposed method is scientific and reasonable.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17431296','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17431296"><span>Matching by linear programming and successive convexification.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jiang, Hao; Drew, Mark S; Li, Ze-Nian</p> <p>2007-06-01</p> <p>We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the searching space. A successive convexification scheme solves the labeling problem in a coarse to fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the searching result. This makes the method well-suited for large label set matching.
Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1253986','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1253986"><span>Active Learning Framework for Non-Intrusive Load Monitoring: Preprint</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Jin, Xin</p> <p>2016-05-16</p> <p>Non-Intrusive Load Monitoring (NILM) is a set of techniques that estimate the electricity usage of individual appliances from power measurements taken at a limited number of locations in a building. One of the key challenges in NILM is having too much data without class labels yet being unable to label the data manually due to cost or time constraints. This paper presents an active learning framework that helps existing NILM techniques to overcome this challenge. Active learning is an advanced machine learning method that interactively queries a user for the class label information. Unlike most existing NILM systems that heuristically request user inputs, the proposed method only needs minimally sufficient information from a user to build a compact and yet highly representative load signature library. Initial results indicate the proposed method can reduce the user inputs by up to 90% while still achieving similar disaggregation performance compared to a heuristic method.
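The query-selection heart of active learning can be sketched with least-confidence sampling: ask the user to label only the events the current classifier is least sure about. The classifier outputs and event data below are toy stand-ins, not the NILM framework's actual models.

```python
import numpy as np

# Sketch of active-learning query selection: rank unlabeled samples by the
# classifier's top-class probability and query the user about the lowest ones.

def least_confident_queries(probabilities, budget):
    """Pick `budget` sample indices whose top-class probability is lowest."""
    confidence = probabilities.max(axis=1)
    return np.argsort(confidence)[:budget]

# Hypothetical class probabilities for five unlabeled power events (3 appliances).
probs = np.array([
    [0.98, 0.01, 0.01],   # confident
    [0.40, 0.35, 0.25],   # uncertain -> worth asking the user
    [0.90, 0.05, 0.05],
    [0.34, 0.33, 0.33],   # most uncertain
    [0.85, 0.10, 0.05],
])
print(least_confident_queries(probs, budget=2))  # indices of the two least confident events
```

Each answered query is added to the load-signature library and the model is retrained, so user effort concentrates where it is most informative.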
Thus, the proposed method can substantially reduce the burden on the user, improve the performance of a NILM system with limited user inputs, and overcome the key market barriers to the wide adoption of NILM technologies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014AcSpA.129..542B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014AcSpA.129..542B"><span>Interference correction by extracting the information of interference dominant regions: Application to near-infrared spectra</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen</p> <p>2014-08-01</p> <p>Interference such as baseline drift and light scattering can degrade the model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually interference can be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light scattering effects and chemical light absorbance effects, making parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference dominant region (IDR). Based on the definition of IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from the IDR. Finally, the correction was applied to the full spectral range using the previously obtained parameters for the calibration set and test set, respectively. The method can be applied to multi-target systems with one IDR suitable for all targeted analytes.
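The additive/multiplicative correction step can be sketched directly: fit the interference model on the IDR channels only, then invert it over the full spectral range. Here the IDR is hand-picked and the reference spectrum synthetic; the paper selects the IDR automatically.

```python
import numpy as np

# Sketch of correcting a spectrum distorted by an additive (baseline) and a
# multiplicative (scattering) factor, with parameters estimated only on an
# interference-dominant region (IDR), i.e. channels with little analyte signal.

def correct(spectrum, reference, idr):
    # Model on the IDR: spectrum[idr] ≈ b * reference[idr] + a.
    b, a = np.polyfit(reference[idr], spectrum[idr], 1)
    # Invert the model over the full spectral range.
    return (spectrum - a) / b

reference = np.linspace(1.0, 2.0, 10)    # ideal interference-free spectrum
distorted = 1.5 * reference + 0.3        # multiplicative + additive interference
idr = slice(0, 4)                        # assumed interference-dominant channels
print(np.allclose(correct(distorted, reference, idr), reference))  # True
```

Because the fit uses only channels where analyte absorbance is negligible, the estimated `a` and `b` capture the interference rather than the chemistry.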
Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement over full-spectrum estimation methods and performance comparable with other state-of-the-art methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21859616','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21859616"><span>Bag-of-features based medical image retrieval via multiple assignment and visual words weighting.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin</p> <p>2011-11-01</p> <p>Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly.
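The reconstruction-based multiple assignment idea can be sketched as follows. The paper solves a constrained quadratic program; as a simplified stand-in, this sketch uses unconstrained least squares over the k nearest visual words and then clips and normalizes the weights, with a toy 2-D vocabulary.

```python
import numpy as np

# Sketch of soft-assigning a local descriptor to multiple visual words via
# reconstruction weights (a least-squares stand-in for the paper's QP).

def soft_assign(descriptor, vocabulary, k=3):
    dists = np.linalg.norm(vocabulary - descriptor, axis=1)
    nearest = np.argsort(dists)[:k]                       # k nearest visual words
    W = vocabulary[nearest].T                             # dim x k word basis
    weights, *_ = np.linalg.lstsq(W, descriptor, rcond=None)
    weights = np.clip(weights, 0, None)                   # keep contributions nonnegative
    return nearest, weights / weights.sum()               # contribution per word

vocab = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
idx, w = soft_assign(np.array([0.5, 0.5]), vocab)
print(idx, np.round(w, 2))  # descriptor's histogram mass is shared among nearby words
```

Compared with hard nearest-neighbor assignment, each descriptor spreads its histogram contribution across the words that best reconstruct it, which is the effect the QP assignment formalizes.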
We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24873889','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24873889"><span>A comparative study of progressive versus successive spectrophotometric resolution techniques applied for pharmaceutical ternary mixtures.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Saleh, Sarah S; Lotfy, Hayam M; Hassan, Nagiba Y; Salem, Hesham</p> <p>2014-11-11</p> <p>This work represents a comparative study of a novel progressive spectrophotometric resolution technique namely, amplitude center method (ACM), versus the well-established successive spectrophotometric resolution techniques namely; successive derivative subtraction (SDS); successive derivative of ratio spectra (SDR) and mean centering of ratio spectra (MCR). All the proposed spectrophotometric techniques consist of several consecutive steps utilizing ratio and/or derivative spectra. The novel amplitude center method (ACM) can be used for the determination of ternary mixtures using single divisor where the concentrations of the components are determined through progressive manipulation performed on the same ratio spectrum. 
Those methods were applied for the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of a pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitation and sensitivity. The obtained results were statistically compared with those obtained from the official BP methods, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140016386','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140016386"><span>Modeling Quasi-Static and Fatigue-Driven Delamination Migration</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>De Carvalho, N. V.; Ratcliffe, J. G.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Tay, T. E.</p> <p>2014-01-01</p> <p>An approach was proposed and assessed for the high-fidelity modeling of progressive damage and failure in composite materials. It combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. Delamination, matrix cracking, and migration were captured using failure and migration criteria based on fracture mechanics. Quasi-static and fatigue loading were modeled within the same overall framework.
The methodology proposed was illustrated by simulating the delamination migration test, showing good agreement with the available experimental data.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013ISPAr.XL4c..63H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013ISPAr.XL4c..63H"><span>Key Spatial Relations-based Focused Crawling (KSRs-FC) for Borderlands Situation Analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hou, D.
Y.; Wu, H.; Chen, J.; Li, R.</p> <p>2013-11-01</p> <p>Place names play an important role in Borderlands Situation topics, while current focused crawling methods treat them in the same way as other common keywords, which may lead to the omission of many useful web pages. In this paper, place names in web pages and their spatial relations are first discussed. Then, a focused crawling method named KSRs-FC was proposed to deal with the collection of situation information about borderlands. In this method, place names and common keywords were represented separately, and some of the spatial relations relevant to web page crawling were used in the relevance calculation between the given topic and web pages. Furthermore, an information collection system for borderlands situation analysis was developed based on KSRs-FC. Finally, the F-Score method was adopted to quantitatively evaluate this method against a traditional method. Experimental results showed that the F-Score value of the proposed method increased by 11% compared to the traditional method on the same sample data. Obviously, the KSRs-FC method can effectively reduce the misjudgment of relevant web pages.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28679235','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28679235"><span>Identifying key nodes in multilayer networks based on tensor decomposition.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Dingjie; Wang, Haitao; Zou, Xiufen</p> <p>2017-06-01</p> <p>The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks.
In this paper, we use the fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar chart and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index to identify real important nodes. Meanwhile, by comparing the behavior of both the proposed method and the aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, the Gene Ontology functional annotation demonstrates that the identified top nodes based on the proposed approach play a significant role in many vital biological processes. 
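The fourth-order tensor representation of a multilayer network can be sketched concretely. EDCPTD itself derives node scores from a CP decomposition; as a simplified stand-in, the sketch below scores nodes by the leading singular vector of the tensor's node-mode unfolding. The toy network is invented for illustration.

```python
import numpy as np

# Sketch of the fourth-order adjacency tensor of a multilayer network:
# T[i, j, k, l] = 1 if node i in layer k links to node j in layer l.
# Node scores here come from the node-mode unfolding's leading singular
# vector, a simple surrogate for the CP-decomposition-based EDCPTD ranking.

def multilayer_tensor(n_nodes, n_layers, edges):
    T = np.zeros((n_nodes, n_nodes, n_layers, n_layers))
    for i, j, k, l in edges:
        T[i, j, k, l] = T[j, i, l, k] = 1.0   # undirected links
    return T

def node_scores(T):
    n = T.shape[0]
    unfolded = T.reshape(n, -1)               # mode-1 (node) unfolding
    u, s, vt = np.linalg.svd(unfolded, full_matrices=False)
    return np.abs(u[:, 0])                    # leading left singular vector

# 3 nodes, 2 layers; node 0 participates in both layers and across them.
edges = [(0, 1, 0, 0), (0, 2, 1, 1), (0, 1, 0, 1)]
scores = node_scores(multilayer_tensor(3, 2, edges))
print(int(np.argmax(scores)))  # node 0 comes out as the most central
```

Aggregating the layers into a single adjacency matrix would discard the inter-layer links encoded in the `k != l` slices, which is exactly the information the abstract argues must not be neglected.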
Finally, we have implemented many centrality methods of multilayer networks (including our method and the published methods) and created a visual software based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017Chaos..27f3108W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017Chaos..27f3108W"><span>Identifying key nodes in multilayer networks based on tensor decomposition</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Dingjie; Wang, Haitao; Zou, Xiufen</p> <p>2017-06-01</p> <p>The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use the fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar chart and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index to identify real important nodes. Meanwhile, by comparing the behavior of both the proposed method and the aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. 
Furthermore, the Gene Ontology functional annotation demonstrates that the identified top nodes based on the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods of multilayer networks (including our method and the published methods) and created a visual software tool based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015VSD....53.1620Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015VSD....53.1620Y"><span>A method for improved accuracy in three dimensions for determining wheel/rail contact points</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang</p> <p>2015-11-01</p> <p>Searching for the contact points between wheels and rails is important because these points represent the points of exerted contact forces. In order to obtain an accurate contact point and an in-depth description of the wheel/rail contact behaviours on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and the contact patches between the wheel and the rail when considering the effect of the yaw angle and the roll angle on the motion of the wheel set. The proposed method, which requires no curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact interface distances between the wheel and the rail. The range iteration algorithm is used to improve the computation efficiency and reduce the computation required.
The present computation method is applied to the contact analysis of Chinese (CHN) 75 kg/m rails and the wearing-type treads of the wheel sets of China's freight cars. In addition, the results of the proposed method are shown to be consistent with those of Kalker's program CONTACT, and the maximum deviation in the wheel/rail contact patch area between the two methods is approximately 5%. The proposed method can also be used to investigate static wheel/rail contact. Several wheel/rail contact points and contact patch distributions are discussed and assessed, including both non-worn and worn wheel and rail profiles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA566412','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA566412"><span>F18 EF5 PET/CT Imaging in Patients with Brain Metastases from Breast Cancer</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2012-07-01</p> <p>been demonstrated to improve local control and survival in select patients after WBRT . At present we do not have any method of determining a priori...relapse after WBRT would represent a significant step forward in the management of patients with brain metastases from breast cancer. We propose to...use a noninvasive imaging method to detect residual tumor hypoxia in patients receiving WBRT . Body: Task 1. To estimate the degree of hypoxia</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS.967a2014S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS.967a2014S"><span>Quantitative data standardization of X-ray based densitometry methods</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sergunova, K. 
A.; Petraikin, A. V.; Petrjajkin, F. A.; Akhmad, K. S.; Semenov, D. S.; Potrakhov, N. N.</p> <p>2018-02-01</p> <p>In the present work, we propose the design of a special liquid phantom for assessing the accuracy of quantitative densitometric data. We also present the dependencies between the measured bone mineral density (BMD) values and the given reference values for different X-ray based densitometry techniques. The linear relationships shown make it possible to introduce correction factors that increase the accuracy of BMD measurement by the QCT, DXA and DECT methods, and to use them for standardization and comparison of measurements.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhyA..494..288L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhyA..494..288L"><span>Modeling influenza-like illnesses through composite compartmental models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Levy, Nir; , Michael, Iv; Yom-Tov, Elad</p> <p>2018-03-01</p> <p>Epidemiological models for the spread of pathogens in a population are usually only able to describe a single pathogen. This makes their application unrealistic in cases where multiple pathogens with similar symptoms are spreading concurrently within the same population. Here we describe a method which makes possible the application of multiple single-strain models under minimal conditions. As such, our method provides a bridge between theoretical models of epidemiology and data-driven approaches for modeling of influenza and other similar viruses. Our model extends the Susceptible-Infected-Recovered model to higher dimensions, allowing the modeling of a population infected by multiple viruses. 
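The compartmental building block involved can be illustrated with a minimal sketch (my own assumptions, not the authors' fitting procedure): a basic SIR model integrated with forward Euler steps, with two hypothetical strains superposed to mimic a multi-pathogen ILI curve.

```python
import numpy as np

def sir_curve(beta, gamma, i0, days, dt=0.1):
    """Integrate a basic SIR model (forward Euler) and return the
    daily fraction of infected people."""
    s, i = 1.0 - i0, i0
    out = []
    steps_per_day = int(round(1 / dt))
    for step in range(int(days / dt)):
        ds = -beta * s * i           # susceptibles become infected
        di = beta * s * i - gamma * i  # infections minus recoveries
        s += ds * dt
        i += di * dt
        if step % steps_per_day == 0:
            out.append(i)
    return np.array(out)

# superpose two hypothetical strains with different R0 = beta/gamma,
# mimicking an observed multi-pathogen ILI time series
ili = 0.6 * sir_curve(0.30, 0.10, 1e-3, 120) \
    + 0.4 * sir_curve(0.45, 0.25, 1e-4, 120)
print(len(ili))
```

The paper's contribution lies in the reverse direction: blindly separating such a superposed series back into single-strain components, which the toy above does not attempt.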
We further provide a method, based on an overcomplete dictionary of feasible realizations of SIR solutions, to blindly partition the time series representing the number of infected people in a population into individual components, each representing the effect of a single pathogen. We demonstrate the applicability of our proposed method on five years of seasonal influenza-like illness (ILI) rates, estimated from Twitter data. We demonstrate that our method describes, on average, 44% of the variance in the ILI time series. The individual infectious components derived from our model are matched to known viral profiles in the population, and we demonstrate that they match independently collected epidemiological data. We further show that the basic reproductive numbers (R0) of the matched components are in the range known for these pathogens. Our results suggest that the proposed method can be applied to other pathogens and geographies, providing a simple method for estimating the parameters of epidemics in a population.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20580744','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20580744"><span>Statistical methods to estimate treatment effects from multichannel electroencephalography (EEG) data in clinical trials.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ma, Junshui; Wang, Shubing; Raubertas, Richard; Svetnik, Vladimir</p> <p>2010-07-15</p> <p>With the increasing popularity of using electroencephalography (EEG) to reveal the treatment effect in drug development clinical trials, the vast volume and complex nature of EEG data compose an intriguing, but challenging, topic. 
In this paper the statistical analysis methods recommended by the EEG community, along with methods frequently used in the published literature, are first reviewed. A straightforward adjustment of the existing methods to handle multichannel EEG data is then introduced. In addition, based on the spatial smoothness property of EEG data, a new category of statistical methods is proposed. The new methods use a linear combination of low-degree spherical harmonic (SPHARM) basis functions to represent a spatially smoothed version of the EEG data on the scalp, which is close to a sphere in shape. In total, seven statistical methods, including both the existing and the newly proposed methods, are applied to two clinical datasets to compare their power to detect a drug effect. Contrary to the EEG community's recommendation, our results suggest that (1) the nonparametric method does not outperform its parametric counterpart; and (2) including baseline data in the analysis does not always improve the statistical power. In addition, our results recommend that (3) simple paired statistical tests should be avoided due to their poor power; and (4) the proposed spatially smoothed methods perform better than their unsmoothed versions. Copyright 2010 Elsevier B.V. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16104010','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16104010"><span>Functional magnetic resonance imaging activation detection: fuzzy cluster analysis in wavelet and multiwavelet domains.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali</p> <p>2005-09-01</p> <p>To present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with high dimension of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and their poor signal-to-noise ratio (SNR) that limits accurate classification of fMRI time series according to their activation contents. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare different methods, several points of the receiver operating characteristics (ROC) curves, using simulated data, are estimated and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to the conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to CC feature space. 
Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features. (c) 2005 Wiley-Liss, Inc.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21316517-simplification-time-dependent-generalized-self-interaction-correction-method-using-two-sets-orbitals-application-optimized-effective-potential-formalism','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21316517-simplification-time-dependent-generalized-self-interaction-correction-method-using-two-sets-orbitals-application-optimized-effective-potential-formalism"><span>Simplification of the time-dependent generalized self-interaction correction method using two sets of orbitals: Application of the optimized effective potential formalism</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Messud, J.; Dinh, P. M.; Suraud, Eric</p> <p>2009-10-15</p> <p>We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent 'generalized SIC-OEP'. A straightforward approximation, using the spatial localization of one set of orbitals, leads to the 'generalized SIC-Slater' formalism. 
We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009PhRvA..80d4503M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009PhRvA..80d4503M"><span>Simplification of the time-dependent generalized self-interaction correction method using two sets of orbitals: Application of the optimized effective potential formalism</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Messud, J.; Dinh, P. M.; Reinhard, P.-G.; Suraud, Eric</p> <p>2009-10-01</p> <p>We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent “generalized SIC-OEP.” A straightforward approximation, using the spatial localization of one set of orbitals, leads to the “generalized SIC-Slater” formalism. 
We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19880033903&hterms=Automatic+control&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3D%253FAutomatic%2Bcontrol%253F','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19880033903&hterms=Automatic+control&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3D%253FAutomatic%2Bcontrol%253F"><span>An innovative exercise method to simulate orbital EVA work - Applications to PLSS automatic controls</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lantz, Renee; Vykukal, H.; Webbon, Bruce</p> <p>1987-01-01</p> <p>An exercise method has been proposed which may satisfy the current need for a laboratory simulation representative of muscular, cardiovascular, respiratory, and thermoregulatory responses to work during orbital extravehicular activity (EVA). The simulation incorporates arm crank ergometry with a unique body support mechanism that allows all body position stabilization forces to be reacted at the feet. 
By instituting this exercise method in laboratory experimentation, an advanced portable life support system (PLSS) thermoregulatory control system can be designed to more accurately reflect the specific work requirements of orbital EVA.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS.991a2070S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS.991a2070S"><span>Method of mechanical quadratures for solving singular integral equations of various types</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sahakyan, A. V.; Amirjanyan, H. A.</p> <p>2018-04-01</p> <p>The method of mechanical quadratures is proposed as a common approach intended for solving the integral equations defined on finite intervals and containing Cauchy-type singular integrals. This method can be used to solve singular integral equations of the first and second kind, equations with generalized kernel, weakly singular equations, and integro-differential equations. The quadrature rules for several different integrals represented through the same coefficients are presented. 
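One standard member of this quadrature family, the Gauss-Chebyshev (Erdogan-Gupta) rule for the dominant Cauchy singular integral, can be written out directly. This is a sketch of that classical rule, not the authors' generalized scheme; the function name is mine.

```python
import math

def cauchy_sv_quadrature(f, n):
    """Gauss-Chebyshev rule for the dominant Cauchy singular integral
    (1/pi) * integral_{-1}^{1} f(t) / ((t - x) sqrt(1 - t^2)) dt,
    evaluated at the standard collocation points x_r = cos(r*pi/n),
    with nodes t_k = cos((2k - 1)*pi/(2n)) (roots of T_n)."""
    nodes = [math.cos((2 * k - 1) * math.pi / (2 * n))
             for k in range(1, n + 1)]
    colloc = [math.cos(r * math.pi / n) for r in range(1, n)]
    vals = [sum(f(t) / (t - x) for t in nodes) / n for x in colloc]
    return colloc, vals

# sanity check with f(t) = t: since the dominant integral of 1 is 0
# for |x| < 1, the exact value is 1 at every interior collocation point
xs, vs = cauchy_sv_quadrature(lambda t: t, 8)
print(all(abs(v - 1.0) < 1e-9 for v in vs))
```

Because the same Chebyshev nodes serve integrals with different kernels, discretizing a singular integral equation at these collocation points reduces it to a linear algebraic system, as the abstract states.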
This allows one to reduce the integral equations containing integrals of different types to a system of linear algebraic equations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004ITEIS.124.1626W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004ITEIS.124.1626W"><span>Active Solution Space and Search on Job-shop Scheduling Problem</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Watanabe, Masato; Ida, Kenichi; Gen, Mitsuo</p> <p></p> <p>In this paper we propose a new search method based on a Genetic Algorithm for the job-shop scheduling problem (JSP). In the JSP coding scheme that represents job numbers as priorities for arranging jobs on a Gantt chart (called the ordinal representation with a priority), an active schedule is created by using a left shift. We first define an active solution: a solution that can create an active schedule without using a left shift. The set of such solutions defines the active solution space. Next, we propose an algorithm named Genetic Algorithm with active solution space search (GA-asol), which can create an active solution while a solution is being evaluated, in order to search the active solution space effectively. We applied it to some benchmark problems and compared it with other methods. 
The experimental results show good performance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26699060','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26699060"><span>Generalized weighted ratio method for accurate turbidity measurement over a wide range.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Hongbo; Yang, Ping; Song, Hong; Guo, Yilu; Zhan, Shuyue; Huang, Hui; Wang, Hangzhou; Tao, Bangyi; Mu, Quanquan; Xu, Jing; Li, Dejun; Chen, Ying</p> <p>2015-12-14</p> <p>Turbidity measurement is important for water quality assessment, food safety, medicine, ocean monitoring, etc. In this paper, a method that accurately estimates the turbidity over a wide range is proposed, where the turbidity of the sample is represented as a weighted ratio of the scattered light intensities at a series of angles. An improvement in the accuracy is achieved by expanding the structure of the ratio function, thus adding more flexibility to the turbidity-intensity fitting. Experiments have been carried out with an 850 nm laser and a power meter fixed on a turntable to measure the light intensity at different angles. 
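The weighted-ratio idea above can be sketched as a linear least-squares calibration. The synthetic intensities and weights below are my own illustrative choices, not the paper's measured data; fixing the first denominator weight makes the model linear in the remaining parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic scattered-light intensities at 4 angles for 50 samples
I = rng.uniform(0.5, 2.0, size=(50, 4))

# ground-truth weighted-ratio model (weights chosen arbitrarily here)
a_true = np.array([120.0, 80.0, 40.0, 10.0])
b_true = np.array([1.0, 0.3, 0.2, 0.1])     # b[0] fixed to 1
T = (I @ a_true) / (I @ b_true)             # "measured" turbidity

# fit: T * (I @ b) = I @ a with b[0] = 1 gives the linear system
#   I @ a - T * (I[:, 1:] @ b[1:]) = T * I[:, 0]
A = np.hstack([I, -T[:, None] * I[:, 1:]])
coef, *_ = np.linalg.lstsq(A, T * I[:, 0], rcond=None)
a_fit, b_fit = coef[:4], np.concatenate([[1.0], coef[4:]])

T_hat = (I @ a_fit) / (I @ b_fit)
rel_err = np.abs(T_hat - T) / T
print(float(rel_err.max()) < 1e-6)
```

With real data the fit would not be exact, and the paper additionally expands the structure of the ratio function; the sketch only shows how a ratio-of-weighted-sums model can be calibrated by ordinary least squares.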
The results show that the relative estimation error of the proposed method is 0.58% on average for a four-angle intensity combination over all test samples with turbidity ranging from 160 NTU to 4000 NTU.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JChPh.143m4103X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JChPh.143m4103X"><span>Multiresolution persistent homology for excessively large biomolecular datasets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei</p> <p>2015-10-01</p> <p>Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. 
The proposed method has also been successfully applied to protein domain classification, which, to our knowledge, is the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications to arbitrary data sets, such as social networks, biological networks, and graphs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22489667-multiresolution-persistent-homology-excessively-large-biomolecular-datasets','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22489667-multiresolution-persistent-homology-excessively-large-biomolecular-datasets"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei, E-mail: wei@math.msu.edu</p> <p></p> <p>Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. 
In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which, to our knowledge, is the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications to arbitrary data sets, such as social networks, biological networks, and graphs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25494494','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25494494"><span>Sparse representation of electrodermal activity with knowledge-driven dictionaries.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chaspari, Theodora; Tsiartas, Andreas; Stein, Leah I; Cermak, Sharon A; Narayanan, Shrikanth S</p> <p>2015-03-01</p> <p>Biometric sensors and portable devices are being increasingly embedded into our everyday life, creating the need for robust physiological models that efficiently represent, analyze, and interpret the acquired signals. We propose a knowledge-driven method to represent electrodermal activity (EDA), a psychophysiological signal linked to stress, affect, and cognitive processing. We build EDA-specific dictionaries that accurately model both the slow varying tonic part and the signal fluctuations, called skin conductance responses (SCR), and use greedy sparse representation techniques to decompose the signal into a small number of atoms from the dictionary. 
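Greedy sparse decomposition of this kind can be sketched with orthogonal matching pursuit over a hand-built toy dictionary. The atom shapes, sizes, and signal below are my own illustrative choices standing in for the paper's EDA-specific dictionaries.

```python
import numpy as np

def omp(D, y, n_atoms):
    """Orthogonal matching pursuit: greedily pick n_atoms columns of
    dictionary D (unit-norm atoms) to approximate signal y."""
    residual = y.copy()
    chosen, coef = [], None
    for _ in range(n_atoms):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in chosen:
            chosen.append(idx)
        # re-fit all chosen atoms jointly, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, chosen], y, rcond=None)
        residual = y - D[:, chosen] @ coef
    return chosen, coef, residual

# toy dictionary: one slow "tonic" ramp plus shifted Gaussian bumps
# standing in for skin-conductance-response (SCR) shapes
n = 200
t = np.arange(n)
atoms = [t / n]                       # atom 0: tonic drift
for c in range(10, n, 10):            # atoms 1..19: bumps at 10, 20, ...
    atoms.append(np.exp(-0.5 * ((t - c) / 4.0) ** 2))
D = np.array(atoms).T
D = D / np.linalg.norm(D, axis=0)

# signal = tonic part + two SCR-like bumps (atoms 5 and 12)
y = 2.0 * D[:, 0] + 1.5 * D[:, 5] + 0.8 * D[:, 12]
chosen, coef, residual = omp(D, y, 3)
print(sorted(chosen))  # expect the three generating atoms
```

Because the toy signal is an exact combination of three atoms, three OMP iterations recover it with near-zero residual; real EDA signals additionally contain noise, and the paper evaluates reconstruction against compression rate.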
Quantitative evaluation of our method considers signal reconstruction, compression rate, and information retrieval measures that capture the ability of the model to incorporate the main signal characteristics, such as SCR occurrences. Compared to previous studies fitting a predetermined structure to the signal, results indicate that our approach provides benefits across all of the aforementioned criteria. This paper demonstrates the ability of appropriate dictionaries, along with sparse decomposition methods, to reliably represent EDA signals and provides a foundation for automatic measurement of SCR characteristics and the extraction of meaningful EDA features.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28894463','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28894463"><span>Joint Extraction of Entities and Relations Using Reinforcement Learning and Deep Learning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Feng, Yuntian; Zhang, Hongjun; Hao, Wenning; Chen, Gang</p> <p>2017-01-01</p> <p>We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured texts. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from unstructured texts, which represents the state in the decision process. By designing a per-step reward function, our proposed method can pass the information of entity extraction to relation extraction and obtain feedback in order to extract entities and relations simultaneously. Firstly, we use a bidirectional LSTM to model the context information, which realizes preliminary entity extraction. 
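The two-step decision process can be caricatured with a tiny tabular Q-learning example. This is a toy stand-in of my own construction: the paper's states come from LSTM and Tree-LSTM representations, not from an enumerated table, and the reward here is a trivial 0/1 signal.

```python
import random

random.seed(0)

# toy stand-in for a two-step decision process: step 1 picks an
# entity pair, step 2 picks a relation label; only the correct
# (pair, relation) combination earns a reward
PAIRS, RELATIONS = 3, 4
GOOD_PAIR, GOOD_REL = 1, 2

Q1 = [0.0] * PAIRS                              # Q(s0, pair)
Q2 = [[0.0] * RELATIONS for _ in range(PAIRS)]  # Q(pair, rel)
alpha, gamma, eps = 0.5, 0.9, 0.3

def choose(qs):
    """Epsilon-greedy action selection over a list of Q-values."""
    if random.random() < eps:
        return random.randrange(len(qs))
    return max(range(len(qs)), key=lambda a: qs[a])

for episode in range(5000):
    pair = choose(Q1)
    rel = choose(Q2[pair])
    reward = 1.0 if (pair, rel) == (GOOD_PAIR, GOOD_REL) else 0.0
    # update the terminal step, then bootstrap back into step 1
    Q2[pair][rel] += alpha * (reward - Q2[pair][rel])
    Q1[pair] += alpha * (gamma * max(Q2[pair]) - Q1[pair])

best_pair = max(range(PAIRS), key=lambda a: Q1[a])
best_rel = max(range(RELATIONS), key=lambda a: Q2[best_pair][a])
print(best_pair, best_rel)
```

After training, the greedy policy selects the rewarded pair and relation, mirroring how feedback from relation extraction can shape the earlier entity-selection step.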
On the basis of the extraction results, an attention-based method can represent the sentences that include the target entity pair to generate the initial state in the decision process. Then we use a Tree-LSTM to represent relation mentions to generate the transition state in the decision process. Finally, we employ the Q-Learning algorithm to obtain the control policy π in the two-step decision process. Experiments on ACE2005 demonstrate that our method attains better performance than the state-of-the-art method and gets a 2.4% increase in recall score.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5574273','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5574273"><span>Joint Extraction of Entities and Relations Using Reinforcement Learning and Deep Learning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhang, Hongjun; Chen, Gang</p> <p>2017-01-01</p> <p>We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured texts. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from unstructured texts, which represents the state in the decision process. By designing a per-step reward function, our proposed method can pass the information of entity extraction to relation extraction and obtain feedback in order to extract entities and relations simultaneously. Firstly, we use a bidirectional LSTM to model the context information, which realizes preliminary entity extraction. On the basis of the extraction results, an attention-based method can represent the sentences that include the target entity pair to generate the initial state in the decision process. 
Then we use a Tree-LSTM to represent relation mentions to generate the transition state in the decision process. Finally, we employ the Q-Learning algorithm to obtain the control policy π in the two-step decision process. Experiments on ACE2005 demonstrate that our method attains better performance than the state-of-the-art method and gets a 2.4% increase in recall score. PMID:28894463</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhDT........18H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhDT........18H"><span>Analytical Approaches for Identification and Representation of Critical Protection Systems in Transient Stability Studies</span></a></p> <p><a target="_blank" 
rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hedman, Mojdeh Khorsand</p> <p></p> <p>After a major disturbance, the power system response is highly dependent on protection schemes and system dynamics. Improving power systems situational awareness requires proper and simultaneous modeling of both protection schemes and dynamic characteristics in power systems analysis tools. Historical information and ex-post analysis of blackouts reaffirm the critical role of protective devices in cascading events, thereby confirming the necessity to represent protective functions in transient stability studies. This dissertation is aimed at studying the importance of representing protective relays in power system dynamic studies. Although modeling all of the protective relays within transient stability studies may result in a better estimation of system behavior, representing, updating, and maintaining the protection system data becomes an insurmountable task. Inappropriate or outdated representation of the relays may result in incorrect assessment of the system behavior. This dissertation presents a systematic method to determine essential relays to be modeled in transient stability studies. The desired approach should identify protective relays that are critical for various operating conditions and contingencies. The results of the transient stability studies confirm that modeling only the identified critical protective relays is sufficient to capture system behavior for various operating conditions and precludes the need to model all of the protective relays. Moreover, this dissertation proposes a method that can be implemented to determine the appropriate location of out-of-step blocking relays. During unstable power swings, a generator or group of generators may accelerate or decelerate leading to voltage depression at the electrical center along with generator tripping. 
This voltage depression may cause protective relay mis-operation and unintentional separation of the system. In order to avoid unintentional islanding, the potentially mis-operating relays should be blocked from tripping with the use of out-of-step blocking schemes. Blocking these mis-operating relays, combined with an appropriate islanding scheme, helps avoid a system-wide collapse. The proposed method is tested on data from the Western Electricity Coordinating Council. A triple line outage of the California-Oregon Intertie is studied. The results show that the proposed method is able to successfully identify proper locations for an out-of-step blocking scheme.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29345009','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29345009"><span>Choosing non-redundant representative subsets of protein sequence data sets using submodular optimization.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford</p> <p>2018-04-01</p> <p>Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or the selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. 
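The greedy algorithm underlying such submodular selection is only a few lines. The sketch below uses a facility-location objective on a made-up similarity matrix, not the authors' sequence-similarity features; for monotone submodular objectives the classic Nemhauser et al. result gives the greedy method a (1 - 1/e) approximation guarantee.

```python
import numpy as np

def greedy_facility_location(S, k):
    """Greedy maximization of the facility-location objective
    f(A) = sum_i max_{j in A} S[i, j] over subsets A of size k."""
    n = S.shape[0]
    chosen = []
    covered = np.zeros(n)   # best similarity to any chosen element
    for _ in range(k):
        gains = [np.maximum(covered, S[:, j]).sum() - covered.sum()
                 for j in range(n)]
        best = int(np.argmax(gains))
        chosen.append(best)
        covered = np.maximum(covered, S[:, best])
    return chosen

# toy similarity matrix: two tight clusters {0,1,2} and {3,4,5};
# a 2-element representative set should take one from each cluster
S = np.full((6, 6), 0.1)
np.fill_diagonal(S, 1.0)
for cluster in ((0, 1, 2), (3, 4, 5)):
    for a in cluster:
        for b in cluster:
            if a != b:
                S[a, b] = 0.9
reps = greedy_facility_location(S, 2)
print(sorted(reps))
```

On this matrix the greedy pass picks one member of each cluster, since the second same-cluster element adds almost no marginal coverage.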
We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.795a2039B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.795a2039B"><span>Genetics algorithm optimization of DWT-DCT based image Watermarking</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan</p> <p>2017-01-01</p> <p>Data hiding in image content is essential for establishing the ownership of the image. Two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space. We can also select the layer where the watermark is embedded. Next, 2D-DWT transforms the selected layer, obtaining four subbands. We select only one subband, and then block-based 2D-DCT transforms the selected subband. 
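The DWT-then-block-DCT embedding idea can be sketched roughly as follows. This is not the paper's code: the Haar wavelet, the 4x4 block, the particular AC coefficient, and delta = 8 are arbitrary stand-ins for the parameters the record leaves to the genetic algorithm.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2-D Haar transform; returns the LL, LH, HL, HH subbands.
    rows = img.reshape(img.shape[0], -1, 2)
    lo = (rows[:, :, 0] + rows[:, :, 1]) / 2
    hi = (rows[:, :, 0] - rows[:, :, 1]) / 2
    def split_cols(x):
        v = x.reshape(-1, 2, x.shape[1])
        return (v[:, 0] + v[:, 1]) / 2, (v[:, 0] - v[:, 1]) / 2
    LL, HL = split_cols(lo)
    LH, HH = split_cols(hi)
    return LL, LH, HL, HH

def dct_matrix(n):
    # Orthonormal DCT-II basis: M @ M.T == I, so the inverse is M.T.
    k, m = np.arange(n)[:, None], np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    M[0] /= np.sqrt(2)
    return M

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (8, 8))   # toy "layer" of the host image
LL, _, _, _ = haar_dwt2(img)        # embed in one selected subband
M, delta = dct_matrix(4), 8.0

def embed(block, bit):
    D = M @ block @ M.T                  # forward block DCT
    D[0, 1] = delta if bit else -delta   # +delta -> bit 1, -delta -> bit 0
    return M.T @ D @ M                   # inverse DCT back to the subband

def extract(block):
    return int((M @ block @ M.T)[0, 1] > 0)
```

Extraction simply re-applies the forward DCT and reads the sign of the chosen coefficient, mirroring the +delta/-delta convention described in the abstract.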
A binary watermark is embedded in the AC coefficients of each block after zigzag scanning and range-based pixel selection. A delta parameter replacing pixels in each range represents the embedded bit: +delta represents bit “1” and -delta represents bit “0”. The parameters optimized by a Genetic Algorithm (GA) are the selected color space, layer, DWT subband, block size, embedding range, and delta. Simulation results show that the GA is able to determine parameters achieving optimum imperceptibility and robustness, whether or not the watermarked image is attacked. Adding the DWT stage to DCT-based image watermarking, optimized by the GA, improves watermarking performance. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the proposed method achieves perfect watermark recovery with BER = 0, and the watermarked image quality, measured by PSNR, is about 5 dB higher than that of the previous method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29370728','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29370728"><span>Feature Extraction with GMDH-Type Neural Networks for EEG-Based Person Identification.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Schetinin, Vitaly; Jakaite, Livija; Nyah, Ndifreke; Novakovic, Dusica; Krzanowski, Wojtek</p> <p>2018-08-01</p> <p>The brain activity observed on EEG electrodes is influenced by volume conduction and functional connectivity of a person performing a task. 
When the task is a biometric test, the EEG signals represent the unique "brain print", which is defined by the functional connectivity that is represented by the interactions between electrodes, whilst the conduction components cause trivial correlations. Orthogonalization using autoregressive modeling minimizes the conduction components, and then the residuals are related to features correlated with the functional connectivity. However, the orthogonalization can be unreliable for high-dimensional EEG data. We have found that the dimensionality can be significantly reduced if the baselines required for estimating the residuals can be modeled by using relevant electrodes. In our approach, the required models are learnt by a Group Method of Data Handling (GMDH) algorithm which we have made capable of discovering reliable models from multidimensional EEG data. In our experiments on the EEG-MMI benchmark data, which include 109 participants, the proposed method has correctly identified all the subjects and provided a statistically significant ([Formula: see text]) improvement of the identification accuracy. The experiments have shown that the proposed GMDH method can learn new features from multi-electrode EEG data, which are capable of improving the accuracy of biometric identification.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018TDR.....9..155M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018TDR.....9..155M"><span>Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mirloo, Mahsa; Ebrahimnezhad, Hosein</p> <p>2018-03-01</p> <p>In this paper, a novel method is proposed to detect 3D object salient points robust to isometric variations and stable against scaling and noise. 
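A classical baseline for this kind of protrusion-seeking selection is farthest-point sampling on (geodesic) distances; the toy sketch below uses a precomputed distance matrix and a made-up 1-D geometry, and omits the record's protrusion-aware decision function:

```python
def farthest_points(dist, k):
    # Seed with the point of maximal average distance (an "extremity"),
    # then repeatedly take the point farthest from everything chosen so far.
    n = len(dist)
    chosen = [max(range(n), key=lambda i: sum(dist[i]) / n)]
    while len(chosen) < k:
        rest = (i for i in range(n) if i not in chosen)
        chosen.append(max(rest, key=lambda i: min(dist[i][j] for j in chosen)))
    return chosen

# Four points on a line at positions 0, 1, 9, 10: two clusters standing in
# for two "protrusion parts" of a shape.
pos = [0.0, 1.0, 9.0, 10.0]
dist = [[abs(a - b) for b in pos] for a in pos]
picks = farthest_points(dist, 2)
```

The two picks land in different clusters, which is the behavior the record's iterative decision function enforces for protrusion parts.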
Salient points can be used as representative points of object protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, in each iteration, a new point is added to this set according to the previously selected salient points. Each time a salient point is added, the decision function is updated; this creates a condition for selecting the next point such that successive points are not extracted from the same protrusion part, guaranteeing that a representative point is drawn from every protrusion part. This method is stable against model variations under isometric transformations, scaling, and noise of different strengths, because it uses a feature robust to isometric variations and considers the relation between the salient points. In addition, the number of points used in the averaging process is decreased in this method, which leads to lower computational complexity than other salient point detection algorithms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JEI....26e1402C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JEI....26e1402C"><span>Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien</p> <p>2017-09-01</p> <p>Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. 
A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. 
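The shape of such a pipeline (foreground detection, then ensemble features, then a classifier) can be caricatured in a few lines; the frame-differencing detector, the single fg_ratio feature, and the threshold rule below are hypothetical stand-ins for the paper's interchangeable components:

```python
# Toy pipeline sketch: foreground mask -> feature vector -> decision.
# Every threshold and feature here is invented for illustration.

def foreground_mask(prev, curr, thresh=20):
    # Crude foreground detector: pixels that changed a lot between frames.
    return [[abs(c - p) > thresh for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def ensemble_features(mask):
    flat = [v for row in mask for v in row]
    return {"fg_ratio": sum(flat) / len(flat)}  # fraction of moving pixels

def is_abnormal(feats, ratio_limit=0.5):
    # Stand-in for the learned classifier the record applies to the features.
    return feats["fg_ratio"] > ratio_limit

prev = [[0, 0], [0, 0]]
curr = [[255, 255], [255, 0]]        # 2x2 toy frames; most pixels moved
feats = ensemble_features(foreground_mask(prev, curr))
```

The point of the framework is that foreground_mask can be swapped for any detector suited to the monitored environment while the downstream feature and classification stages stay unchanged.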
The experimental results demonstrate that the proposed method's performance is comparable to that of state-of-the-art methods and that it satisfies the requirements of real-time applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AcSpA.154..243K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AcSpA.154..243K"><span>A flow injection chemiluminescence method for determination of nalidixic acid based on KMnO4-morin sensitized with CdS quantum dots</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Khataee, Alireza; Lotfi, Roya; Hasanzadeh, Aliyeh; Iranifam, Mortaza; Joo, Sang Woo</p> <p>2016-02-01</p> <p>A simple and sensitive flow injection chemiluminescence (CL) method was developed for determination of nalidixic acid by application of CdS quantum dots (QDs) in the KMnO4-morin CL system in an acidic medium. Optical and structural features of L-cysteine-capped CdS quantum dots, which were synthesized via a hydrothermal approach, were investigated using X-ray diffraction (XRD), scanning electron microscopy (SEM), photoluminescence (PL), and ultraviolet-visible (UV-Vis) spectroscopy. Moreover, the potential mechanism of the proposed CL method was described using the results of the kinetic curves of CL systems and the spectra of CL, PL and UV-Vis analyses. The CL intensity of the KMnO4-morin-CdS QDs system was considerably increased in the presence of nalidixic acid. Under the optimum conditions, the enhanced CL intensity was linearly proportional to the concentration of nalidixic acid in the range of 0.0013 to 21.0 mg L⁻¹, with a detection limit (3σ) of 0.003 mg L⁻¹. Also, the proposed CL method was utilized for determination of nalidixic acid in environmental water samples and a commercial pharmaceutical formulation to confirm its applicability. 
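The quoted linear range and 3σ detection limit follow the standard calibration recipe: fit intensity against concentration, then LOD = 3 * s_blank / slope. The numbers below are invented for illustration and are not the paper's data:

```python
# Minimal least-squares calibration and 3-sigma detection limit sketch.

def linfit(x, y):
    # Ordinary least squares for y = slope * x + intercept.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

conc = [0.5, 1.0, 2.0, 4.0]        # mg/L, hypothetical standards
signal = [5.0, 10.0, 20.0, 40.0]   # CL intensity, perfectly linear here
slope, intercept = linfit(conc, signal)

sd_blank = 0.01                    # hypothetical blank standard deviation
lod = 3 * sd_blank / slope         # 3-sigma detection limit, mg/L
```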
Furthermore, a corona discharge ionization ion mobility spectrometry (CD-IMS) method was utilized for determination of nalidixic acid, and the results of real-sample analysis by the two methods were compared. Comparison of the analytical features of these methods showed that the proposed CL method is preferable to the CD-IMS method for determination of nalidixic acid due to its higher sensitivity and precision.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26534888','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26534888"><span>A flow injection chemiluminescence method for determination of nalidixic acid based on KMnO₄-morin sensitized with CdS quantum dots.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Khataee, Alireza; Lotfi, Roya; Hasanzadeh, Aliyeh; Iranifam, Mortaza; Joo, Sang Woo</p> <p>2016-02-05</p> <p>A simple and sensitive flow injection chemiluminescence (CL) method was developed for determination of nalidixic acid by application of CdS quantum dots (QDs) in the KMnO4-morin CL system in an acidic medium. Optical and structural features of L-cysteine-capped CdS quantum dots, which were synthesized via a hydrothermal approach, were investigated using X-ray diffraction (XRD), scanning electron microscopy (SEM), photoluminescence (PL), and ultraviolet-visible (UV-Vis) spectroscopy. Moreover, the potential mechanism of the proposed CL method was described using the results of the kinetic curves of CL systems and the spectra of CL, PL and UV-Vis analyses. The CL intensity of the KMnO4-morin-CdS QDs system was considerably increased in the presence of nalidixic acid. Under the optimum conditions, the enhanced CL intensity was linearly proportional to the concentration of nalidixic acid in the range of 0.0013 to 21.0 mg L(-1), with a detection limit (3σ) of 0.003 mg L(-1). 
Also, the proposed CL method was utilized for determination of nalidixic acid in environmental water samples and a commercial pharmaceutical formulation to confirm its applicability. Furthermore, a corona discharge ionization ion mobility spectrometry (CD-IMS) method was utilized for determination of nalidixic acid, and the results of real-sample analysis by the two methods were compared. Comparison of the analytical features of these methods showed that the proposed CL method is preferable to the CD-IMS method for determination of nalidixic acid due to its higher sensitivity and precision. Copyright © 2015 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28371782','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28371782"><span>Recovering hidden diagonal structures via non-negative matrix factorization with multiple constraints.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Xi; Han, Guoqiang; Cai, Hongmin; Song, Yan</p> <p>2017-03-31</p> <p>Revealing data with intrinsically diagonal block structures is particularly useful for analyzing groups of highly correlated variables. Earlier research based on non-negative matrix factorization (NMF) has been shown to be effective in representing such data by decomposing the observed data into two factors, where one factor is considered to be the feature and the other the expansion loading from a linear algebra perspective. If the data are sampled from multiple independent subspaces, the loading factor would possess a diagonal structure under an ideal matrix decomposition. However, the standard NMF method and its variants have not been reported to exploit this type of data via direct estimation. 
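For reference, "standard NMF" here means V ≈ WH with nonnegative factors, classically fitted by the Lee-Seung multiplicative updates. A minimal sketch on a toy block-diagonal matrix (this is the unconstrained baseline, not the constrained model the record proposes):

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    # Lee-Seung multiplicative updates for min ||V - W H||_F with W, H >= 0.
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.1, 1.0, (V.shape[0], r))
    H = rng.uniform(0.1, 1.0, (r, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update loadings
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update features
    return W, H

# Toy data with an intrinsically diagonal block structure (two groups of
# correlated variables), exactly rank 2.
V = np.array([[2., 2., 0., 0.],
              [2., 2., 0., 0.],
              [0., 0., 3., 3.],
              [0., 0., 3., 3.]])
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H)
```

Multiplicative updates keep both factors nonnegative by construction, which is why the constrained variants in the record can layer sparsity and total-variation penalties on top of the same scheme.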
To address this issue, a non-negative matrix factorization model with multiple constraints is proposed in this paper. The constraints include a sparsity norm on the feature matrix and a total-variation norm on each column of the loading matrix. The proposed model is shown to be capable of efficiently recovering diagonal block structures hidden in observed samples. An efficient numerical algorithm using the alternating direction method of multipliers is proposed for optimizing the new model. Compared with several benchmark models, the proposed method performs robustly and effectively for simulated and real biological data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17106746','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17106746"><span>Anisotropic plant growth due to phototropism.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pietruszka, M; Lewicka, S</p> <p>2007-01-01</p> <p>Phototropism--the directional curvature of organs in response to lateral differences in light intensity and/or quality--represents one of the most rapid and visually obvious reactions of plants to changes in their light environment. It is a topic of fundamental interest to understand the mechanics of plants during growth. We propose a generalization of the scalar Lockhart model (1965) to three-dimensional deformation, solve the new equation in two particular cases and compare results with empirical data. We believe that carefully designed experiments linked to our model will provide (by determining the active transport coefficient) a new method for qualitative description of auxin redistribution during phototropism. 
The proposed method supplements very recent investigations concerning specific auxin-influx and -efflux carriers (LAX and PIN proteins).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920007363','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920007363"><span>Acquisition, representation and rule generation for procedural knowledge</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ortiz, Chris; Saito, Tim; Mithal, Sachin; Loftin, R. Bowen</p> <p>1991-01-01</p> <p>Current research into the design and continuing development of a system for the acquisition of procedural knowledge, its representation in useful forms, and proposed methods for automated C Language Integrated Production System (CLIPS) rule generation is discussed. The Task Analysis and Rule Generation Tool (TARGET) is intended to permit experts, individually or collectively, to visually describe and refine procedural tasks. The system is designed to represent the acquired knowledge in the form of graphical objects with the capacity for generating production rules in CLIPS. The generated rules can then be integrated into applications such as NASA's Intelligent Computer Aided Training (ICAT) architecture. 
Also described are proposed methods for use in translating the graphical and intermediate knowledge representations into CLIPS rules.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10609E..16C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10609E..16C"><span>Action recognition in depth video from RGB perspective: A knowledge transfer manner</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen</p> <p>2018-03-01</p> <p>Using different video modalities for human action recognition has become a highly promising trend in video analysis. In this paper, we propose a method for human action recognition that transfers knowledge from RGB video to depth video using domain adaptation, where features learned from RGB videos are used for action recognition in depth videos. More specifically, we take three steps to solve this problem. First, unlike a still image, a video is more complex because it carries both spatial and temporal information; to better encode this information, the dynamic image method is used to represent each RGB or depth video as a single image, after which most image feature extraction methods can be applied to video. Second, since a video can be represented as an image, a standard CNN model can be used for training and testing on videos; moreover, the CNN model can also be used for feature extraction, given its powerful representational ability. Third, since RGB and depth videos belong to two different domains, domain adaptation is applied to make the two feature domains more similar; on this basis, the features learned by the RGB video model can be used directly for depth video classification. 
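Step one above refers to the dynamic image construction. Assuming the usual approximate rank-pooling weights alpha_t = 2t - T - 1 (an assumption based on the standard dynamic-image formulation, not code from this paper), a video collapses to one image like so:

```python
import numpy as np

def dynamic_image(frames):
    # Temporally weighted sum of frames: later frames get positive weight,
    # earlier frames negative, so the result encodes motion direction.
    T = len(frames)
    alphas = [2 * t - T - 1 for t in range(1, T + 1)]
    return sum(a * f for a, f in zip(alphas, np.asarray(frames, dtype=float)))

# Toy "video": a bright pixel moving left to right across 3 one-row frames.
frames = [np.eye(1, 3, k).ravel() * 255 for k in range(3)]
di = dynamic_image(frames)
```

The resulting single image can then be fed to any 2-D CNN, which is what makes the RGB-to-depth transfer in step three possible.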
We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D); using domain adaptation from RGB to depth, our method achieves more than a 2% improvement in action recognition accuracy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26173217','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26173217"><span>Context-Dependent Upper Limb Prosthesis Control for Natural and Robust Use.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Amsuess, Sebastian; Vujaklija, Ivan; Goebel, Peter; Roche, Aidan D; Graimann, Bernhard; Aszmann, Oskar C; Farina, Dario</p> <p>2016-07-01</p> <p>Pattern recognition and regression methods applied to the surface EMG have been used for estimating the user-intended motor tasks across multiple degrees of freedom (DOF) for prosthetic control. While these methods are effective in several conditions, they are still characterized by some shortcomings. In this study we propose a methodology that combines these two approaches to mutually alleviate their limitations. This resulted in a control method capable of context-dependent movement estimation that switched automatically between sequential (one DOF at a time) or simultaneous (multiple DOF) prosthesis control, based on an online estimation of signal dimensionality. The proposed method was evaluated in scenarios close to real-life situations, with the control of a physical prosthesis in applied tasks of varying difficulties. Test prostheses were individually manufactured for both able-bodied and transradial amputee subjects. With these prostheses, two amputees performed the Southampton Hand Assessment Procedure test with scores of 58 and 71 points. 
The five able-bodied individuals performed standardized tests, such as the box-and-block and clothespin tests, reducing completion times by up to 30% with respect to a state-of-the-art purely sequential control algorithm. Apart from facilitating fast simultaneous movements, the proposed control scheme was also more intuitive to use, since human movements are dominated by simultaneous activations across joints. The proposed method thus represents a significant step towards intelligent, intuitive and natural control of upper limb prostheses.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26316289','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26316289"><span>Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming</p> <p>2016-09-01</p> <p>People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expression of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance was constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high cost of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. 
Specifically, we propose building a deformable partial face model and fitting the model to a 2.5-D face for localizing facial landmarks automatically. For FER, a novel action unit (AU) space-based method is proposed. Facial features are extracted using the landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods have achieved satisfactory results. Possible real-world applications using our algorithms have also been discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28688489','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28688489"><span>A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun</p> <p>2017-07-01</p> <p>Feature extraction of EEG signals plays a significant role in brain-computer interface (BCI) applications, as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals to improve classification performance and to reduce the time complexity. This study develops a robust feature extraction method combining principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from the mental states based on EEG signals in BCI applications. 
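The two primitives named above can be sketched generically; how the record actually combines PCA with cross-covariance is specific to the paper, so the snippet below only shows the building blocks on random stand-in data:

```python
import numpy as np

def pca(X, k):
    # Project rows of X onto the top-k principal directions (via SVD of
    # the mean-centered data); columns of the result are uncorrelated.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def cross_cov(x, y):
    # Full cross-covariance sequence of two 1-D signals.
    x, y = x - x.mean(), y - y.mean()
    return np.correlate(x, y, mode="full") / len(x)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))   # 20 trials x 5 channels, made-up EEG stand-in
Z = pca(X, 2)                  # reduced feature matrix
cc = cross_cov(np.array([0., 0., 1., 0.]), np.array([0., 1., 0., 0.]))
```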
We apply the correlation-based variable selection method with best-first search on the extracted features to identify the best feature set for characterizing the distribution of mental-state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques are employed on the obtained features: multilayer perceptron neural networks (MLP), least squares support vector machine (LS-SVM), and logistic regression (LR). The proposed methods are evaluated on two publicly available datasets. Furthermore, we evaluate the performance of the proposed methods by comparing them with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) for the proposed feature set. Among these classifiers, the MLP and LS-SVM methods yield the best performance for the obtained features. The average sensitivity, specificity and classification accuracy for these two classifiers are the same: 99.32%, 100%, and 99.66%, respectively, for BCI competition dataset IVa, and 100%, 100%, and 100% for BCI competition dataset IVb. The results also indicate that the proposed methods outperform the most recently reported methods by at least a 0.25% average accuracy improvement on dataset IVa. The execution time results show that the proposed method has lower time complexity after feature selection. The proposed feature extraction method is very effective for extracting representative information from mental-state EEG signals in BCI applications and for reducing the computational complexity of classifiers by reducing the number of extracted features. Copyright © 2017 Elsevier B.V. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4821494','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4821494"><span>A Map-Based Service Supporting Different Types of Geographic Knowledge for the Public</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhou, Mengjie; Wang, Rui; Tian, Jing; Ye, Ning; Mai, Shumin</p> <p>2016-01-01</p> <p>The internet enables the rapid and easy creation, storage, and transfer of knowledge; however, services that transfer geographic knowledge and facilitate the public understanding of geographic knowledge are still underdeveloped to date. Existing online maps (or atlases) can support limited types of geographic knowledge. In this study, we propose a framework for map-based services to represent and transfer different types of geographic knowledge to the public. A map-based service provides tools to ensure the effective transfer of geographic knowledge. We discuss the types of geographic knowledge that should be represented and transferred to the public, and we propose guidelines and a method to represent various types of knowledge through a map-based service. To facilitate the effective transfer of geographic knowledge, tools such as auxiliary background knowledge and auxiliary map-reading tools are provided through interactions with maps. An experiment conducted to illustrate our idea and to evaluate the usefulness of the map-based service is described; the results demonstrate that the map-based service is useful for transferring different types of geographic knowledge. 
PMID:27045314</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27045314','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27045314"><span>A Map-Based Service Supporting Different Types of Geographic Knowledge for the Public.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhou, Mengjie; Wang, Rui; Tian, Jing; Ye, Ning; Mai, Shumin</p> <p>2016-01-01</p> <p>The internet enables the rapid and easy creation, storage, and transfer of knowledge; however, services that transfer geographic knowledge and facilitate the public understanding of geographic knowledge are still underdeveloped to date. Existing online maps (or atlases) can support limited types of geographic knowledge. In this study, we propose a framework for map-based services to represent and transfer different types of geographic knowledge to the public. A map-based service provides tools to ensure the effective transfer of geographic knowledge. We discuss the types of geographic knowledge that should be represented and transferred to the public, and we propose guidelines and a method to represent various types of knowledge through a map-based service. To facilitate the effective transfer of geographic knowledge, tools such as auxiliary background knowledge and auxiliary map-reading tools are provided through interactions with maps. 
An experiment conducted to illustrate our idea and to evaluate the usefulness of the map-based service is described; the results demonstrate that the map-based service is useful for transferring different types of geographic knowledge.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=assessment+AND+skills&pg=7&id=EJ856673','ERIC'); return false;" href="https://eric.ed.gov/?q=assessment+AND+skills&pg=7&id=EJ856673"><span>Exploring Academic Achievement in Males Trained in Self-Assessment Skills</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>McDonald, Betty</p> <p>2009-01-01</p> <p>This paper examines academic achievement of males following formal training in self-assessment. It adds to current literature by proposing a tried-and-tested method of improving academic achievement in males at a time when they appear to be marginalised. 
The sample comprised 515 participants (233 males), representing 25.2% of that high school…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=medicine+AND+child&id=EJ816503','ERIC'); return false;" href="https://eric.ed.gov/?q=medicine+AND+child&id=EJ816503"><span>Training Pediatric Residents and Pediatricians about Adolescent Mental Health Problems: A Proof-of-Concept Pilot for a Proposed National Curriculum</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Kutner, Lawrence; Olson, Cheryl K.; Schlozman, Steven; Goldstein, Mark; Warner, Dorothy; Beresin, Eugene V.</p> <p>2008-01-01</p> <p>Objective: This article presents a DVD-based educational program intended to help pediatric residents and practicing pediatricians recognize and respond to adolescent depression in busy primary care settings. Methods: Representatives from pediatrics and adolescent medicine, child and adolescent psychiatry and psychology, and experts in the…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/31491','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/31491"><span>Determination of ethylenic residues in wood and TMP of spruce by FT-Raman spectroscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Umesh P. Agarwal; Sally A. Ralph</p> <p>2008-01-01</p> <p>A method based on FT-Raman spectroscopy is proposed for determining in situ concentrations of ethylenic residues in softwood lignin. 
Raman contributions at 1133 and 1654 cm⁻¹, representing coniferaldehyde and coniferyl alcohol structures, respectively, were used in quantifying these units in spruce wood with subsequent conversion to concentrations in lignin. For...</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25039125','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25039125"><span>[Nonparametric method of estimating survival functions containing right-censored and interval-censored data].</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi</p> <p>2014-04-01</p> 
<p>Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling within the interpolated values. In order to solve this problem, in this paper we propose a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the self-consistent (SC) algorithm. Compared with average interpolation and nearest-neighbor interpolation, the proposed method replaces the right-censored data with interval-censored data, which greatly improves the probability of the real data falling within the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness across different proportions of censored data. This paper provides a good method for comparing the performance of clinical treatments by estimating the survival data of patients. 
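The interval-based view of censored observations described above can be sketched in simplified form. The code below is a hypothetical illustration, not the authors' algorithm: the function name, the uniform-weight treatment of each censored interval, and the `horizon` cutoff are all assumptions. Each right-censored time c is treated as the interval (c, horizon] rather than as an exact value, and an empirical survival estimate is built from exact and interval contributions:

```python
import numpy as np

def empirical_survival(times, events, horizon):
    """Estimate S(t) on the grid of observed times, treating each
    right-censored time c as the interval (c, horizon]."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    grid = np.sort(np.unique(times))
    n = len(times)
    surv = []
    for t in grid:
        # exact observations contribute the indicator 1{T > t}
        exact = np.sum((times > t) & events)
        # a censored time c contributes the fraction of its
        # interval (c, horizon] that lies above t
        cens = times[~events]
        frac = np.clip((horizon - np.maximum(cens, t)) / (horizon - cens),
                       0.0, 1.0)
        surv.append((exact + frac.sum()) / n)
    return grid, np.array(surv)
```

With one censored observation among four, the censored subject is counted fractionally once t passes its censoring time, instead of being dropped or treated as an exact event.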
This provides some help for medical survival data analysis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018IJTJE..35...81C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018IJTJE..35...81C"><span>A Novel Quasi-3D Method for Cascade Flow Considering Axial Velocity Density Ratio</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Zhiqiang; Zhou, Ming; Xu, Quanyong; Huang, Xudong</p> <p>2018-03-01</p> <p>A novel quasi-3D Computational Fluid Dynamics (CFD) method of mid-span flow simulation for compressor cascades is proposed. The two-dimensional (2D) Reynolds-Averaged Navier-Stokes (RANS) method is shown to face challenges in predicting mid-span flow with a unity Axial Velocity Density Ratio (AVDR). The three-dimensional (3D) RANS solution also shows distinct discrepancies if the AVDR is not predicted correctly. In this paper, the discrepancies between 2D and 3D CFD results are analyzed and a novel quasi-3D CFD method is proposed. The new quasi-3D model is derived by reducing the 3D RANS Finite Volume Method (FVM) discretization over a one-spanwise-layer structured mesh cell. The sidewall effect is considered in two parts. The first part is explicit interface fluxes of mass, momentum and energy as well as turbulence. The second part is a cell boundary scaling factor representing sidewall boundary layer contraction. The performance of the novel quasi-3D method is validated on mid-span pressure distribution, pressure loss and shock prediction for two typical cascades. The results show good agreement with the experimental data for cascade SJ301-20 and cascade AC6-10 at all test conditions. 
The proposed quasi-3D method shows superior accuracy over the traditional 2D and 3D RANS methods in performance prediction of compressor cascades.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MSSP...93...51P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MSSP...93...51P"><span>Semi-supervised vibration-based classification and condition monitoring of compressors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Potočnik, Primož; Govekar, Edvard</p> <p>2017-09-01</p> <p>Semi-supervised vibration-based classification and condition monitoring of the reciprocating compressors installed in refrigeration appliances is proposed in this paper. The method addresses the problem of industrial condition monitoring where prior class definitions are often not available or are difficult to obtain from local experts. The proposed method combines feature extraction, principal component analysis, and statistical analysis for the extraction of initial class representatives, and compares the capability of various classification methods, including discriminant analysis (DA), neural networks (NN), support vector machines (SVM), and extreme learning machines (ELM). The use of the method is demonstrated on a case study based on industrially acquired vibration measurements of reciprocating compressors during the production of refrigeration appliances. The paper presents a comparative qualitative analysis of the applied classifiers, confirming the good performance of several nonlinear classifiers. If the model parameters are properly selected, very good classification performance can be obtained from NNs trained with Bayesian regularization, and from SVM and ELM classifiers. 
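A pipeline of this general shape (feature extraction, principal component analysis, then a simple classifier) can be sketched as follows. This is a minimal illustration under assumed spectral features, with a nearest-centroid classifier standing in for the DA/NN/SVM/ELM classifiers compared in the paper:

```python
import numpy as np

def extract_features(signals):
    # simple spectral features per vibration signal: RMS plus the
    # energy in the low and high halves of the FFT magnitude spectrum
    feats = []
    for x in signals:
        mag = np.abs(np.fft.rfft(x))
        half = len(mag) // 2
        feats.append([np.sqrt(np.mean(x**2)),
                      mag[:half].sum(), mag[half:].sum()])
    return np.array(feats)

def pca(X, k):
    # project centered features onto the top-k principal directions
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T, X.mean(axis=0), vt[:k]

def nearest_centroid(train_z, labels, test_z):
    # classify each test point by its nearest class centroid
    classes = sorted(set(labels))
    cent = {c: train_z[np.array(labels) == c].mean(axis=0) for c in classes}
    return [min(classes, key=lambda c: np.linalg.norm(z - cent[c]))
            for z in test_z]
```

On synthetic low- versus high-frequency vibration signals, the two classes separate cleanly in the reduced feature space.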
The method can be effectively applied to industrial condition monitoring of compressors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013SPIE.9021E..0LY','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013SPIE.9021E..0LY"><span>A contour-based shape descriptor for biomedical image classification and retrieval</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.</p> <p>2013-12-01</p> <p>Contours, object blobs, and specific feature points are utilized to represent object shapes and extract shape descriptors that can then be used for object detection or image classification. In this research we develop a shape descriptor for biomedical image type (or modality) classification. We adapt a feature extraction method used in optical character recognition (OCR) for character shape representation, and apply various image preprocessing methods to adapt the method successfully to our application. The proposed shape descriptor is applied to radiology images (e.g., MRI, CT, ultrasound, X-ray, etc.) to assess its usefulness for modality classification. In our experiment we compare our method with other visual descriptors such as CEDD, CLD, Tamura, and PHOG that extract color, texture, or shape information from images. 
The proposed method achieved the highest classification accuracy, 74.1%, among all the individual descriptors in the test, and when combined with the CSD (color structure descriptor) showed better performance (78.9%) than the shape descriptor alone.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27775516','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27775516"><span>Person Re-Identification via Distance Metric Learning With Latent Variables.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sun, Chong; Wang, Dong; Lu, Huchuan</p> <p>2017-01-01</p> <p>In this paper, we propose an effective person re-identification method with latent variables, which represents a pedestrian as a mixture of a holistic model and a number of flexible models. Three types of latent variables are introduced to model uncertain factors in the re-identification problem: vertical misalignments, horizontal misalignments and leg posture variations. The distance between two pedestrians can be determined by minimizing a given distance function with respect to the latent variables, and can then be used to conduct the re-identification task. In addition, we develop a latent metric learning method for learning an effective metric matrix, which can be solved in an iterative manner: once the latent information is specified, the metric matrix can be obtained based on standard metric learning methods; with the computed metric matrix, the latent variables can be determined by searching the state space exhaustively. Finally, extensive experiments are conducted on seven databases to evaluate the proposed method. 
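The latent-variable idea (minimizing the metric distance over a small set of candidate misalignments) can be sketched as below. The stripe-feature representation, the shift set, and the fixed metric matrix are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def latent_distance(feat_a, feat_b, metric, shifts=(-1, 0, 1)):
    """Distance between two pedestrian descriptors (stripes x dims),
    minimized over a small set of latent vertical shifts."""
    best = np.inf
    for s in shifts:
        b = np.roll(feat_b, s, axis=0)   # shift stripe features vertically
        d = feat_a - b
        # Mahalanobis-style distance summed over stripes
        dist = np.sum([row @ metric @ row for row in d])
        best = min(best, dist)
    return best
```

When the second descriptor is the first one misaligned by a single stripe, the minimization over shifts recovers a zero distance, while an unrelated descriptor stays far away.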
The experimental results demonstrate that our method achieves better performance than other competing algorithms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/14709008','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/14709008"><span>A new test method for the evaluation of total antioxidant activity of herbal products.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zaporozhets, Olga A; Krushynska, Olena A; Lipkovska, Natalia A; Barvinchenko, Valentina N</p> <p>2004-01-14</p> <p>A new test method for measuring the antioxidant power of herbal products, based on solid-phase spectrophotometry using a tetrabenzo-[b,f,j,n][1,5,9,13]-tetraazacyclohexadecine-Cu(II) complex immobilized on silica gel, is proposed. The absorbance of the modified sorbent (lambda(max) = 712 nm) increases proportionally to the total antioxidant activity of the sample solution. The method represents an attractive alternative to the most widely used radical scavenging capacity assays, because these generally require complex, long-lasting stages to be carried out. The proposed test method is simple (a "drop and measure" procedure is applied), rapid (10 min/sample), requires only the monitoring of time and absorbance, and provides good statistical parameters (s(r) ≤ 0.2). The method was validated in the analysis of the most popular herbal beverages (black and green teas) and drugs (Echinacea products). 
The results obtained were compared with the content of total flavonoids and tannins of the teas and of total caffeic acid derivatives of Echinacea, determined spectrophotometrically.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..220a2041C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..220a2041C"><span>Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Liangji; Guo, Guangsong; Li, Huiying</p> <p>2017-07-01</p> <p>NURBS (Non-Uniform Rational B-Spline) is widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in the CNC (Computer Numerical Control) system we are developing. First, we use two NURBS curves to represent the tool-tip and tool-axis paths respectively. From the feedrate and a Taylor series expansion, servo-controlling signals for the 5 axes are obtained for each interpolating cycle. Then, the procedure for generating NC (Numerical Control) code with the presented method is introduced, together with the method of integrating the interpolator into our CNC system. The servo-controlling structure of the CNC system is also introduced. 
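The feedrate-driven Taylor update at the heart of such interpolators can be sketched with a first-order step, u_{k+1} = u_k + F·T/|C'(u_k)|, where F is the feedrate and T the interpolation cycle. The sketch below substitutes a cubic Bézier segment for a full NURBS evaluator to stay self-contained; the function names and the first-order truncation are assumptions:

```python
import numpy as np

def bezier(ctrl, u):
    # cubic Bezier point (a stand-in for a NURBS curve in this sketch)
    c = np.asarray(ctrl, dtype=float)
    return ((1-u)**3*c[0] + 3*(1-u)**2*u*c[1]
            + 3*(1-u)*u**2*c[2] + u**3*c[3])

def bezier_d(ctrl, u):
    # first derivative of the cubic Bezier curve
    c = np.asarray(ctrl, dtype=float)
    return (3*(1-u)**2*(c[1]-c[0]) + 6*(1-u)*u*(c[2]-c[1])
            + 3*u**2*(c[3]-c[2]))

def interpolate(ctrl, feedrate, cycle):
    """First-order Taylor parameter update: u += F*T/|C'(u)|."""
    u, pts = 0.0, [bezier(ctrl, 0.0)]
    while u < 1.0:
        u = min(1.0, u + feedrate*cycle/np.linalg.norm(bezier_d(ctrl, u)))
        pts.append(bezier(ctrl, u))
    return np.array(pts)
```

For a straight segment with uniformly spaced control points the update produces points spaced exactly F·T apart, which is the constant-feedrate behavior the update is designed to deliver.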
Through the illustration, it is shown that the proposed method can enhance the machining accuracy and that the spline interpolator is feasible for a 5-axis CNC system.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ApSS..421..453G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ApSS..421..453G"><span>Spectroscopic ellipsometry data inversion using constrained splines and application to characterization of ZnO with various morphologies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gilliot, Mickaël; Hadjadj, Aomar; Stchakovsky, Michel</p> <p>2017-11-01</p> <p>An original method of ellipsometric data inversion is proposed based on the use of constrained splines. The imaginary part of the dielectric function is represented by a series of splines, constructed with particular constraints on the slopes at the node boundaries to avoid the well-known oscillations of natural splines. The nodes are used as fit parameters. The real part is calculated using Kramers-Kronig relations. The inversion can be performed in successive steps with increasing resolution. This method is used to characterize thin zinc oxide layers obtained by a sol-gel and spin-coating process, with a particular recipe yielding very thin layers exhibiting nano-porosity. Such layers have particular optical properties correlated with their thickness and their morphological and structural properties. 
The use of the constrained spline method is particularly efficient for such materials which may not be easily represented by standard dielectric function models.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/14996342','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/14996342"><span>Validity threats: overcoming interference with proposed interpretations of assessment data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Downing, Steven M; Haladyna, Thomas M</p> <p>2004-03-01</p> <p>Factors that interfere with the ability to interpret assessment scores or ratings in the proposed manner threaten validity. To be interpreted in a meaningful manner, all assessments in medical education require sound, scientific evidence of validity. The purpose of this essay is to discuss 2 major threats to validity: construct under-representation (CU) and construct-irrelevant variance (CIV). Examples of each type of threat for written, performance and clinical performance examinations are provided. The CU threat to validity refers to undersampling the content domain. Using too few items, cases or clinical performance observations to adequately generalise to the domain represents CU. Variables that systematically (rather than randomly) interfere with the ability to meaningfully interpret scores or ratings represent CIV. Issues such as flawed test items written at inappropriate reading levels or statistically biased questions represent CIV in written tests. For performance examinations, such as standardised patient examinations, flawed cases or cases that are too difficult for student ability contribute CIV to the assessment. For clinical performance data, systematic rater error, such as halo or central tendency error, represents CIV. 
The term face validity is rejected as representative of any type of legitimate validity evidence, although it is acknowledged that the appearance of an assessment may be an important characteristic other than validity. There are multiple threats to validity in all types of assessment in medical education. Methods to eliminate or control validity threats are suggested.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.H53H1807D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.H53H1807D"><span>Digital Elevation Model Correction for the thalweg values of Obion River system, TN</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dullo, T. T.; Bhuyian, M. N. M.; Hawkins, S. A.; Kalyanapu, A. J.</p> <p>2016-12-01</p> <p>The Obion River system is located in northwest Tennessee and discharges into the Mississippi River. To help the US Department of Agriculture (USDA) estimate water availability for agricultural consumption, a one-dimensional HEC-RAS model has been proposed. The model incorporates the major tributaries (north and south) and the main stem of the Obion River, along with a segment of the Mississippi River. A one-meter spatial resolution Light Detection and Ranging (LiDAR) derived Digital Elevation Model (DEM) was used as the primary source of topographic data. LiDAR provides fine-resolution terrain data over a given extent. However, it lacks an accurate representation of river bathymetry due to limited penetration beyond a certain water depth. This reduces the conveyance along the river channel as represented by the DEM and affects hydrodynamic modeling performance. This research focused on proposing a method to overcome this issue and testing the qualitative improvement of the proposed method over an existing technique. 
Therefore, the objective of this research is to compare the effectiveness of a HEC-RAS based bathymetry optimization method with an existing hydraulics-based DEM correction technique (Bhuyian et al., 2014) for the Obion River system in Tennessee. The accuracy of hydrodynamic simulations (employing bathymetry from the respective sources) is regarded as the indicator of performance. The aforementioned river system includes nine major reaches with a total river length of 310 km. The bathymetry of the river was represented via 315 cross sections equally spaced at about one km. This study aimed to select the best practice for treating LiDAR-based terrain data over a complex river system at a sub-watershed scale.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29663111','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29663111"><span>Acceptability and Feasibility of a Shared Decision-Making Model in Work Rehabilitation: A Mixed-Methods Study of Stakeholders' Perspectives.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Coutu, Marie-France; Légaré, France; Durand, Marie-José; Stacey, Dawn; Labrecque, Marie-Elise; Corbière, Marc; Bainbridge, Lesley</p> <p>2018-04-16</p> <p>Purpose To establish the acceptability and feasibility of implementing a shared decision-making (SDM) model in work rehabilitation. Methods We used a sequential mixed-methods design with diverse stakeholder groups (representatives of private and public employers, insurers, and unions, as well as workers having participated in a work rehabilitation program). First, a survey using a self-administered questionnaire enabled stakeholders to rate their level of agreement with the model's acceptability and feasibility and propose modifications, if necessary. 
Second, eight focus groups representing key stakeholders (n = 34) and four one-on-one interviews with workers were conducted, based on the questionnaire results. For each stakeholder group, we computed the percentage of agreement with the model's acceptability and feasibility and performed thematic analyses of the transcripts. Results Less than 50% of each stakeholder group initially agreed with the overall acceptability and feasibility of the model. Stakeholders proposed 37 modifications to the objectives, 17 to the activities, and 39 to improve the model's feasibility. Based on in-depth analysis of the transcripts, indicators were added to one objective, an interview guide was added as proposed by insurers to ensure compliance of the SDM process with insurance contract requirements, and one objective was reformulated. Conclusion Despite initially low agreement with the model's acceptability on the survey, subsequent discussions led to three minor changes and contributed to the model's ultimate acceptability and feasibility. Later steps will involve assessing the extent of implementation of the model in real rehabilitation settings to see if other modifications are necessary before assessing its impact.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MSSP..101..254G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MSSP..101..254G"><span>A dynamic load estimation method for nonlinear structures with unscented Kalman filter</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Guo, L. N.; Ding, Y.; Wang, Z.; Xu, G. S.; Wu, B.</p> <p>2018-02-01</p> <p>A force estimation method is proposed for hysteretic nonlinear structures. 
The equation of motion for the nonlinear structure is represented in state space, and the state variable is augmented with the unknown time history of the external force. The unscented Kalman filter (UKF) is improved for force identification in state space, accounting for the ill-conditioning that can arise in computing the square roots of the covariance matrix. The proposed method is first validated in a numerical simulation study of a 3-storey nonlinear hysteretic frame excited by a periodic force. Each storey is assumed to follow a nonlinear hysteretic model. The external force is identified, and measurement noise is considered in this case. Then the case of a seismically isolated building subjected to earthquake excitation and impact force is studied. The isolation layer behaves nonlinearly during the earthquake excitation. The impact force between the seismically isolated structure and the retaining wall is estimated with the proposed method. Uncertainties such as measurement noise, model error in storey stiffness and unexpected environmental disturbances are considered. A real-time substructure test of an isolated structure is conducted to verify the proposed method. In the experimental study, the linear main structure is taken as the numerical substructure, while one of the isolators with additional mass is taken as the nonlinear physical substructure. The force applied by the actuator on the physical substructure is identified and compared with the value measured by the force transducer. The method proposed in this paper is also validated by a shaking table test of a seismically isolated steel frame. The ground motion acceleration, as the unknown, is identified by the proposed method. 
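The state-augmentation idea can be illustrated with a deliberately simplified linear stand-in: a single-degree-of-freedom oscillator whose unknown external force is appended to the state as a random walk and estimated by an ordinary Kalman filter. The paper's UKF and hysteretic model are replaced here, and all parameter values are illustrative assumptions:

```python
import numpy as np

def augmented_kf(meas, dt, m=1.0, c=0.2, k=5.0, q=0.1, r=1e-6):
    """Estimate displacement, velocity and the UNKNOWN external force
    of a linear 1-DOF oscillator from displacement measurements, by
    augmenting the state z = [x, v, f] with the force f."""
    # forward-Euler discretization of m*x'' + c*x' + k*x = f
    A = np.array([[1.0, dt, 0.0],
                  [-k/m*dt, 1.0 - c/m*dt, dt/m],
                  [0.0, 0.0, 1.0]])        # f follows a random walk
    H = np.array([[1.0, 0.0, 0.0]])        # displacement is measured
    Q = np.diag([1e-8, 1e-8, q*dt])        # force random walk drives the filter
    R = np.array([[r]])
    z, P = np.zeros(3), np.eye(3)
    est = []
    for y in meas:
        z, P = A @ z, A @ P @ A.T + Q                  # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        z = z + (K @ (np.array([y]) - H @ z)).ravel()  # update
        P = (np.eye(3) - K @ H) @ P
        est.append(z.copy())
    return np.array(est)
```

Driving the simulated oscillator with a constant but unmeasured force, the filter's third state converges to that force, which is the essence of the augmented-state approach.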
Results from both the numerical simulation and the experimental studies indicate that the UKF-based force identification method can effectively identify external excitations for nonlinear structures, yielding accurate results even in the presence of measurement noise, model error and environmental disturbances.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JHyd..547..517S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JHyd..547..517S"><span>Analytical solutions for solute transport in groundwater and riverine flow using Green's Function Method and pertinent coordinate transformation method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sanskrityayn, Abhishek; Suk, Heejun; Kumar, Naveen</p> <p>2017-04-01</p> <p>In this study, analytical solutions for one-dimensional pollutant transport originating from instantaneous and continuous point sources were developed for groundwater and riverine flow using both the Green's Function Method (GFM) and a pertinent coordinate transformation method. The dispersion coefficient and flow velocity are considered spatially and temporally dependent. The spatial dependence of the velocity is linear and non-homogeneous, and the dispersion coefficient varies as the square of the velocity, while the temporal dependence may be linear, or exponentially or asymptotically decelerating or accelerating. Our proposed analytical solutions are derived for three different situations, depending on the variations of the dispersion coefficient and velocity, which can represent real physical processes occurring in groundwater and riverine systems. The first case refers to steady solute transport in steady flow, in which the dispersion coefficient and velocity are only spatially dependent. 
The second case represents transient solute transport in steady flow, in which the dispersion coefficient is spatially and temporally dependent while the velocity is spatially dependent. Finally, the third case indicates transient solute transport in unsteady flow, in which both the dispersion coefficient and velocity are spatially and temporally dependent. The present paper demonstrates the concentration distribution behavior from a point source in realistically occurring flow domains of hydrological systems, including groundwater and riverine water, in which the dispersivity of the pollutant's mass is affected by the heterogeneity of the medium as well as by other factors such as velocity fluctuations, while the velocity is influenced by the water table slope and recharge rate. These capabilities give the proposed method an advantage over previously existing analytical solutions in its applicability to various hydrological problems. In particular, to the authors' knowledge, no other solution exists for both spatially and temporally varying dispersion coefficient and velocity. In this study, existing analytical solutions from previous widely known studies are used for comparison, as validation tools to verify the proposed analytical solutions as well as the numerical code of the Two-Dimensional Subsurface Flow, Fate and Transport of Microbes and Chemicals (2DFATMIC) code and the developed 1D finite difference (FDM) code. 
All such solutions show a perfect match with the respective proposed solutions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PhyA..436..462M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PhyA..436..462M"><span>An effective trust-based recommendation method using a novel graph clustering algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Moradi, Parham; Ahmadian, Sajad; Akhlaghian, Fardin</p> <p>2015-10-01</p> <p>Recommender systems are programs that aim to provide personalized recommendations to users for specific items (e.g. music, books) in online sharing communities or on e-commerce sites. Collaborative filtering methods are important and widely accepted types of recommender systems that generate recommendations based on the ratings of like-minded users. On the other hand, these systems confront several inherent issues, such as the data sparsity and cold start problems, caused by having too few ratings against the unknowns that need to be predicted. Incorporating trust information into collaborative filtering systems is an attractive approach to resolving these problems. In this paper, we present a model-based collaborative filtering method that applies a novel graph clustering algorithm and also considers trust statements. In the proposed method, the problem space is first represented as a graph, and then a sparsest-subgraph-finding algorithm is applied to the graph to find the initial cluster centers. Then, the proposed graph clustering algorithm is performed to obtain the appropriate user/item clusters. Finally, the identified clusters are used as a set of neighbors to recommend unseen items to the current active user. 
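The overall flow (build a user graph, pick sparse seed nodes as initial centers, cluster, then predict from cluster neighbors) can be sketched as follows. The seed-selection rule (lowest degree) and the BFS-based cluster assignment are toy assumptions standing in for the paper's sparsest-subgraph and clustering algorithms:

```python
from collections import deque

def bfs_dist(adj, src):
    # hop distances from src over an undirected adjacency dict
    dist, dq = {src: 0}, deque([src])
    while dq:
        u = dq.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                dq.append(v)
    return dist

def cluster_predict(adj, ratings, user, item, n_clusters=2):
    """Cluster the trust graph around low-degree seed nodes and predict
    user's rating of item as the mean rating inside the user's cluster."""
    nodes = sorted(adj)
    # seeds: the n_clusters lowest-degree ("sparsest") nodes
    seeds = sorted(nodes, key=lambda n: len(adj[n]))[:n_clusters]
    dists = {s: bfs_dist(adj, s) for s in seeds}
    assign = {n: min(seeds, key=lambda s: dists[s].get(n, float('inf')))
              for n in nodes}
    members = [n for n in nodes if assign[n] == assign[user]]
    vals = [ratings[n][item] for n in members if item in ratings.get(n, {})]
    return sum(vals) / len(vals) if vals else None
```

On a graph of two weakly connected communities, a cold-start user with no ratings still receives a prediction from their own cluster's ratings, which is the trust-graph remedy for sparsity that the abstract describes.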
Experimental results based on three real-world datasets demonstrate that the proposed method outperforms several state-of-the-art recommender system methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23070305','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23070305"><span>Probabilistic graphlet transfer for photo cropping.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Luming; Song, Mingli; Zhao, Qi; Liu, Xiao; Bu, Jiajun; Chen, Chun</p> <p>2013-02-01</p> <p>As one of the most basic photo manipulation processes, photo cropping is widely used in the printing, graphic design, and photography industries. In this paper, we introduce graphlets (i.e., small connected subgraphs) to represent a photo's aesthetic features, and propose a probabilistic model to transfer aesthetic features from the training photo onto the cropped photo. In particular, by segmenting each photo into a set of regions, we construct a region adjacency graph (RAG) to represent the global aesthetic feature of each photo. Graphlets are then extracted from the RAGs, and these graphlets capture the local aesthetic features of the photos. Finally, we cast photo cropping as a candidate-searching procedure on the basis of a probabilistic model, and infer the parameters of the cropped photos using Gibbs sampling. The proposed method is fully automatic. 
Subjective evaluations have shown that it is preferred over a number of existing approaches.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22534912','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22534912"><span>Two-dimensional wavelet transform for reliability-guided phase unwrapping in optical fringe pattern analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Sikun; Wang, Xiangzhao; Su, Xianyu; Tang, Feng</p> <p>2012-04-20</p> <p>This paper theoretically discusses the modulus of two-dimensional (2D) wavelet transform (WT) coefficients, calculated using two frequently used 2D daughter wavelet definitions, in optical fringe pattern analysis. The discussion shows that neither is good enough to represent the reliability of the phase data. The differences between the two frequently used 2D daughter wavelet definitions in the performance of the 2D WT are also discussed. We propose a new 2D daughter wavelet definition for reliability-guided phase unwrapping of optical fringe patterns. The modulus of the advanced 2D WT coefficients, obtained using a daughter wavelet under this new definition, includes not only modulation information but also local frequency information of the deformed fringe pattern. Therefore, it can be treated as a good parameter that represents the reliability of the retrieved phase data. 
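A generic reliability-guided unwrapper shows how such a quality map is consumed: pixels are unwrapped in descending order of reliability, each relative to an already-unwrapped neighbor. This is a standard quality-guided flood fill, not the authors' implementation; any array, e.g. the WT coefficient modulus, can serve as the quality map:

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    """Unwrap a 2D phase map, visiting pixels in order of a quality map."""
    h, w = wrapped.shape
    out = np.array(wrapped, dtype=float)
    start = np.unravel_index(np.argmax(quality), quality.shape)
    visited = np.zeros((h, w), dtype=bool)
    visited[start] = True
    heap = []

    def push(i, j):
        # enqueue unvisited neighbors, most reliable first (max-heap)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < h and 0 <= b < w and not visited[a, b]:
                heapq.heappush(heap, (-quality[a, b], a, b, i, j))

    push(*start)
    while heap:
        _, a, b, i, j = heapq.heappop(heap)
        if visited[a, b]:
            continue
        # add the multiple of 2*pi that keeps the step from the
        # already-unwrapped neighbor below pi
        out[a, b] += 2*np.pi*round((out[i, j] - out[a, b]) / (2*np.pi))
        visited[a, b] = True
        push(a, b)
    return out
```

On a smooth wrapped ramp the flood fill recovers the continuous phase exactly, since every neighbor step is smaller than pi.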
Computer simulations and experiments show the validity of the proposed method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4236968','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4236968"><span>A Framework for Spatial Interaction Analysis Based on Large-Scale Mobile Phone Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Li, Weifeng; Cheng, Xiaoyun; Guo, Gaohua</p> <p>2014-01-01</p> <p>An overall understanding of spatial interaction and exact knowledge of its dynamic evolution are required in urban planning and transportation planning. This study aimed to analyze spatial interaction based on large-scale mobile phone data. This newly arisen mass dataset required a new methodology compatible with its peculiar characteristics. A three-stage framework was proposed in this paper, comprising data preprocessing, critical activity identification, and spatial interaction measurement. The proposed framework introduced frequent pattern mining and measured spatial interaction by the obtained associations. A case study of three communities in Shanghai was carried out as verification of the proposed method and a demonstration of its practical application. The spatial interaction patterns and the representative features demonstrated the rationality of the proposed framework.
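The frequent pattern mining step can be sketched with a minimal pair-counting (Apriori-style) example. The transaction layout is an assumption for illustration, not the paper's exact data model:

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Count co-occurring location pairs; pairs meeting min_support are the
    frequent patterns whose support can serve as a spatial-interaction measure."""
    counts = Counter()
    for visited in transactions:
        for pair in combinations(sorted(set(visited)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Each transaction: the set of communities one phone user appeared in that day.
days = [{"A", "B"}, {"A", "B", "C"}, {"A", "C"}, {"B", "A"}]
print(frequent_pairs(days, min_support=3))  # {('A', 'B'): 3}
```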
PMID:25435865</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..199a2061W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..199a2061W"><span>Three-phase Power Flow Calculation of Low Voltage Distribution Network Considering Characteristics of Residents Load</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Yaping; Lin, Shunjiang; Yang, Zhibin</p> <p>2017-05-01</p> <p>In the traditional three-phase power flow calculation of the low voltage distribution network, the load model is described as constant power. Since this model cannot reflect the characteristics of actual loads, the result of the traditional calculation always differs from the actual situation. In this paper, a load model in which dynamic loads (represented by air conditioners) are connected in parallel with static loads (represented by lighting) is used to describe the characteristics of residential load, and a three-phase power flow calculation model is proposed. The model includes the power balance equations of the three phases (A, B, C), the current balance equations of phase 0, and the torque balance equations of the induction motors in the air conditioners. An alternating iterative algorithm that solves the induction motor torque balance equations together with the nodal balance equations is then proposed to solve the three-phase power flow model.
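The torque-balance step of such an alternating scheme can be sketched as follows. The Kloss torque-slip formula and all numbers are textbook assumptions for illustration; the paper's motor model may differ:

```python
def kloss_torque(s, t_max, s_max):
    """Simplified induction-motor torque-slip curve (Kloss formula);
    an assumed stand-in for the paper's motor equations."""
    return 2.0 * t_max / (s / s_max + s_max / s)

def solve_slip(t_load, t_max, s_max, tol=1e-10):
    """Bisection on the stable branch 0 < s <= s_max, where torque
    rises monotonically with slip."""
    lo, hi = 1e-9, s_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kloss_torque(mid, t_max, s_max) < t_load:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# One outer iteration of the alternating scheme would recompute bus voltages,
# then update each motor's slip so its torque balances the load torque.
slip = solve_slip(t_load=30.0, t_max=50.0, s_max=0.2)
```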
The method is applied to an actual low-voltage distribution network with residential loads, and calculations for three different operating states of the air conditioners demonstrate the effectiveness of the proposed model and algorithm.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25706507','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25706507"><span>A Complete Color Normalization Approach to Histopathology Images Using Color Cues Computed From Saturation-Weighted Statistics.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Xingyu; Plataniotis, Konstantinos N</p> <p>2015-07-01</p> <p>In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Different from existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies the causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method from the aspects of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation.
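An illuminant-normalization step of the kind mentioned above can be sketched with a textbook max-RGB (white-patch) scaling. This is an assumed stand-in: the paper derives its module from a microscopic imaging model, which may differ from this simple variant:

```python
def illuminant_normalize(pixels):
    """Scale each RGB channel so the estimated illuminant becomes white.

    Uses a max-RGB (white-patch) illuminant estimate; illustrative only,
    not the paper's exact illuminant-normalization module.
    """
    peaks = [max(p[ch] for p in pixels) for ch in range(3)]
    return [tuple(p[ch] / peaks[ch] for ch in range(3)) for p in pixels]

# Toy "slide" of three RGB pixels under a colored illuminant.
slide = [(200, 120, 180), (100, 60, 90), (50, 120, 45)]
print(illuminant_normalize(slide)[0])  # brightest channel values map to 1.0
```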
As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution would be useful for mitigating the effects of color variation in pathology images on subsequent quantitative analysis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28807553','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28807553"><span>Coronary artery segmentation in X-ray angiograms using Gabor filters and differential evolution.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cervantes-Sanchez, Fernando; Cruz-Aceves, Ivan; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Cordova-Fraga, Teodoro; Aviña-Cervantes, Juan Gabriel</p> <p>2018-08-01</p> <p>Segmentation of coronary arteries in X-ray angiograms represents an essential task for computer-aided diagnosis, since it can help cardiologists in diagnosing and monitoring vascular abnormalities. Because the main disadvantages of X-ray angiograms are nonuniform illumination and weak contrast between blood vessels and the image background, different vessel enhancement methods have been introduced. In this paper, a novel method for blood vessel enhancement based on Gabor filters tuned using the optimization strategy of differential evolution (DE) is proposed.
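The DE tuning loop can be sketched in minimal form (DE/rand/1/bin). The objective here is a toy quadratic standing in for "1 − Az over the training images"; the bounds and optimum are illustrative, not the paper's Gabor parameters:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=100, seed=1):
    """Minimal DE/rand/1/bin minimizer over box bounds; in the paper's
    setting the objective would be 1 - Az for a Gabor parameter vector."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = list(pop[i])
            jrand = rng.randrange(dim)
            for j in range(dim):
                if rng.random() < cr or j == jrand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    trial[j] = min(max(v, bounds[j][0]), bounds[j][1])
            tc = objective(trial)
            if tc <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Toy stand-in for "1 - Az(params)" with optimum at (3, 0.5, 1.2):
toy = lambda p: (p[0] - 3) ** 2 + (p[1] - 0.5) ** 2 + (p[2] - 1.2) ** 2
best, best_cost = differential_evolution(toy, [(0, 10), (0, 1), (0, 5)])
```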
Because the Gabor filters are governed by three different parameters, the optimal selection of those parameters is highly desirable in order to maximize the vessel detection rate while reducing the computational cost of the training stage. To obtain the optimal set of parameters for the Gabor filters, the area under the receiver operating characteristic curve (Az) is used as the objective function. In the experimental results, the proposed method achieves Az = 0.9388 on a training set of 40 images, and on a test set of 40 images it obtains the highest performance, Az = 0.9538, compared with six state-of-the-art vessel detection methods. Finally, the proposed method achieves an accuracy of 0.9423 for vessel segmentation on the test set. In addition, the experimental results have also shown that the proposed method can be highly suitable for clinical decision support in terms of computational time and vessel segmentation performance. Copyright © 2017 Elsevier Ltd. All rights reserved.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23757556','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23757556"><span>Alternatively Constrained Dictionary Learning For Image Superresolution.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lu, Xiaoqiang; Yuan, Yuan; Yan, Pingkun</p> <p>2014-03-01</p> <p>Dictionaries are crucial in sparse coding-based algorithms for image superresolution. Sparse coding is a typical unsupervised learning method for studying the relationship between the patches of high- and low-resolution images. However, most of the sparse coding methods for image superresolution fail to simultaneously consider the geometrical structure of the dictionary and the corresponding coefficients, which may result in noticeable superresolution reconstruction artifacts. In other words, when a low-resolution image and its corresponding high-resolution image are represented in their feature spaces, the two sets of dictionaries and the obtained coefficients have intrinsic links, which have not yet been well studied. Motivated by developments in nonlocal self-similarity and manifold learning, a novel sparse coding method is reported that preserves the geometrical structure of the dictionary and the sparse coefficients of the data. Moreover, the proposed method can preserve the incoherence of dictionary entries and provide the sparse coefficients and learned dictionary from a new perspective, which have both reconstruction and discrimination properties to enhance the learning performance.
Furthermore, to utilize the model of the proposed method more effectively for single-image superresolution, this paper also proposes a novel dictionary-pair learning method, named two-stage dictionary training. Extensive experiments are carried out on a large set of images in comparison with other popular algorithms for the same purpose, and the results clearly demonstrate the effectiveness of the proposed sparse representation model and the corresponding dictionary learning algorithm.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22595208','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22595208"><span>Robust identification of transcriptional regulatory networks using a Gibbs sampler on outlier sum statistic.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gu, Jinghua; Xuan, Jianhua; Riggins, Rebecca B; Chen, Li; Wang, Yue; Clarke, Robert</p> <p>2012-08-01</p> <p>Identification of transcriptional regulatory networks (TRNs) is of significant importance in computational biology for cancer research, providing a critical building block to unravel disease pathways. However, existing methods for TRN identification suffer from the inclusion of excessive 'noise' in microarray data and false-positives in binding data, especially when applied to human tumor-derived cell line studies. More robust methods that can counteract the imperfection of data sources are therefore needed for reliable identification of TRNs in this context. In this article, we propose to establish a link between the quality of one target gene to represent its regulator and the uncertainty of its expression to represent other target genes.
Specifically, an outlier sum statistic was used to measure the aggregated evidence for regulation events between target genes and their corresponding transcription factors. A Gibbs sampling method was then developed to estimate the marginal distribution of the outlier sum statistic and hence to uncover underlying regulatory relationships. To evaluate the effectiveness of our proposed method, we compared its performance with that of an existing sampling-based method using both simulation data and yeast cell cycle data. The experimental results show that our method consistently outperforms the competing method in different settings of signal-to-noise ratio and network topology, indicating its robustness for biological applications. Finally, we applied our method to breast cancer cell line data and demonstrated its ability to extract biologically meaningful regulatory modules related to estrogen signaling and action in breast cancer. The Gibbs sampler MATLAB package is freely available at http://www.cbil.ece.vt.edu/software.htm. Contact: xuan@vt.edu. Supplementary data are available at Bioinformatics online.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017WRR....5310701B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017WRR....5310701B"><span>Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Berk, Mario; Špačková, Olga; Straub, Daniel</p> <p>2017-12-01</p> <p>The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimates. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort.
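The FORM ingredient mentioned above can be sketched with the standard Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration. The linear limit-state function below is purely illustrative, not the paper's hydrological model:

```python
import math

def hlrf(g, grad, dim, iters=50):
    """HL-RF iteration: find the design point u* (closest point to the
    origin on the limit-state surface g(u)=0 in standard normal space)."""
    u = [0.0] * dim
    for _ in range(iters):
        gu, dg = g(u), grad(u)
        norm2 = sum(d * d for d in dg)
        scale = (sum(d * x for d, x in zip(dg, u)) - gu) / norm2
        u = [scale * d for d in dg]
    beta = math.sqrt(sum(x * x for x in u))  # reliability index
    return u, beta

# Illustrative linear limit state g(u) = 5 - u1 - 2*u2 (not from the paper);
# the failure probability estimate is then Phi(-beta).
g = lambda u: 5.0 - u[0] - 2.0 * u[1]
grad = lambda u: [-1.0, -2.0]
u_star, beta = hlrf(g, grad, dim=2)
```

For a linear limit state the iteration converges in one step; nonlinear g(u) (as in a rainfall-runoff model) simply requires the gradient at each iterate.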
Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA435695','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA435695"><span>Characterization of New Materials for Photovoltaic Thin Films: Aggregation Phenomena in Self-Assembled Perylene-Based Diimides</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2005-07-21</p> <p>Thin films of these materials can be prepared by solution-based methods such as spin casting or drop casting, by self-assembly, by Langmuir-Blodgett techniques, or by electrochemical methods. Molecules containing a perylene diimide core have also been proposed for such films; in many situations, the molecules remain soluble in the absence of other ionic species.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9887E..1YA','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9887E..1YA"><span>Blood proteins analysis by Raman spectroscopy method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Artemyev, D. N.; Bratchenko, I. A.; Khristoforova, Yu.
A.; Lykina, A. A.; Myakinin, O. O.; Kuzmina, T. P.; Davydkin, I. L.; Zakharov, V. P.</p> <p>2016-04-01</p> <p>This work is devoted to studying the possibility of measuring plasma protein (albumin, globulin) concentrations using a Raman spectroscopy setup. Both blood plasma and whole blood were studied in this research. The obtained Raman spectra showed significant variation in the intensities of certain spectral bands (940, 1005, 1330, 1450, and 1650 cm-1) for the different protein fractions. Partial least squares regression analysis was used for determination of correlation coefficients. We have shown that the proposed method represents the structure and biochemical composition of major blood proteins.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21020290-modeling-bose-einstein-correlations-via-elementary-emitting-cells','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21020290-modeling-bose-einstein-correlations-via-elementary-emitting-cells"><span>Modeling Bose-Einstein correlations via elementary emitting cells</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Utyuzh, Oleg; Wilk, Grzegorz; Wlodarczyk, Zbigniew</p> <p>2007-04-01</p> <p>We propose a method of numerically modeling Bose-Einstein correlations using the notion of the elementary emitting cell (EEC). EECs are intermediary objects containing identical bosons and are assumed to be produced independently during the hadronization process. Only bosons within an EEC, which here represents a single quantum state, are subject to the effects of Bose-Einstein (BE) statistics, which forces them to follow a geometric distribution. There are no such effects between particles from different EECs.
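Sampling geometric multiplicities per independent EEC can be sketched as follows. The seeding convention (at least one boson per cell) and the parameter value are assumptions for illustration:

```python
import random

def sample_eec_multiplicities(n_cells, p, seed=7):
    """Draw each elementary emitting cell's boson count from a geometric
    distribution, as BE statistics forces within a single quantum state;
    different EECs are sampled independently."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_cells):
        n = 1  # assumption: at least the boson that seeds the cell
        while rng.random() < p:  # each extra boson joins with probability p
            n += 1
        counts.append(n)
    return counts

cells = sample_eec_multiplicities(n_cells=1000, p=0.4)
mean_mult = sum(cells) / len(cells)  # expected mean is 1/(1-p) ~ 1.67
```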
We illustrate our proposition by calculating a representative number of typical distributions and discussing their sensitivity to the EECs and their characteristics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27837909','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27837909"><span>A semi-automated Raman micro-spectroscopy method for morphological and chemical characterizations of microplastic litter.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Frère, L; Paul-Pont, I; Moreau, J; Soudant, P; Lambert, C; Huvet, A; Rinnert, E</p> <p>2016-12-15</p> <p>Every step of microplastic analysis (collection, extraction and characterization) is time-consuming, representing an obstacle to the implementation of large-scale monitoring. This study proposes a semi-automated Raman micro-spectroscopy method coupled to static image analysis that allows the screening of a large quantity of microplastics in a time-effective way with minimal machine operator intervention. The method was validated using 103 particles collected at the sea surface spiked with 7 standard plastics: morphological and chemical characterization of the particles was performed in under 3 h. The method was then applied to a larger environmental sample (n=962 particles). The identification rate was 75% and significantly decreased as a function of particle size. Microplastics represented 71% of the identified particles and significant size differences were observed: polystyrene was mainly found in the 2-5 mm range (59%), polyethylene in the 1-2 mm range (40%) and polypropylene in the 0.335-1 mm range (42%). Copyright © 2016 Elsevier Ltd.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3447359','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3447359"><span>Spongiosa Primary Development: A Biochemical Hypothesis by Turing Patterns Formations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>López-Vaca, Oscar Rodrigo; Garzón-Alvarado, Diego Alexander</p> <p>2012-01-01</p> <p>We propose a biochemical model describing the formation of the primary spongiosa architecture through a bioregulatory model involving metalloproteinase 13 (MMP13) and vascular endothelial growth factor (VEGF). It is assumed that MMP13 regulates cartilage degradation and that VEGF allows vascularization and advances the ossification front through the presence of osteoblasts. The coupling of this set of molecules is represented by reaction-diffusion equations with parameters in the Turing space, creating a stable spatiotemporal pattern that leads to the formation of the trabeculae present in the spongy tissue. Experimental evidence has shown that MMP13 regulates VEGF formation, and it is assumed that VEGF negatively regulates MMP13 formation. For the numerical solution, we used the finite element method with the Newton-Raphson method to solve the nonlinear partial differential equations. The ossification patterns obtained may represent the primary spongiosa formation during endochondral ossification.
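A two-species reaction-diffusion update of the kind described above can be sketched with an explicit finite-difference step. Schnakenberg kinetics stand in for the MMP13/VEGF terms, and the paper actually uses finite elements with Newton-Raphson rather than this explicit scheme; everything below is illustrative:

```python
def reaction_diffusion_step(u, v, du, dv, f, g, dt, dx):
    """One explicit finite-difference step of a two-species
    reaction-diffusion system on a 1D periodic grid."""
    n = len(u)
    lap = lambda w, i: (w[(i - 1) % n] - 2 * w[i] + w[(i + 1) % n]) / dx ** 2
    new_u = [u[i] + dt * (du * lap(u, i) + f(u[i], v[i])) for i in range(n)]
    new_v = [v[i] + dt * (dv * lap(v, i) + g(u[i], v[i])) for i in range(n)]
    return new_u, new_v

# Schnakenberg kinetics as an assumed stand-in for the MMP13/VEGF coupling:
a, b = 0.1, 0.9
f = lambda u, v: a - u + u * u * v
g = lambda u, v: b - u * u * v
u = [1.0 + 0.01 * (i % 2) for i in range(50)]  # steady state + perturbation
v = [b / (a + b) ** 2] * 50
for _ in range(100):
    u, v = reaction_diffusion_step(u, v, 0.01, 1.0, f, g, dt=0.01, dx=1.0)
```

With parameters in the Turing space (unequal diffusivities), such perturbations grow into a stationary spatial pattern over longer times.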
PMID:23193429</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4047992','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4047992"><span>Transactional Database Transformation and Its Application in Prioritizing Human Disease Genes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Xiang, Yang; Payne, Philip R.O.; Huang, Kun</p> <p>2013-01-01</p> <p>Binary (0,1) matrices, commonly known as transactional databases, can represent many kinds of application data, including gene-phenotype data where “1” represents a confirmed gene-phenotype relation and “0” represents an unknown relation. It is natural to ask what information is hidden behind these “0”s and “1”s. Unfortunately, recent matrix completion methods, though very effective in many cases, are less likely to infer something interesting from these (0,1)-matrices. To address this challenge, we propose IndEvi, a very succinct and effective algorithm to perform independent-evidence-based transactional database transformation. Each entry of a (0,1)-matrix is evaluated by “independent evidence” (maximal supporting patterns) extracted from the whole matrix for this entry. An entry's value, whether 0 or 1, has no effect on its own independent evidence. The experiment on a gene-phenotype database shows that our method is highly promising in ranking candidate genes and predicting unknown disease genes.
PMID:21422495</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4480776','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4480776"><span>A Method to Represent Heterogeneous Materials for Rapid Prototyping: The Matryoshka Approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lei, Shuangyan; Frank, Matthew C.; Anderson, Donald D.; Brown, Thomas D.</p> <p>2015-01-01</p> <p>Purpose The purpose of this paper is to present a new method for representing heterogeneous materials using nested STL shells, based, in particular, on the density distributions of human bones. Design/methodology/approach Nested STL shells, called Matryoshka models, are described, based on their namesake Russian nesting dolls. In this approach, polygonal models, such as STL shells, are “stacked” inside one another to represent different material regions. The Matryoshka model addresses the challenge of representing different densities and different types of bone when reverse engineering from medical images. The Matryoshka model is generated via an iterative process of thresholding the Hounsfield Unit (HU) data using computed tomography (CT), thereby delineating regions of progressively increasing bone density. These nested shells can represent regions starting with the medullary (bone marrow) canal, up through and including the outer surface of the bone. Findings The Matryoshka approach introduced can be used to generate accurate models of heterogeneous materials in an automated fashion, avoiding the challenge of hand-creating an assembly model for input to multi-material additive or subtractive manufacturing. 
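The iterative HU-thresholding idea behind the Matryoshka model can be sketched as follows. The threshold values and band names are illustrative assumptions, not the paper's calibrated values:

```python
def matryoshka_regions(hu_values, thresholds):
    """Assign each CT voxel (in Hounsfield Units) to the innermost density
    band it reaches, mimicking the iterative thresholding that delineates
    regions of progressively increasing bone density."""
    def band(hu):
        level = 0
        for t in thresholds:
            if hu >= t:
                level += 1
        return level
    return [band(hu) for hu in hu_values]

# Illustrative bands: marrow (<200), trabecular (200-700), cortical (>=700).
voxels = [-50, 150, 350, 800, 1200]
print(matryoshka_regions(voxels, thresholds=[200, 700]))  # [0, 0, 1, 2, 2]
```

Each contiguous band would then be surfaced as one nested STL shell.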
Originality/Value This paper presents a new method for describing heterogeneous materials: in this case, the density distribution in a human bone. The authors show how the Matryoshka model can be used to plan harvesting locations for creating custom rapid allograft bone implants from donor bone. An implementation of a proposed harvesting method is demonstrated, followed by a case study using subtractive rapid prototyping to harvest a bone implant from a human tibia surrogate. PMID:26120277</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28539124','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28539124"><span>Disease causality extraction based on lexical semantics and document-clause frequency from biomedical literature.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Dong-Gi; Shin, Hyunjung</p> <p>2017-05-18</p> <p>Recently, research on human disease networks has advanced and become an aid in understanding the relationships between various diseases. In most disease networks, however, the relationship between diseases has been simply represented as an association. This representation makes it difficult to identify prior diseases and their influence on posterior diseases. In this paper, we propose a causal disease network that implements disease causality through text mining on biomedical literature. To identify the causality between diseases, the proposed method includes two schemes: the first is lexicon-based causality term strength, which assigns causal strength to a variety of causality terms based on lexicon analysis. The second is frequency-based causality strength, which determines the direction and strength of causality based on document and clause frequencies in the literature.
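The frequency-based direction-and-strength idea can be sketched with a simplified scoring rule (a stand-in for the paper's exact document/clause-frequency formula):

```python
def causality_direction(count_ab, count_ba):
    """Direction and strength from asymmetric clause counts: count_ab is how
    often 'A causes B' phrasings occur in the literature, count_ba the
    reverse. The normalized-difference score is an assumed simplification."""
    total = count_ab + count_ba
    if total == 0:
        return None, 0.0
    direction = "A->B" if count_ab >= count_ba else "B->A"
    strength = abs(count_ab - count_ba) / total
    return direction, strength

print(causality_direction(30, 10))  # ('A->B', 0.5)
```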
We applied the proposed method to 6,617,833 PubMed articles and chose 195 diseases to construct a causal disease network. From all possible pairs of disease nodes in the network, 1011 causal pairs of 149 diseases were extracted. The resulting network was compared with that of a previous study. In terms of both coverage and quality, the proposed method outperformed: it identified 2.7 times more causal relations and showed higher correlation with associated diseases than the existing method. The novelty of this research is that the proposed method circumvents the time and cost limitations of testing all possible causalities in biological experiments, and it advances text mining by defining the concept of causality term strength.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007ChOpL...5..160L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007ChOpL...5..160L"><span>Finger crease pattern recognition using Legendre moments and principal component analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Luo, Rongfang; Lin, Tusheng</p> <p>2007-03-01</p> <p>The finger joint lines, defined as finger creases, and their distribution can identify a person. In this paper, we propose a new finger crease pattern recognition method based on Legendre moments and principal component analysis (PCA). After obtaining the region of interest (ROI) for each finger image in the pre-processing stage, Legendre moments under the Radon transform are applied to construct a moment feature matrix from the ROI, which greatly decreases the dimensionality of the ROI and can represent the principal components of the finger creases quite well. Then, an approach to finger crease pattern recognition is designed based on the Karhunen-Loeve (K-L) transform.
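The PCA/K-L step can be sketched with a first principal component computed by power iteration on the sample covariance. The toy data and iteration count are illustrative:

```python
def principal_component(samples, iters=200):
    """First principal component via power iteration on the sample
    covariance; a K-L transform then projects each (moment-)feature
    vector onto such axes."""
    n, dim = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(dim)]
    centered = [[s[j] - mean[j] for j in range(dim)] for s in samples]
    cov = [[sum(c[i] * c[j] for c in centered) / (n - 1) for j in range(dim)]
           for i in range(dim)]
    vec = [1.0] * dim
    for _ in range(iters):
        nxt = [sum(cov[i][j] * vec[j] for j in range(dim)) for i in range(dim)]
        norm = sum(x * x for x in nxt) ** 0.5
        vec = [x / norm for x in nxt]
    return vec

# Points spread along the diagonal: the first axis should be ~(0.707, 0.707).
pc = principal_component([(0, 0), (1, 1.1), (2, 1.9), (3, 3.2)])
```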
The method applies PCA to a moment feature matrix rather than the original image matrix to obtain the feature vector. The proposed method has been tested on a database of 824 images from 103 individuals using the nearest neighbor classifier. An accuracy of up to 98.584% was obtained when using 4 samples per class for training. The experimental results demonstrate that our proposed approach is feasible and effective in biometrics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27058283','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27058283"><span>Medical image classification using spatial adjacent histogram based on adaptive local binary patterns.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling</p> <p>2016-05-01</p> <p>Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image with the local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns in a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring these spatial relationships causes poor performance when capturing discriminative features for complex samples, such as medical images obtained by microscopy. To address this problem, in this paper we propose a novel method to improve local binary patterns by assigning an adaptive neighborhood radius for each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. 
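For reference, the standard fixed-radius (radius-1, 8-neighbour) LBP code that the adaptive variant generalizes can be computed as below; the paper's adaptive radius selection is not reproduced here.

```python
import numpy as np

# Standard LBP for a 3x3 patch: threshold the 8 neighbours against the
# centre pixel and pack the resulting bits into an 8-bit code.

def lbp_code(patch):
    """8-neighbour LBP code (radius 1) for a 3x3 patch."""
    center = patch[1, 1]
    # clockwise neighbour order starting at the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << bit
    return code
```

A patch whose neighbours are all darker than the centre maps to code 0, and one whose neighbours are all at least as bright maps to 255.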
An extensive set of evaluations is performed on four medical datasets, which shows that the proposed method significantly improves standard LBP and compares favorably with several other prevailing approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JPS...247..539R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JPS...247..539R"><span>Sensorless battery temperature measurements based on electrochemical impedance spectroscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Raijmakers, L. H. J.; Danilov, D. L.; van Lammeren, J. P. M.; Lammers, M. J. G.; Notten, P. H. L.</p> <p>2014-02-01</p> <p>A new method is proposed to measure the internal temperature of (Li-ion) batteries. Based on electrochemical impedance spectroscopy measurements, an intercept frequency (f0) can be determined which is exclusively related to the internal battery temperature. The intercept frequency is defined as the frequency at which the imaginary part of the impedance is zero (Zim = 0), i.e. where the phase shift between the battery current and voltage is absent. The advantage of the proposed method is twofold: (i) hardware temperature sensors are no longer required to monitor the battery temperature and (ii) the method does not suffer from heat transfer delays. Mathematical analysis of the equivalent electrical circuit representing the battery performance confirms that the intercept frequency decreases with rising temperatures. Impedance measurements on rechargeable Li-ion cells of various chemistries were conducted to verify the proposed method. These experiments reveal that the intercept frequency is clearly dependent on the temperature and does not depend on State-of-Charge (SoC) and aging. 
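Locating the intercept frequency from a sampled spectrum amounts to finding the zero crossing of the imaginary impedance. A minimal sketch, assuming linear interpolation between the two sample frequencies that bracket the sign change (the spectrum below is synthetic, not real cell data):

```python
import numpy as np

# Sketch: find f0 where Im(Z) = 0 by linear interpolation between the
# first pair of adjacent samples whose imaginary parts change sign.

def intercept_frequency(freqs, z_imag):
    """Return f0 where Im(Z) crosses zero, interpolated between samples."""
    freqs = np.asarray(freqs, dtype=float)
    z_imag = np.asarray(z_imag, dtype=float)
    crossings = np.where(np.diff(np.sign(z_imag)) != 0)[0]
    if crossings.size == 0:
        raise ValueError("no zero crossing of Im(Z) in the measured band")
    i = crossings[0]
    f1, f2 = freqs[i], freqs[i + 1]
    y1, y2 = z_imag[i], z_imag[i + 1]
    return f1 + (0.0 - y1) * (f2 - f1) / (y2 - y1)
```

With samples Im(Z) = -1 at 100 Hz and +1 at 200 Hz, the interpolated intercept frequency is 150 Hz.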
These impedance-based sensorless temperature measurements are therefore simple and convenient for application in a wide range of stationary, mobile and high-power devices, such as hybrid and full electric vehicles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25781876','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25781876"><span>High-resolution face verification using pore-scale facial features.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Dong; Zhou, Huiling; Lam, Kin-Man</p> <p>2015-08-01</p> <p>Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also degrades severely under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, which is robust to alignment errors, using the HR information based on pore-scale facial features. A new keypoint descriptor, namely, pore-Principal Component Analysis (PCA)-Scale Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. 
Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods, and can achieve excellent accuracy even when the faces are under large variations in expression and pose.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4773029','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4773029"><span>Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang</p> <p>2015-01-01</p> <p>Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes, which generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the estimated representation coefficients in the image domain may not stay optimal for the label fusion. To overcome this dilemma, we propose a novel label fusion framework that makes the weighting coefficients eventually optimal for label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encodes the transition from the patch representation coefficients in the image domain to the optimal weights for label fusion. Our proposed framework is general and can augment the label fusion performance of the current state-of-the-art methods. 
In our experiments, we apply our proposed method to hippocampus segmentation on the ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with a single-layer dictionary. PMID:26942233</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26942233','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26942233"><span>Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang</p> <p>2015-10-01</p> <p>Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes, which generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the estimated representation coefficients in the image domain may not stay optimal for the label fusion. To overcome this dilemma, we propose a novel label fusion framework that makes the weighting coefficients eventually optimal for label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encodes the transition from the patch representation coefficients in the image domain to the optimal weights for label fusion. Our proposed framework is general and can augment the label fusion performance of the current state-of-the-art methods. 
In our experiments, we apply our proposed method to hippocampus segmentation on the ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with a single-layer dictionary.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008MCM....44..309A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008MCM....44..309A"><span>Design equations for the assessment and FRP-strengthening of reinforced rectangular concrete columns under combined biaxial bending and axial loads</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Alessandri, S.; Monti, G.</p> <p>2008-05-01</p> <p>A simple procedure is proposed for the assessment of reinforced rectangular concrete columns under combined biaxial bending and axial loads and for the design of a correct amount of FRP-strengthening for underdesigned concrete sections. Approximate closed-form equations are developed based on the load contour method originally proposed by Bresler for reinforced concrete sections. The 3D failure surface is approximated along its contours, at a constant axial load, by means of equations given as the sum of the acting-to-resisting moment ratios along the principal axes of the section, each raised to a power depending on the axial load, the steel reinforcement ratio, and the section shape. The method is extended to FRP-strengthened sections. Moreover, to make it possible to apply the load contour method in a more practical way, simple closed-form equations are developed for rectangular reinforced concrete sections with two-way steel reinforcement and FRP strengthening on each side. 
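A Bresler-type load contour check of the kind described above can be sketched as follows; in the paper the exponent depends on axial load, reinforcement ratio, and section shape, so here it is simply passed in as an assumed known value.

```python
# Sketch of a load contour (Bresler-type) interaction check for biaxial
# bending: the demand point (mx, my) lies inside the failure contour when
# (|mx|/mrx)^beta + (|my|/mry)^beta <= 1. The exponent beta is treated as
# a given input here; the paper derives it from section properties.

def load_contour_ok(mx, my, mrx, mry, beta):
    """True if the biaxial moment demand lies inside the interaction contour."""
    return (abs(mx) / mrx) ** beta + (abs(my) / mry) ** beta <= 1.0
```

For instance, with resisting moments of 100 about each axis and beta = 2, a demand of (50, 50) is safely inside the contour, while (90, 90) with beta = 1 is not.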
A comparison between the proposed approach and the fiber method (which is considered exact) shows that the simplified equations correctly represent the section interaction diagram.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017RMRE...50.2119D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017RMRE...50.2119D"><span>Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.</p> <p>2017-08-01</p> <p>While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computational cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is performed by considering a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function in representing the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. 
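The MCS baseline that the response-surface approach is benchmarked against simply counts how often the limit state g(x) <= 0 is violated over random samples. A minimal sketch with a toy linear limit state (not the Sumela slope model):

```python
import numpy as np

# Sketch of a Monte Carlo estimate of the probability of failure:
# draw samples of the random inputs, evaluate the limit state function g,
# and count the fraction of samples with g(x) <= 0 (failure).

def mcs_prob_failure(g, sampler, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = sampler(rng, n)
    return float(np.mean(g(x) <= 0.0))

# toy example: resistance R ~ N(10, 1) against a fixed load S = 10,
# so the true probability of failure is 0.5
pf = mcs_prob_failure(lambda x: x - 10.0,
                      lambda rng, n: rng.normal(10.0, 1.0, n))
```

The estimate converges at the usual 1/sqrt(n) Monte Carlo rate, which is exactly why a cheap response-surface surrogate is attractive when each g-evaluation is a full numerical slope simulation.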
The proposed methodology is found to be 72% more efficient than MCS, at the cost of a 24% error in accuracy.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JARS...10d6014Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JARS...10d6014Z"><span>Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang</p> <p>2016-10-01</p> <p>Considering the inevitable obstacles faced by the pixel-based clustering 
methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI to obtain meaningful objects. Second, a distance reweighted mass center learning model is presented to extract the representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, it is natural to apply the SSC algorithm to the HSI. Faced with the high correlation among the hyperspectral objects, a weighting scheme is adopted to ensure that the highly correlated objects are preferred in the procedure of sparse representation, to reduce the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, obtaining high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. 
The experimental results show that the proposed method clearly improves the clustering performance with respect to the other state-of-the-art clustering methods, and it significantly reduces the computational time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28108373','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28108373"><span>Left ventricle segmentation via two-layer level sets with circular shape constraint.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Cong; Wu, Weiguo; Su, Yuanqi; Zhang, Shaoxiang</p> <p>2017-05-01</p> <p>This paper proposes a circular shape constraint and a novel two-layer level set method for the segmentation of the left ventricle (LV) from short-axis magnetic resonance images without training any shape models. Since the shape of the LV throughout the apex-base axis is close to a ring, we propose a circle fitting term in the level set framework to detect the endocardium. The circle fitting term penalizes the deviation of the evolving contour from its fitting circle, and thereby handles issues in LV segmentation quite well, especially the presence of the outflow tract in basal slices and the intensity overlap between TPM and the myocardium. To extract the whole myocardium, the circle fitting term is incorporated into the two-layer level set method. The endocardium and epicardium are respectively represented by two specified level contours of the level set function, which are evolved by an edge-based and a region-based active contour model. The proposed method has been quantitatively validated on the public data set from the MICCAI 2009 challenge on LV segmentation. Experimental results and comparisons with the state-of-the-art demonstrate the accuracy and robustness of our method. Copyright © 2017 Elsevier Inc. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26440445','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26440445"><span>Detecting glaucomatous change in visual fields: Analysis with an optimization framework.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher</p> <p>2015-12-01</p> <p>Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. 
This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ISPAr42.3..539H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ISPAr42.3..539H"><span>Novel Hyperspectral Anomaly Detection Methods Based on Unsupervised Nearest Regularized Subspace</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hou, Z.; Chen, Y.; Tan, K.; Du, P.</p> <p>2018-04-01</p> <p>Anomaly detection has been of great interest in hyperspectral imagery analysis. Most conventional anomaly detectors merely take advantage of spectral and spatial information within neighboring pixels. In this paper, two methods, the Unsupervised Nearest Regularized Subspace-based with Outlier Removal Anomaly Detector (UNRSORAD) and Local Summation UNRSORAD (LSUNRSORAD), are proposed, based on the concept that each pixel in the background can be approximately represented by its spatial neighborhood, while anomalies cannot. Using a dual window, each testing pixel is approximated by a linear combination of its surrounding data. The existence of outliers in the dual window will affect detection accuracy, so the proposed detectors remove outlier pixels that are significantly different from the majority of pixels. To make full use of the local spatial distribution information in the neighborhood of each pixel under test, we adopt a local summation dual-window sliding strategy. The residual image is constituted by subtracting the predicted background from the original hyperspectral imagery, and anomalies can be detected in the residual image. 
Experimental results show that the proposed methods have greatly improved the detection accuracy compared with other traditional detection methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29772850','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29772850"><span>A Framework of Covariance Projection on Constraint Manifold for Data Fusion.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bakr, Muhammad Abu; Lee, Sukhan</p> <p>2018-05-17</p> <p>A general framework of data fusion is presented based on projecting the probability distribution of true states and measurements around the predicted states and actual measurements onto the constraint manifold. The constraint manifold represents the constraints to be satisfied among true states and measurements, which is defined in the extended space with all the redundant sources of data, such as state predictions and measurements, considered as independent variables. By a general framework, we mean that it is able to fuse any correlated data sources while directly incorporating constraints and identifying inconsistent data without any prior information. The proposed method, referred to here as the Covariance Projection (CP) method, provides an unbiased and optimal solution in the sense of minimum mean square error (MMSE), if the projection is based on the minimum weighted distance on the constraint manifold. The proposed method not only offers a generalization of the conventional formula for handling constraints and data inconsistency, but also provides new insight into data fusion from a geometric-algebraic point of view. 
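The "conventional formula" that the CP method generalizes is the classical inverse-covariance (MMSE) fusion of redundant estimates. A simplified sketch of that baseline, for uncorrelated sources and no constraints, is below; it is not the CP projection onto a constraint manifold itself.

```python
import numpy as np

# Classical MMSE fusion of two unbiased estimates x1, x2 of the same state,
# with covariances p1, p2, by inverse-covariance weighting. This is the
# conventional baseline that the Covariance Projection method generalizes.

def fuse(x1, p1, x2, p2):
    """Inverse-covariance weighted fusion of two estimates."""
    p1_inv, p2_inv = np.linalg.inv(p1), np.linalg.inv(p2)
    p = np.linalg.inv(p1_inv + p2_inv)        # fused covariance
    x = p @ (p1_inv @ x1 + p2_inv @ x2)       # fused estimate
    return x, p

# two scalar estimates with equal variance: the fusion averages them
# and halves the variance
x, p = fuse(np.array([1.0]), np.array([[1.0]]),
            np.array([3.0]), np.array([[1.0]]))
```

With equal covariances the fused estimate is the plain average and the fused variance is half of each input's, which matches the scalar intuition.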
Simulation results are provided to show the effectiveness of the proposed method in handling constraints and data inconsistency.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009JAMDS...3..378K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009JAMDS...3..378K"><span>Discrete State Change Model of Manufacturing Quality to Aid Assembly Process Design</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Koga, Tsuyoshi; Aoyama, Kazuhiro</p> <p></p> <p>This paper proposes a representation model of the quality state change in an assembly process that can be used in a computer-aided process design system. In order to formalize the state change of the manufacturing quality in the assembly process, the functions, operations, and quality changes in the assembly process are represented as a network model that can simulate discrete events. This paper also develops a design method for the assembly process. The design method calculates the space of quality state change and outputs a better assembly process (better operations and better sequences) that can be used to obtain the intended quality state of the final product. A computational redesigning algorithm of the assembly process that considers the manufacturing quality is developed. The proposed method can be used to design an improved manufacturing process by simulating the quality state change. A prototype system for planning an assembly process is implemented and applied to the design of an auto-breaker assembly process. 
The result of the design example indicates that the proposed assembly process planning method outputs a better manufacturing scenario based on the simulation of the quality state change.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28697740','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28697740"><span>Incorporating biological information in sparse principal component analysis with application to genomic data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Ziyi; Safo, Sandra E; Long, Qi</p> <p>2017-07-11</p> <p>Sparse principal component analysis (PCA) is a popular tool for dimensionality reduction, pattern recognition, and visualization of high-dimensional data. It has been recognized that complex biological mechanisms occur through concerted relationships of multiple genes working in networks that are often represented by graphs. Recent work has shown that incorporating such biological information improves feature selection and prediction performance in regression analysis, but there has been limited work on extending this approach to PCA. In this article, we propose two new sparse PCA methods, called Fused and Grouped sparse PCA, that enable incorporation of prior biological information in variable selection. Our simulation studies suggest that, compared to existing sparse PCA methods, the proposed methods achieve higher sensitivity and specificity when the graph structure is correctly specified, and are fairly robust to misspecified graph structures. Application to a glioblastoma gene expression dataset identified pathways that are suggested in the literature to be related to glioblastoma. 
The proposed Fused and Grouped sparse PCA methods can effectively incorporate prior biological information in variable selection, leading to improved feature selection and more interpretable principal component loadings, and potentially providing insights into the molecular underpinnings of complex diseases.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29035333','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29035333"><span>Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis</p> <p>2017-10-16</p> <p>Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure to characterize complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. 
The best interpolation performance in terms of mean absolute error was achieved by the SVR with Mercer's kernel given by either the Mahalanobis spatial-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5677420','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5677420"><span>Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.</p> <p>2017-01-01</p> <p>Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure to characterize complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer’s kernels. 
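Kernel-based spatio-temporal interpolation of this kind can be sketched compactly. As a simplified stand-in for SVR with a Mercer kernel, the sketch below uses kernel ridge regression with an RBF kernel on synthetic sample points; the kernel choice, regularization, and data are assumptions, not the paper's setup.

```python
import numpy as np

# Sketch: interpolate a scalar quality variable at query locations from
# scattered spatio-temporal samples, via kernel ridge regression with an
# RBF (Gaussian) Mercer kernel. Illustrative stand-in for the SVR model.

def rbf_kernel(a, b, gamma=1.0):
    """Gram matrix of the RBF kernel between row-vector point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit_predict(x_train, y_train, x_test, lam=1e-3, gamma=1.0):
    """Fit dual coefficients on the training Gram matrix, then predict."""
    k = rbf_kernel(x_train, x_train, gamma)
    alpha = np.linalg.solve(k + lam * np.eye(len(x_train)), y_train)
    return rbf_kernel(x_test, x_train, gamma) @ alpha

# three (space, time)-like sample points with known values
x_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y_train = np.array([0.0, 1.0, 1.0])
pred = kernel_ridge_fit_predict(x_train, y_train, np.array([[0.0, 0.0]]))
```

With a small regularizer, the prediction at a training location nearly reproduces the observed value there.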
The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was achieved by the SVR with Mercer’s kernel given by either the Mahalanobis spatial-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem. PMID:29035333</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9897E..0PP','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9897E..0PP"><span>Automatic detection and classification of obstacles with applications in autonomous mobile robots</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ponomaryov, Volodymyr I.; Rosas-Miranda, Dario I.</p> <p>2016-04-01</p> <p>A hardware implementation of the automatic detection and classification of objects that can represent an obstacle for an autonomous mobile robot, using stereo vision algorithms, is presented. We propose and evaluate a new method to detect and classify objects for a mobile robot in outdoor conditions. This method is divided into two parts: the first is the object detection step, based on the distance from the objects to the camera and a BLOB analysis. The second part is the classification step, based on visual primitives and an SVM classifier. The proposed method is executed on a GPU in order to reduce the processing time. 
This is performed with the help of hardware based on multi-core processors and a GPU platform, using an NVIDIA GeForce GT640 graphics card and Matlab on a PC running Windows 10.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28541227','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28541227"><span>Manifold Preserving: An Intrinsic Approach for Semisupervised Distance Metric Learning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ying, Shihui; Wen, Zhijie; Shi, Jun; Peng, Yaxin; Peng, Jigen; Qiao, Hong</p> <p>2017-05-18</p> <p>In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model by considering the metric information of inner classes and interclasses. In this model, an adaptive parameter is designed to balance the inner metrics and intermetrics by using the data structure. Second, we convert the model to a minimization problem whose variable is a symmetric positive-definite matrix. Third, in implementation, we deduce an intrinsic steepest descent method, which assures that the metric matrix is strictly symmetric positive-definite at each iteration, by exploiting the manifold structure of the symmetric positive-definite matrix manifold. Finally, we test the proposed algorithm on conventional data sets, and compare it with four other representative methods. 
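The key property of such an intrinsic descent step, that every iterate stays strictly symmetric positive-definite, can be sketched as follows. This is a generic exponential-map update on the SPD manifold with a placeholder random gradient, a minimal illustration rather than the authors' full algorithm (their loss, adaptive parameter, and step-size rule are not reproduced here).

```python
import numpy as np

def sym(A):
    return 0.5 * (A + A.T)

def eig_apply(S, fn):
    """Apply a scalar function fn to the eigenvalues of a symmetric matrix S."""
    w, V = np.linalg.eigh(S)
    return (V * fn(w)) @ V.T

def intrinsic_step(M, G, lr=0.1):
    """One steepest-descent step on the SPD manifold.

    M: current SPD metric matrix; G: symmetric Euclidean gradient.
    The update M^(1/2) expm(-lr * M^(-1/2) G M^(-1/2)) M^(1/2) moves along
    a geodesic-like curve and keeps the iterate exactly SPD at every step.
    """
    R = eig_apply(M, lambda w: 1.0 / np.sqrt(w))   # M^(-1/2)
    Rt = eig_apply(M, np.sqrt)                     # M^(1/2)
    inner = eig_apply(sym(-lr * R @ G @ R), np.exp)
    return sym(Rt @ inner @ Rt)

rng = np.random.default_rng(1)
M = np.eye(3) + 0.1 * sym(rng.standard_normal((3, 3)))  # start near identity
G = sym(rng.standard_normal((3, 3)))                    # placeholder gradient
M_next = intrinsic_step(M, G)
```

Because the new matrix is a congruence of a matrix exponential (always positive-definite), no projection or eigenvalue clipping is needed, which is the practical advantage of the intrinsic formulation over a plain Euclidean update.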
The numerical results validate that the proposed method significantly improves the classification with the same computational efficiency.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3947791','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3947791"><span>Interval Neutrosophic Sets and Their Application in Multicriteria Decision Making Problems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhang, Hong-yu; Wang, Jian-qiang; Chen, Xiao-hong</p> <p>2014-01-01</p> <p>As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. Interval neutrosophic sets (INSs) have been proposed to address issues with a set of numbers in the real unit interval, rather than a single specific number. However, few reliable operations for INSs exist, and INS aggregation operators and decision making methods are likewise lacking. For this purpose, the operations for INSs are defined and a comparison approach is put forward in this paper, based on the related research on interval valued intuitionistic fuzzy sets (IVIFSs). On the basis of the operations and comparison approach, two interval neutrosophic number aggregation operators are developed. Then, a method for multicriteria decision making problems is explored by applying the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method. 
PMID:24695916</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24695916','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24695916"><span>Interval neutrosophic sets and their application in multicriteria decision making problems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Hong-yu; Wang, Jian-qiang; Chen, Xiao-hong</p> <p>2014-01-01</p> <p>As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. Interval neutrosophic sets (INSs) have been proposed to address issues with a set of numbers in the real unit interval, rather than a single specific number. However, few reliable operations for INSs exist, and INS aggregation operators and decision making methods are likewise lacking. For this purpose, the operations for INSs are defined and a comparison approach is put forward in this paper, based on the related research on interval valued intuitionistic fuzzy sets (IVIFSs). On the basis of the operations and comparison approach, two interval neutrosophic number aggregation operators are developed. Then, a method for multicriteria decision making problems is explored by applying the aggregation operators. 
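A weighted-average aggregation of the kind described can be sketched as follows. The formulas mirror the standard IVIFS-style weighted-average form (truth aggregated as 1 minus a weighted product of complements; indeterminacy and falsity as weighted products), which is an illustrative assumption and not necessarily the exact operators defined in the paper.

```python
from math import prod

def inwa(ins_numbers, weights):
    """Interval neutrosophic weighted average (illustrative form).

    Each number is ((TL, TU), (IL, IU), (FL, FU)) with all entries in [0, 1],
    giving the truth, indeterminacy, and falsity intervals; weights sum to 1.
    """
    def truth(k):   # k = 0 for lower bound, 1 for upper bound
        return 1 - prod((1 - n[0][k]) ** w for n, w in zip(ins_numbers, weights))
    def mul(idx, k):  # idx = 1 indeterminacy, 2 falsity
        return prod(n[idx][k] ** w for n, w in zip(ins_numbers, weights))
    return ((truth(0), truth(1)), (mul(1, 0), mul(1, 1)), (mul(2, 0), mul(2, 1)))

# Two interval neutrosophic numbers rating one alternative on two criteria.
a = ((0.4, 0.5), (0.2, 0.3), (0.3, 0.4))
b = ((0.6, 0.7), (0.1, 0.2), (0.2, 0.3))
agg = inwa([a, b], [0.5, 0.5])
```

Note the operator is idempotent: aggregating identical numbers returns that number, a sanity check commonly used for operators of this family.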
In addition, an example is provided to illustrate the application of the proposed method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27918921','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27918921"><span>PrAS: Prediction of amidation sites using multiple feature extraction.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Tong; Zheng, Wei; Wuyun, Qiqige; Wu, Zhenfeng; Ruan, Jishou; Hu, Gang; Gao, Jianzhao</p> <p>2017-02-01</p> <p>Amidation plays an important role in a variety of pathological processes and serious diseases like neural dysfunction and hypertension. However, identification of protein amidation sites through traditional experimental methods is time consuming and expensive. In this paper, we propose a novel predictor for Prediction of Amidation Sites (PrAS), which is the first software package for academic users. The method incorporates four representative feature types: position-based features, physicochemical and biochemical property features, predicted structure-based features, and evolutionary information features. A novel feature selection method, positive contribution feature selection, was proposed to optimize the features. PrAS achieved an AUC of 0.96, accuracy of 92.1%, sensitivity of 81.2%, specificity of 94.9% and MCC of 0.76 on the independent test set. PrAS is freely available at https://sourceforge.net/p/praspkg. Copyright © 2016 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22251820-estimating-nonrigid-motion-from-inconsistent-intensity-robust-shape-features','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22251820-estimating-nonrigid-motion-from-inconsistent-intensity-robust-shape-features"><span>Estimating nonrigid motion from inconsistent intensity with robust shape features</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Liu, Wenyang; Ruan, Dan, E-mail: druan@mednet.ucla.edu; Department of Radiation Oncology, University of California, Los Angeles, California 90095</p> <p>2013-12-15</p> <p>Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. 
Results: To establish the working principle, realistic 2D and 3D images were subjected to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. Conclusions: The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. 
Further clinical assessment and validation are being performed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28123006','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28123006"><span>Conjoint representation of texture ensemble and location in the parahippocampal place area.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Park, Jeongho; Park, Soojin</p> <p>2017-04-01</p> <p>Texture provides crucial information about the category or identity of a scene. Nonetheless, not much is known about how the texture information in a scene is represented in the brain. Previous studies have shown that the parahippocampal place area (PPA), a scene-selective part of visual cortex, responds to simple patches of texture ensemble. However, in natural scenes textures exist in spatial context within a scene. Here we tested two hypotheses that make different predictions on how textures within a scene context are represented in the PPA. The Texture-Only hypothesis suggests that the PPA represents texture ensemble (i.e., the kind of texture) as is, irrespective of its location in the scene. On the other hand, the Texture and Location hypothesis suggests that the PPA represents texture and its location within a scene (e.g., ceiling or wall) conjointly. We tested these two hypotheses across two experiments, using different but complementary methods. In experiment 1, by using multivoxel pattern analysis (MVPA) and representational similarity analysis, we found that the representational similarity of the PPA activation patterns was significantly explained by the Texture-Only hypothesis but not by the Texture and Location hypothesis. 
In experiment 2, using a repetition suppression paradigm, we found no repetition suppression for scenes that had the same texture ensemble but differed in location (supporting the Texture and Location hypothesis). On the basis of these results, we propose a framework that reconciles contrasting results from MVPA and repetition suppression and draw conclusions about how texture is represented in the PPA. NEW & NOTEWORTHY This study investigates how the parahippocampal place area (PPA) represents texture information within a scene context. We claim that texture is represented in the PPA at multiple levels: the texture ensemble information at the across-voxel level and the conjoint information of texture and its location at the within-voxel level. The study proposes a working hypothesis that reconciles contrasting results from multivoxel pattern analysis and repetition suppression, suggesting that the methods are complementary to each other but not necessarily interchangeable. Copyright © 2017 the American Physiological Society.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22653961-tu-ab-tumor-segmentation-fusion-multi-tracer-pet-images-using-copula-based-statistical-methods','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22653961-tu-ab-tumor-segmentation-fusion-multi-tracer-pet-images-using-copula-based-statistical-methods"><span>TU-AB-202-11: Tumor Segmentation by Fusion of Multi-Tracer PET Images Using Copula Based Statistical Methods</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Lapuyade-Lahorgue, J; Ruan, S; Li, H</p> <p></p> <p>Purpose: Multi-tracer PET imaging is getting more attention in radiotherapy by providing additional tumor volume information such as glucose and oxygenation. 
However, automatic PET-based tumor segmentation is still a very challenging problem. We propose a statistical fusion approach to jointly segment tumor sub-areas from the FDG and FMISO PET images of the two tracers. Methods: Non-standardized Gamma distributions are convenient for modeling intensity distributions in PET. As a serious correlation exists in multi-tracer PET images, we propose a new fusion method based on copulas, which are capable of representing the dependency between different tracers. The Hidden Markov Field (HMF) model is used to represent the spatial relationship between PET image voxels and the statistical dynamics of intensities for each modality. Real PET images of five patients with FDG and FMISO are used to evaluate our method quantitatively and qualitatively. A comparison between individual and multi-tracer segmentations was conducted to show the advantages of the proposed fusion method. Results: The segmentation results show that fusion with a Gaussian copula achieves a high Dice coefficient of 0.84, compared to 0.54 and 0.3 for monomodal segmentation results based on individual segmentation of the FDG and FMISO PET images. In addition, high correlation coefficients (0.75 to 0.91) for the Gaussian copula across all five test patients indicate the dependency between tumor regions in the multi-tracer PET images. Conclusion: This study shows that using multi-tracer PET imaging can efficiently improve the segmentation of tumor regions where hypoxia and glucidic consumption are present at the same time. Introducing copulas to model the dependency between two tracers can simultaneously take into account information from both tracers and deal with two pathological phenomena. 
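The copula idea, coupling Gamma-distributed tracer intensities through a Gaussian dependence structure, can be sketched as follows. This is a numpy-only sampling illustration with hypothetical marginal parameters, using empirical ranks in place of exact inverse CDFs; it shows the dependence mechanism only and is not the authors' HMF segmentation model.

```python
import numpy as np

def gaussian_copula_gamma(n, rho, shapes, scales, seed=0):
    """Draw n pairs with Gamma marginals whose dependence follows a
    Gaussian copula with correlation rho (empirical-rank sketch)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    z = rng.standard_normal((n, 2)) @ L.T          # correlated normals
    out = np.empty((n, 2))
    for j in range(2):
        ranks = z[:, j].argsort().argsort()        # copula enters via ranks
        marg = np.sort(rng.gamma(shapes[j], scales[j], size=n))
        out[:, j] = marg[ranks]                    # impose Gamma marginal
    return out

# Toy stand-ins for FDG / FMISO intensity distributions (hypothetical params).
xy = gaussian_copula_gamma(5000, rho=0.8, shapes=(2.0, 3.0), scales=(1.0, 0.5))
```

Each column keeps its own Gamma marginal while the joint behavior carries the copula's correlation, which is exactly the separation of marginals from dependency that motivates the fusion method.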
Future work will consider other copula families, such as spherical and Archimedean copulas, and will aim to eliminate the partial volume effect by considering dependency between neighboring voxels.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA057098','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA057098"><span>Microbiology of Fresh Comminuted Turkey Meat</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1976-04-12</p> <p>823 Reprinted from J. Milk Food Technol. Vol. 39. No. 12. Pages 823-829 (December 1976) Copyright 1976, International Association of Milk, Food, and...most probable number (MPN) method (13). niacin, and riboflavin (29). Canadian officials have proposed microbial standards The literature contains...onto Mueller-Hinton agar plates containing 5% various retail markets in the San Francisco Bay Area. They were defibrinated sheep blood. Representative</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED033847.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED033847.pdf"><span>[Geometry Through Symmetry, Cambridge Conference on School Mathematics Feasibility Study No. 32.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Friedman, Bernard</p> <p></p> <p>These materials were written for the use of a class of eighth grade high ability students in a four week course sponsored by Educational Services Incorporated on the Stanford campus. They represent a practical response to the proposal by the Cambridge Conference of 1963 that geometry be taught by vector space methods. 
Instead of using vector…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=conjunct&pg=3&id=EJ570576','ERIC'); return false;" href="https://eric.ed.gov/?q=conjunct&pg=3&id=EJ570576"><span>A Symbolic Approach Using Feature Construction Capable of Acquiring Information/Knowledge for Building Expert Systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Major, Raymond L.</p> <p>1998-01-01</p> <p>Presents a technique for developing a knowledge-base of information to use in an expert system. Proposed approach employs a popular machine-learning algorithm along with a method for forming a finite number of features or conjuncts of at most n primitive attributes. Illustrates this procedure by examining qualitative information represented in a…</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28085812','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28085812"><span>Fast generation of computer-generated holograms using wavelet shrinkage.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shimobaba, Tomoyoshi; Ito, Tomoyoshi</p> <p>2017-01-09</p> <p>Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/42464','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/42464"><span>Results of community deliberation about social impacts of ecological restoration: comparing public input of self-selected versus actively engaged community members</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Charles C. Harris; Erik A. Nielsen; Dennis R. Becker; Dale J. Blahna; William J. 
McLaughlin</p> <p>2012-01-01</p> <p>Participatory processes for obtaining residents' input about community impacts of proposed environmental management actions have long raised concerns about who participates in public involvement efforts and whose interests they represent. This study explored methods of broad-based involvement and the role of deliberation in social impact assessment. Interactive...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26512675','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26512675"><span>An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Zhiyuan; Wang, Changhui</p> <p>2015-10-23</p> <p>In this paper, a new method for mass air flow (MAF) sensor error compensation is developed, with online updating of the error map (or lookup table) that arises from installation and aging in a diesel engine. Since the MAF sensor error depends on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and the parameter vector using a membership function. By combining the 2D map regression model with the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. 
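The map-as-dot-product idea above can be sketched as follows: bilinear interpolation over a 2D grid is rewritten as a membership-weight regression vector dotted with a parameter vector of node values, which is what makes the map parameters estimable by a linear adaptive observer. The grid axes, the toy error surface, and all numeric values below are hypothetical.

```python
import numpy as np

def bilinear_regressor(x, y, xg, yg):
    """Membership-weight vector w such that map(x, y) = w @ theta,
    where theta holds the map's node values (row-major over xg x yg)."""
    i = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    j = np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2)
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])     # position within the cell
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    w = np.zeros((len(xg), len(yg)))
    w[i, j] = (1 - tx) * (1 - ty)              # four nonzero memberships
    w[i + 1, j] = tx * (1 - ty)
    w[i, j + 1] = (1 - tx) * ty
    w[i + 1, j + 1] = tx * ty
    return w.ravel()

# Hypothetical axes: fuel injection quantity (mg/stroke) and engine speed (rpm).
fuel = np.linspace(0.0, 50.0, 6)
speed = np.linspace(800.0, 4000.0, 9)
# Toy "sensor error" surface stored as the parameter vector theta.
theta = np.add.outer(0.01 * fuel, 0.001 * speed).ravel()
err = bilinear_regressor(23.0, 1900.0, fuel, speed) @ theta
```

Because the regressor is linear in `theta`, an adaptive law can update the map nodes online from the scalar residual, with only the four active cell corners receiving nonzero excitation at any operating point.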
The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26529763','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26529763"><span>Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu</p> <p>2016-01-01</p> <p>The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. 
Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3948467','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3948467"><span>Link-Based Similarity Measures Using Reachability Vectors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yoon, Seok-Ho; Kim, Ji-Soo; Ryu, Minsoo; Choi, Ho-Jin</p> <p>2014-01-01</p> <p>We present a novel approach for computing link-based similarities among objects accurately by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link-based similarities that does not suffer from these problems. In the proposed approach, each target object is represented by a vector. Each element of the vector corresponds to one of the objects in the given data, and the value of each element denotes the weight for the corresponding object. As this weight value, we propose to utilize the probability of reaching the specific object from the target object, computed using the “Random Walk with Restart” strategy. Then, we define the similarity between two objects as the cosine similarity of the two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, with respect to two kinds of data sets, scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures. 
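The reachability-vector construction can be sketched as follows: a minimal power-iteration version of Random Walk with Restart on a toy graph, followed by cosine similarity of the resulting vectors. The graph, restart probability, and iteration count are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def reachability_vector(adj, start, restart=0.15, iters=200):
    """Random Walk with Restart: steady-state visit probabilities of a
    walk that jumps back to `start` with probability `restart`."""
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transitions
    e = np.zeros(len(adj))
    e[start] = 1.0
    r = e.copy()
    for _ in range(iters):
        r = (1 - restart) * (P.T @ r) + restart * e
    return r

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Path graph 0-1-2-3: nearby nodes should get more similar vectors.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
r0, r1, r3 = (reachability_vector(adj, s) for s in (0, 1, 3))
```

Adjacent nodes 0 and 1 end up with more similar reachability vectors than the distant pair 0 and 3, which is the behavior the similarity measure relies on.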
PMID:24701188</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JEI....26f3019V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JEI....26f3019V"><span>Significance of perceptually relevant image decolorization for scene classification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl</p> <p>2017-11-01</p> <p>Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. 
Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012AIPC.1487...95W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012AIPC.1487...95W"><span>Using analytic hierarchy process approach in ontological multicriterial decision making - Preliminary considerations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wasielewska, K.; Ganzha, M.</p> <p>2012-10-01</p> <p>In this paper we consider combining ontologically demarcated information with Saaty's Analytic Hierarchy Process (AHP) [1] for the multicriterial assessment of offers during contract negotiations. The context for the proposal is provided by the Agents in Grid project (AiG; [2]), which aims at the development of an agent-based infrastructure for efficient resource management in the Grid. In the AiG project, software agents representing users can either (1) join a team and earn money, or (2) find a team to execute a job. Moreover, agents form teams, whose managers negotiate the terms of potential collaboration with clients and workers. Here, ontologically described contracts (Service Level Agreements) are the results of autonomous multiround negotiations. Therefore, given the relatively complex nature of the negotiated contracts, multicriterial assessment of proposals plays a crucial role. The AHP method is based on pairwise comparisons of criteria and relies on the judgement of a panel of experts. It measures how well an offer serves the objective of a decision maker. 
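The AHP weighting step can be sketched as follows: criterion weights are extracted from a pairwise comparison matrix via power iteration toward its principal eigenvector. The criteria and comparison values are hypothetical examples for a contract offer, not taken from the paper.

```python
import numpy as np

def ahp_priorities(A, iters=100):
    """Priority weights from a pairwise comparison matrix A, where
    A[i, j] states how strongly criterion i is preferred over j.
    Power iteration converges to the (normalized) principal eigenvector."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= v.sum()
    return v

# Hypothetical criteria for assessing an offer: price, deadline, reliability.
# Price is moderately preferred over deadline (3) and strongly over reliability (5).
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
w = ahp_priorities(A)   # weights sum to 1; price receives the largest weight
```

An offer's overall score is then the weighted sum of its per-criterion ratings using `w`, which is how the pairwise expert judgements translate into a single assessment of each contract proposal.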
In this paper, we propose how the AHP method can be used to assess ontologically described contract proposals.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006AGUFM.S52B..04D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006AGUFM.S52B..04D"><span>Toward the Reliability of Fault Representation Methods in Finite Difference Schemes for Simulation of Shear Rupture Propagation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dalguer, L. A.; Day, S. M.</p> <p>2006-12-01</p> <p>Accuracy in finite difference (FD) solutions to spontaneous rupture problems is controlled principally by the scheme used to represent the fault discontinuity, and not by the grid geometry used to represent the continuum. We have numerically tested three fault representation methods: the Thick Fault (TF) proposed by Madariaga et al. (1998), the Stress Glut (SG) described by Andrews (1999), and the Staggered-Grid Split-Node (SGSN) method proposed by Dalguer and Day (2006), each implemented in a fourth-order velocity-stress staggered-grid (VSSG) FD scheme. The TF and the SG methods approximate the discontinuity through inelastic increments to stress components ("inelastic-zone" schemes) at a set of stress grid points taken to lie on the fault plane. With this type of scheme, the fault surface is indistinguishable from an inelastic zone with a thickness given by a spatial step dx for the SG, and 2dx for the TF model. The SGSN method uses the traction-at-split-node (TSN) approach adapted to the VSSG FD. This method represents the fault discontinuity by explicitly incorporating discontinuity terms at velocity nodes in the grid, with interactions between the "split nodes" occurring exclusively through the tractions (frictional resistance) acting between them. 
These tractions in turn are controlled by the jump conditions and a friction law. Our solutions to the 3D test problems show that the inelastic-zone TF and SG methods perform much more poorly than the SGSN formulation. The SG inelastic-zone method achieved solutions that are qualitatively meaningful and quantitatively reliable to within a few percent. The TF inelastic-zone method did not achieve qualitative agreement with the reference solutions to the 3D test problem, and proved to be sufficiently computationally inefficient that it was not feasible to explore convergence quantitatively. The SGSN method gives very accurate solutions, and is also very efficient. Reliable solution of the rupture time is reached with a median resolution of the cohesive zone of only ~2 grid points, and efficiency is competitive with the Boundary Integral (BI) method. The results presented here demonstrate that appropriate fault representation in a numerical scheme is crucial to reduce uncertainties in numerical simulations of earthquake source dynamics and ground motion, and therefore important to improving our understanding of earthquake physics in general.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19960020436','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19960020436"><span>Domain Decomposition Algorithms for First-Order System Least Squares Methods</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Pavarino, Luca F.</p> <p>1996-01-01</p> <p>Least squares methods based on first-order systems have recently been proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. 
In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016ammi.conf..507L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016ammi.conf..507L"><span>Information Interaction Study for DER and DMS Interoperability</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Haitao; Lu, Yiming; Lv, Guangxian; Liu, Peng; Chen, Yu; Zhang, Xinhui</p> <p></p> <p>The Common Information Model (CIM) is an abstract data model that can be used to represent the major objects in Distribution Management System (DMS) applications. Because the Common Information Model (CIM) does not model Distributed Energy Resources (DERs), it cannot meet the requirements of DER operation and management for advanced Distribution Management System (DMS) applications. Modeling of DERs was studied from a system point of view, and the article initially proposes an extended CIM information model. 
By analyzing the basic structure of the message interaction between DMS and DER, a bidirectional message mapping method based on data exchange is proposed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27352066','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27352066"><span>Dimeric Matrine-Type Alkaloids from the Roots of Sophora flavescens and Their Anti-Hepatitis B Virus Activities.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Yu-Bo; Zhan, Li-Qin; Li, Guo-Qiang; Wang, Feng; Wang, Ying; Li, Yao-Lan; Ye, Wen-Cai; Wang, Guo-Cai</p> <p>2016-08-05</p> <p>Six unusual matrine-type alkaloid dimers, flavesines A-F (1-6, respectively), together with three proposed biosynthetic intermediates (7-9) were isolated from the roots of Sophora flavescens. Compounds 1-5 were the first natural matrine-type alkaloid dimers, and compound 6 represented an unprecedented dimerization pattern constructed by matrine and (-)-cytisine. Their structures were elucidated by NMR, MS, single-crystal X-ray diffraction, and a chemical method. The hypothetical biogenetic pathways of 1-6 were also proposed. Compounds 1-9 exhibited inhibitory activities against hepatitis B virus.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29576690','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29576690"><span>Deep imitation learning for 3D navigation tasks.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hussein, Ahmed; Elyan, Eyad; Gaber, Mohamed Medhat; Jayne, Chrisina</p> <p>2018-01-01</p> <p>Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. 
While deep reinforcement learning has recently been gaining popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep Q-networks (DQN) and asynchronous advantage actor-critic (A3C). The proposed method as well as the reinforcement learning methods employ deep convolutional neural networks and learn directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input while the learning-from-experience methods fail to learn an effective policy. 
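The supervised stage described above is, at its core, behavioral cloning: fit a policy to demonstrated state-action pairs. A minimal sketch follows, with a hypothetical 2-D toy task and a softmax-regression policy standing in for the paper's convolutional network (all data and parameters here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstrations: 2-D states labelled with one of two discrete
# actions by a scripted "expert" (move toward whichever coordinate is larger).
states = rng.normal(size=(500, 2))
actions = np.where(states[:, 0] > states[:, 1], 0, 1)
onehot = np.eye(2)[actions]

# Behavioral cloning = plain supervised learning of a policy pi(a | s);
# here a softmax regression trained by gradient descent.
W = np.zeros((2, 2))
b = np.zeros(2)
for _ in range(300):
    logits = states @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(states)     # softmax cross-entropy gradient
    W -= 0.5 * states.T @ grad
    b -= 0.5 * grad.sum(axis=0)

accuracy = (np.argmax(states @ W + b, axis=1) == actions).mean()
print(accuracy)  # fraction of expert actions reproduced by the cloned policy
```

Active learning then queries the expert only on states where the cloned policy is uncertain, which is where the refinement step in the paper gains its generalization.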
Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JCoPh.346...49S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JCoPh.346...49S"><span>A transformed path integral approach for solution of the Fokker-Planck equation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Subramaniam, Gnana M.; Vedula, Prakash</p> <p>2017-10-01</p> <p>A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. 
The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFM.H23G1360T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFM.H23G1360T"><span>Quantification of frequency-components contributions to the discharge of a karst spring</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Taver, V.; Johannet, A.; Vinches, M.; Borrell, V.; Pistre, S.; Bertin, D.</p> <p>2013-12-01</p> <p>Karst aquifers represent important underground resources for water supplies, providing water to 25% of the population. Nevertheless, such systems are currently underexploited because of their heterogeneity and complexity, which make field work and physical measurements expensive, and frequently not representative of the whole aquifer. The systemic paradigm thus appears as a complementary approach to study and model karst aquifers in the framework of non-linear system analysis. Its input and output signals, namely rainfall and discharge, contain information about the function performed by the physical process. 
Therefore, improvement of knowledge about the karst system can be provided using time series analysis, for example Fourier analysis or orthogonal decomposition [1]. Another level of analysis consists in building non-linear models to identify the rainfall/discharge relation, component by component [2]. In this context, this communication proposes to use neural networks first to model the rainfall-runoff relation using frequency components, and second to analyze the models, using the KnoX method [3], in order to quantify the importance of each component. Two different neural models were designed: (i) the recurrent model, which implements a non-linear recurrent model fed by rainfall, ETP and previous estimated discharge; (ii) the feed-forward model, which implements a non-linear static model fed by rainfall, ETP and previous observed discharges. The first model is known to better represent the rainfall-runoff relation; the second, to better predict the discharge based on previous discharge observations. The KnoX method is based on a variable selection method that simply considers the values of the parameters after training, without taking into account the non-linear behavior of the model in operation. An improvement of the KnoX method is thus proposed to overcome this inadequacy. The proposed method thus leads to both a ranking and a quantification of the influence of the input variables, here the frequency components, on the output signal. Applied to the Lez karst aquifer, the combination of frequency decomposition and knowledge extraction improves knowledge of the hydrological behavior. Both models and both extraction methods were applied and assessed using a fictitious reference model. A discussion analyzes the efficiency of the methods in comparison with in situ measurements and tracing. [1] D. Labat et al. 'Rainfall-runoff relations for karst springs. Part II: continuous wavelet and discrete orthogonal multiresolution'. Journal of Hydrology, Vol. 238, 2000, pp. 
149-178. [2] A. Johannet et al. 'Prediction of Lez Spring Discharge (Southern France) by Neural Networks using Orthogonal Wavelet Decomposition'. IJCNN Proceedings, Brisbane, 2012. [3] L. Kong A Siou et al. 'Modélisation hydrodynamique des karsts par réseaux de neurones : Comment dépasser la boîte noire. (Karst hydrodynamic modelling using artificial neural networks: how to surpass the black box?)'. Proceedings of the 9th conference on limestone hydrogeology, 2011, Besançon, France.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003EJASP2003...26P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003EJASP2003...26P"><span>Sinusoidal Analysis-Synthesis of Audio Using Perceptual Criteria</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Painter, Ted; Spanias, Andreas</p> <p>2003-12-01</p> <p>This paper presents a new method for the selection of sinusoidal components for use in compact representations of narrowband audio. The method consists of ranking and selecting the most perceptually relevant sinusoids. The idea behind the method is to maximize the matching between the auditory excitation pattern associated with the original signal and the corresponding auditory excitation pattern associated with the modeled signal that is being represented by a small set of sinusoidal parameters. 
The proposed component-selection methodology is shown to outperform the maximum signal-to-mask ratio selection strategy in terms of subjective quality.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1981ElL....17..397S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1981ElL....17..397S"><span>Recognition of isotropic plane target from RCS diagram</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Saillard, J.; Chassay, G.</p> <p>1981-06-01</p> <p>The use of electromagnetic waves for the recognition of a structure represented by point scatterers is seen as posing a fundamental problem. It is noted that much research has been done on this subject and that the study of aircraft observed in the yaw plane gives interesting results. To apply these methods, however, it is necessary to use many sophisticated acquisition systems. A method is proposed which can be applied to plane structures composed of isotropic scatterers. 
The method is considered to be of interest because it uses only power measurements and requires only a classical tracking radar.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10615E..34Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10615E..34Z"><span>Image fusion based on Bandelet and sparse representation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi</p> <p>2018-04-01</p> <p>The Bandelet transform can capture geometrically regular directions and geometric flow, and sparse representation can represent signals with as few atoms as possible over an over-complete dictionary; both can be used for image fusion. Therefore, a new fusion method based on the Bandelet transform and sparse representation is proposed to fuse the Bandelet coefficients of multi-source images and obtain high-quality fusion results. 
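Sparse representation over an over-complete dictionary is typically computed with a greedy pursuit. The sketch below uses orthogonal matching pursuit (one common solver; the abstract does not specify which is used) on a hypothetical random dictionary, purely to illustrate coding a signal with few atoms:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical over-complete dictionary: 16-dimensional signals, 48 atoms.
D = rng.normal(size=(16, 48))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms, re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Code a signal that is exactly 3-sparse in D.
true = np.zeros(48)
true[[3, 17, 40]] = [1.5, -2.0, 0.8]
y = D @ true
x = omp(D, y, k=3)
print(np.linalg.norm(D @ x - y))  # small when the greedy pass recovers the true support
```

In fusion schemes of this family, each source image patch is coded this way and the fused patch typically keeps the coefficients with the larger activity; the Bandelet step supplies the geometric transform domain in which the coding is done.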
The tests are performed on remote sensing images and simulated multi-focus images; the experimental results show that the performance of the new method is better than that of the compared methods according to objective evaluation indexes and subjective visual effects.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013SPIE.8658E..0BL','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013SPIE.8658E..0BL"><span>Comic image understanding based on polygon detection</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Luyuan; Wang, Yongtao; Tang, Zhi; Liu, Dong</p> <p>2013-01-01</p> <p>Comic image understanding aims to automatically decompose scanned comic page images into storyboards and then identify their reading order, which is the key technique to produce digital comic documents that are suitable for reading on mobile devices. In this paper, we propose a novel comic image understanding method based on polygon detection. First, we segment a comic page image into storyboards by finding the polygonal enclosing box of each storyboard. Then, each storyboard can be represented by a polygon, and their reading order is determined by analyzing the relative geometric relationship between each pair of polygons. 
The proposed method is tested on 2000 comic images from ten printed comic series, and the experimental results demonstrate that it works well on different types of comic images.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2001SPIE.4555...71Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2001SPIE.4555...71Y"><span>Online graphic symbol recognition using neural network and ARG matching</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Bing; Li, Changhua; Xie, Weixing</p> <p>2001-09-01</p> <p>This paper proposes a novel method for on-line recognition of line-based graphic symbols. The input strokes are usually warped into a cursive form due to varied drawing styles, and classifying them is very difficult. To deal with this, an ART-2 neural network is used to classify the input strokes. It has the advantages of a high recognition rate, short recognition time, and forming classes in a self-organized manner. The symbol recognition is achieved by an Attributed Relational Graph (ARG) matching algorithm. The ARG is very efficient for representing complex objects, but its computation cost is very high. To overcome this, we suggest a fast graph matching algorithm using symbol structure information. 
The experimental results show that the proposed method is effective for recognition of symbols with hierarchical structure.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26832518','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26832518"><span>Adiabatically tapered microstructured mode converter for selective excitation of the fundamental mode in a few mode fiber.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Taher, Aymen Belhadj; Di Bin, Philippe; Bahloul, Faouzi; Tartaret-Josnière, Etienne; Jossent, Mathieu; Février, Sébastien; Attia, Rabah</p> <p>2016-01-25</p> <p>We propose a new technique to selectively excite the fundamental mode in a few mode fiber (FMF). This excitation method is made from a single mode fiber (SMF) which is inserted, facing the FMF, into an air-silica microstructured cane before the assembly is adiabatically tapered. We study this method theoretically and numerically by calculating the effective indices of the propagated modes, their amplitudes along the taper and the adiabaticity criteria, showing the ability to achieve an excellent selective excitation of the fundamental mode in the FMF with negligible loss. 
We experimentally demonstrate that the proposed solution provides successful mode conversion and nearly complete fundamental mode excitation in the FMF (representing 99.8% of the total power).</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018FrME...13..167H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018FrME...13..167H"><span>Evaluation of the power consumption of a high-speed parallel robot</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Han, Gang; Xie, Fugui; Liu, Xin-Jun</p> <p>2018-06-01</p> <p>An inverse dynamic model of a high-speed 
parallel robot is established based on the virtual work principle. With this dynamic model, a new evaluation method is proposed to measure the power consumption of the robot during pick-and-place tasks. The power vector is extended in this method and used to represent the collinear velocity and acceleration of the moving platform. Afterward, several dynamic performance indices, which are homogeneous and possess obvious physical meanings, are proposed. These indices can evaluate the power input and output transmissibility of the robot in a workspace. The distributions of the power input and output transmissibility of the high-speed parallel robot are derived with these indices and clearly illustrated in atlases. Furthermore, a low-power-consumption workspace is selected for the robot.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22572304-biorthogonal-decomposition-identification-simulation-non-stationary-non-gaussian-random-fields','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22572304-biorthogonal-decomposition-identification-simulation-non-stationary-non-gaussian-random-fields"><span>A biorthogonal decomposition for the identification and simulation of non-stationary and non-Gaussian random fields</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Zentner, I.; Ferré, G., E-mail: gregoire.ferre@ponts.org; Poirion, F.</p> <p>2016-06-01</p> <p>In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields given a database is proposed. It is based on two successive biorthogonal decompositions aiming at representing spatio-temporal stochastic fields. The proposed double expansion allows the model to be built even in the case of large-size problems by separating the time, space and random parts of the field. 
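The separation of deterministic space-time structure from the random part can be illustrated with a single Karhunen-Loeve style expansion, a simplified stand-in for the paper's double biorthogonal decomposition. The synthetic ensemble below is entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ensemble: 20 realizations of a field on 50 spatial points
# x 40 time steps, each an exact combination of two deterministic modes
# with random amplitudes.
x = np.linspace(0, 1, 50)[:, None]
t = np.linspace(0, 1, 40)[None, :]
fields = np.stack([
    rng.normal() * np.sin(np.pi * x) @ np.cos(2 * np.pi * t)
    + rng.normal() * (x ** 2) @ np.sin(2 * np.pi * t)
    for _ in range(20)
])

data = fields.reshape(20, -1)            # (realizations, space*time)
data -= data.mean(axis=0)                # centre the ensemble

# One biorthogonal expansion via SVD: U holds the random weights (one row
# per realization), the rows of Vt are deterministic space-time modes.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
energy = s**2 / (s**2).sum()
print(energy[:3])  # two modes should carry essentially all of the variance
```

With the random and deterministic parts separated this way, new realizations can be simulated by drawing new weights (the paper does this with a Gaussian kernel estimator over the identified weights).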
A Gaussian kernel estimator is used to simulate the high-dimensional set of random variables appearing in the decomposition. The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMEP53B1744W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMEP53B1744W"><span>High-efficient Extraction of Drainage Networks from Digital Elevation Model Data Constrained by Enhanced Flow Enforcement from Known River Map</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wu, T.; Li, T.; Li, J.; Wang, G.</p> <p>2017-12-01</p> <p>Improved drainage network extraction can be achieved by flow enforcement whereby information from known river maps is imposed on the flow-path modeling process. However, the common elevation-based stream burning method can sometimes cause unintended topological errors and misinterpret the overall drainage pattern. We present an enhanced flow enforcement method to facilitate an accurate and efficient drainage network extraction process. Both the topology of the mapped hydrography and the initial landscape of the DEM are well preserved and fully utilized in the proposed method. An improved stream rasterization is achieved here, yielding continuous, unambiguous and stream-collision-free raster equivalents of stream vectors for flow enforcement. By imposing priority-based enforcement with a complementary flow direction enhancement procedure, the drainage patterns of the mapped hydrography are fully represented in the derived results. The proposed method was tested over the Rogue River Basin, using DEMs with various resolutions. 
As indicated by the visual and statistical analyses, the proposed method has three major advantages: (1) it significantly reduces the occurrences of topological errors, yielding very accurate watershed partition and channel delineation, (2) it ensures scale-consistent performance at DEMs of various resolutions, and (3) the entire extraction process is well-designed to achieve great computational efficiency.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4696924','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4696924"><span>Multiscale Reconstruction for Magnetic Resonance Fingerprinting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Pierre, Eric Y.; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A.</p> <p>2015-01-01</p> <p>Purpose To reduce acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. Methods An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with desirable spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in-vivo data using the highly-undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. 
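The template-matching step at the heart of MRF reconstruction selects, for each pixel, the dictionary entry with the highest normalized inner product against the measured signal evolution. A toy sketch follows, with a hypothetical dictionary of simple exponential decays standing in for Bloch-simulated templates (the real MRF dictionary is simulated over T1, T2 and other parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical MRF-style dictionary: one signal evolution per T2 value on a
# grid; simple exponential decays stand in for Bloch-simulated templates.
t = np.linspace(0, 0.3, 300)                    # 300 time points, seconds
t2_grid = np.linspace(0.02, 0.2, 200)
dictionary = np.exp(-t[None, :] / t2_grid[:, None])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def match(signal, dictionary, t2_grid):
    """Template matching: argmax of the normalized inner product."""
    scores = dictionary @ (signal / np.linalg.norm(signal))
    return t2_grid[int(np.argmax(scores))]

# A noisy measurement with true T2 = 0.08 s maps back to roughly 0.08 s.
signal = np.exp(-t / 0.08) + 0.01 * rng.normal(size=t.size)
print(match(signal, dictionary, t2_grid))
```

Because matching is robust to heavy undersampling noise in the time series, the iterative scheme in the paper can alternate between this step and data fidelity at progressively finer spatial resolution.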
Results The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD) and B0 field variations in the brain was achieved in vivo for a 256×256 matrix for a total acquisition time of 10.2s, representing a 3-fold reduction in acquisition time. Conclusions The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. PMID:26132462</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/529558-applying-simulation-model-uniform-field-space-charge-distribution-measurements-pea-method','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/529558-applying-simulation-model-uniform-field-space-charge-distribution-measurements-pea-method"><span>Applying simulation model to uniform field space charge distribution measurements by the PEA method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Liu, Y.; Salama, M.M.A.</p> <p>1996-12-31</p> <p>Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by the deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has been proposed recently in which the deconvolution is eliminated. However, the surface charge cannot be represented well by the method because the surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. 
When surface charges cannot be distinguished from space charge distributions, the accuracy and the resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is therefore proposed. This paper presents the authors' attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation for the paper, the charge distribution generated by the simulation model is compared to that obtained by the direct method with a set of simulated signals.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/5282676-improved-methods-distribution-loss-evaluation-volume-analytic-evaluative-techniques-final-report','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/5282676-improved-methods-distribution-loss-evaluation-volume-analytic-evaluative-techniques-final-report"><span>Improved methods for distribution loss evaluation. Volume 1: analytic and evaluative techniques. Final report</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Flinn, D.G.; Hall, S.; Morris, J.</p> <p></p> <p>This volume describes the background research, the application of the proposed loss evaluation techniques, and the results. The research identified present loss calculation methods as appropriate, provided care was taken to represent the various system elements in sufficient detail. The literature search of past methods and typical data revealed that extreme caution in using typical values (load factor, etc.) should be taken to ensure that all factors were referred to the same time base (daily, weekly, etc.). 
The performance of the method (and computer program) proposed in this project was determined by comparison of results with a rigorous evaluation of losses on the Salt River Project system. This rigorous evaluation used statistical modeling of the entire system as well as explicit enumeration of all substation and distribution transformers. Further tests were conducted at Public Service Electric and Gas of New Jersey to check the appropriateness of the methods in a northern environment. Finally, sensitivity tests identified the data elements whose inaccuracy would most affect the determination of losses using the method developed in this project.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4957830','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4957830"><span>Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ran, Bin; Song, Li; Cheng, Yang; Tan, Huachun</p> <p>2016-01-01</p> <p>Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution mean that the available floating car samples usually cover only part of the space and time points of the road network. To obtain a wide range of traffic states from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, the traffic state estimation task is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. 
A tensor is constructed to model the traffic state, in which observed entries are directly derived from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well-calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%. PMID:27448326</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27448326','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27448326"><span>Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ran, Bin; Song, Li; Zhang, Jian; Cheng, Yang; Tan, Huachun</p> <p>2016-01-01</p> <p>Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution mean that the available floating car samples usually cover only part of the space and time points of the road network. To obtain a wide range of traffic states from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. 
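The tensor-completion imputation described in the abstract above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: a (link × day × time-slot) traffic tensor is completed by alternating truncated-SVD projections of its mode unfoldings while keeping the observed entries fixed.

```python
import numpy as np

def unfold(T, mode):
    # Matricize tensor T along the given mode.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    # Inverse of unfold: rebuild the tensor from its mode unfolding.
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def low_rank(M, r):
    # Project a matrix onto its best rank-r approximation via truncated SVD.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def complete(T_obs, mask, rank=2, iters=100):
    # Hard-impute style completion: initialize missing entries with the
    # observed mean, then alternate low-rank projection per mode with a
    # data-consistency reset of the observed entries.
    T = np.where(mask, T_obs, np.mean(T_obs[mask]))
    for _ in range(iters):
        for mode in range(T.ndim):
            T = fold(low_rank(unfold(T, mode), rank), mode, T.shape)
        T[mask] = T_obs[mask]
    return T
```

With a tensor that is genuinely low-rank and a moderate fraction of observed entries, the missing values are recovered to high accuracy, mirroring the sparse-coverage setting of the abstract.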
In this paper, the traffic state estimation task is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model the traffic state, in which observed entries are directly derived from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well-calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5981442','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5981442"><span>Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lee, Jaebeom; Lee, Young-Joo</p> <p>2018-01-01</p> <p>Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various factors influencing vertical deflection, such as concrete creep and shrinkage. 
However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurements and temperature data. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean for the vertical deflection of the bridge. The proposed method is applied to an arch bridge in operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance. PMID:29747421</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29747421','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29747421"><span>Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Jaebeom; Lee, Kyoung-Chan; Lee, Young-Joo</p> <p>2018-05-09</p> <p>Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. 
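The Gaussian-process prediction interval described in the deflection abstracts above can be sketched with a single RBF kernel on synthetic data (the papers use multiple kernels and real bridge measurements; this simplified example is not the authors' model):

```python
import numpy as np

def rbf(X1, X2, length=1.0, var=1.0):
    # Squared-exponential (RBF) kernel between two 1-D input vectors.
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, Xs, noise=1e-2):
    # Standard GP regression: predictive mean and a 95% interval at Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)
    std = np.sqrt(np.maximum(var, 0.0))
    # 95% interval = mean +/- 1.96 standard deviations.
    return mean, mean - 1.96 * std, mean + 1.96 * std
```

In the bridge setting, `X` would be time/temperature covariates and `y` the vision-measured deflection; the returned interval is the quantity used for maintenance decision-making.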
Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various factors influencing vertical deflection, such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurements and temperature data. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean for the vertical deflection of the bridge. The proposed method is applied to an arch bridge in operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24320523','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24320523"><span>Estimating nonrigid motion from inconsistent intensity with robust shape features.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Wenyang; Ruan, Dan</p> <p>2013-12-01</p> <p>To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. 
Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. 
The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation are being performed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MSSP..105..169L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MSSP..105..169L"><span>Chatter detection in milling process based on VMD and energy entropy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Changfu; Zhu, Lida; Ni, Chenbing</p> <p>2018-05-01</p> <p>This paper presents a novel approach to detect milling chatter based on Variational Mode Decomposition (VMD) and energy entropy. VMD has already been employed in feature extraction from non-stationary signals. Parameters such as the number of modes (K) and the quadratic penalty (α) need to be selected empirically when a raw signal is decomposed by VMD. To address the problem of how to select K and α, an automatic, kurtosis-based selection method for the VMD parameters is proposed in this paper. When chatter occurs in the milling process, energy is absorbed into the chatter frequency bands. To detect the chatter frequency bands automatically, a chatter detection method based on energy entropy is presented. A vibration signal containing chatter frequencies is simulated, and three groups of experiments representing three cutting conditions are conducted. To verify the effectiveness of the method presented in this paper, chatter feature extraction has been successfully employed on simulation signals and experimental signals. 
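The energy-entropy indicator used for chatter detection above can be sketched as follows; `modes` stands for the components returned by a decomposition such as VMD (the shape is an assumption for illustration, not the paper's code). Energy concentrated in a few chatter bands yields low entropy, while evenly spread energy yields high entropy:

```python
import numpy as np

def energy_entropy(modes):
    # modes: array of shape (K, N), one decomposed component per row.
    E = np.sum(modes**2, axis=1)          # energy of each mode
    p = E / np.sum(E)                     # energy distribution across modes
    return -np.sum(p * np.log(p + 1e-12)) # Shannon entropy of the distribution
```

A drop of the entropy below a calibrated threshold would then flag the onset of chatter.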
The simulation and experimental results show that the proposed method can effectively detect chatter.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015ISPAr.XL4...67L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015ISPAr.XL4...67L"><span>Research on Extension of Sparql Ontology Query Language Considering the Computation of Indoor Spatial Relations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, C.; Zhu, X.; Guo, W.; Liu, Y.; Huang, H.</p> <p>2015-05-01</p> <p>According to the characteristics of indoor space, a method suitable for complex indoor semantic queries that considers the computation of indoor spatial relations is provided. This paper designs an ontology model describing the space-related information of humans, events and indoor space objects (e.g. Storey and Room) as well as their relations to meet the needs of indoor semantic queries. The ontology concepts are used in the IndoorSPARQL query language, which extends the SPARQL syntax for representing and querying indoor space. Four types of specific primitives for indoor queries, "Adjacent", "Opposite", "Vertical" and "Contain", are defined as query functions in IndoorSPARQL to support quantitative spatial computations. A method is also proposed to analyze the query language. Finally, this paper adopts this method to realize indoor semantic queries on the study area by constructing the ontology model for the study building. 
The experimental results show that the method proposed in this paper can effectively support complex indoor semantic queries.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5621028','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5621028"><span>A Robust Inner and Outer Loop Control Method for Trajectory Tracking of a Quadrotor</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Xia, Dunzhu; Cheng, Limei; Yao, Yanhong</p> <p>2017-01-01</p> <p>In order to achieve complicated trajectory tracking of a quadrotor, a geometric inner and outer loop control scheme is presented. The outer loop generates the desired rotation matrix for the inner loop. To improve the response speed and robustness, a geometric SMC controller is designed for the inner loop. The outer loop is also designed via sliding mode control (SMC). By Lyapunov theory and cascade theory, closed-loop system stability is guaranteed. Next, the tracking performance is validated by tracking three representative trajectories. Then, the robustness of the proposed control method is illustrated by trajectory tracking in the presence of model uncertainty and disturbances. Subsequently, experiments are carried out to verify the method. In the experiment, ultra wideband (UWB) is used for indoor positioning. An Extended Kalman Filter (EKF) is used for fusing inertial measurement unit (IMU) and UWB measurements. The experimental results show the feasibility of the designed controller in practice. The comparative experiments with PD and PD loop demonstrate the robustness of the proposed control method. 
PMID:28925984</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23999189','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23999189"><span>iMODFIT: efficient and robust flexible fitting based on vibrational analysis in internal coordinates.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lopéz-Blanco, José Ramón; Chacón, Pablo</p> <p>2013-11-01</p> <p>Here, we employed the collective motions extracted from Normal Mode Analysis (NMA) in internal coordinates (torsional space) for the flexible fitting of atomic-resolution structures into electron microscopy (EM) density maps. The proposed methodology was validated using a benchmark of simulated cases, highlighting its robustness over the full range of EM resolutions and even over coarse-grained representations. A systematic comparison with other methods further showcased the advantages of this proposed methodology, especially at medium to lower resolutions. Using this method, computational costs and potential overfitting problems are naturally reduced by constraining the search in low-frequency NMA space, where covalent geometry is implicitly maintained. This method also effectively captures the macromolecular changes of a representative set of experimental test cases. We believe that this novel approach will extend the currently available EM hybrid methods to the atomic-level interpretation of large conformational changes and their functional implications. Copyright © 2013 Elsevier Inc. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26737721','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26737721"><span>Hot-spot selection and evaluation methods for whole slice images of meningiomas and oligodendrogliomas.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Swiderska, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Slodkowska, Janina</p> <p>2015-01-01</p> <p>The paper presents a combined method for automatic hot-spot area selection, based on a penalty factor, in whole slide images to support the pathomorphological diagnostic procedure. The studied slides represent meningioma and oligodendroglioma tumors on the basis of the Ki-67/MIB-1 immunohistochemical reaction, which allows the tumor proliferation index to be determined and gives an indication for medical treatment and prognosis. A combined method based on mathematical morphology, thresholding, texture analysis and classification is proposed and verified. The presented algorithm includes building a specimen map, eliminating hemorrhages from it, and two methods for detecting hot-spot fields with respect to an introduced penalty factor. Furthermore, we propose a localization concordance measure to evaluate the hot-spot localization produced by the algorithms with respect to the expert's results. The influence of the penalty factor is presented and discussed; the best results were obtained for a value of 0.2. 
These results confirm the effectiveness of the applied approach.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS.960a2022F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS.960a2022F"><span>An Accurate Framework for Arbitrary View Pedestrian Detection in Images</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fan, Y.; Wen, G.; Qiu, S.</p> <p>2018-01-01</p> <p>We consider the problem of detecting pedestrians in images collected from various viewpoints. This paper utilizes a novel framework called locality-constrained affine subspace coding (LASC). First, the positive training samples are clustered into entities that represent similar viewpoints. Then Principal Component Analysis (PCA) is used to obtain the shared features of each viewpoint. Finally, the samples that can be reconstructed by linear approximation using their top-k nearest shared features with a small error are regarded as correct detections. No negative samples are required for our method. Histograms of oriented gradients (HOG) features are used as the feature descriptors, and the sliding window scheme is adopted to detect humans in images. The proposed method exploits the sparse property of intrinsic information and the correlations among the multi-view samples. 
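The per-viewpoint PCA reconstruction test described above can be sketched as follows; this is an illustrative subspace-reconstruction classifier on synthetic vectors (the cluster assignment, HOG extraction, and thresholds of the actual framework are omitted, and all names here are hypothetical):

```python
import numpy as np

def pca_basis(X, k):
    # X: (n_samples, dim) feature vectors from one viewpoint cluster.
    # Returns the cluster mean and the top-k principal directions.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def recon_error(x, mu, B):
    # Project x onto the affine subspace (mu, B) and measure the residual.
    z = (x - mu) @ B.T
    return np.linalg.norm(x - (mu + z @ B))

# A window is accepted as a detection when min-over-clusters recon_error
# falls below a calibrated threshold.
```

Samples lying in a cluster's subspace reconstruct almost exactly, while off-subspace samples leave a large residual, which is the acceptance criterion the abstract describes.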
Experimental results on the INRIA and SDL human datasets show that the proposed method achieves higher performance than state-of-the-art methods in terms of effectiveness and efficiency.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5870571','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5870571"><span>deBGR: an efficient and near-exact representation of the weighted de Bruijn graph</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Pandey, Prashant; Bender, Michael A.; Johnson, Rob; Patro, Rob</p> <p>2017-01-01</p> <p>Abstract Motivation: Almost all de novo short-read genome and transcriptome assemblers start by building a representation of the de Bruijn Graph of the reads they are given as input. Even when other approaches are used for subsequent assembly (e.g. when one is using ‘long read’ technologies like those offered by PacBio or Oxford Nanopore), efficient k-mer processing is still crucial for accurate assembly, and state-of-the-art long-read error-correction methods use de Bruijn Graphs. Because of the centrality of de Bruijn Graphs, researchers have proposed numerous methods for representing de Bruijn Graphs compactly. Some of these proposals sacrifice accuracy to save space. Further, none of these methods store abundance information, i.e. the number of times that each k-mer occurs, which is key in transcriptome assemblers. Results: We present a method for compactly representing the weighted de Bruijn Graph (i.e. with abundance information) with essentially no errors. Our representation yields zero errors while increasing the space requirements by less than 18–28% compared to the approximate de Bruijn graph representation in Squeakr. 
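The "weight" in a weighted de Bruijn graph is simply each k-mer's abundance. A minimal k-mer counting sketch makes that concrete (illustrative only; deBGR's compact, near-exact representation is far more space-efficient than a plain hash map):

```python
from collections import Counter

def kmer_counts(reads, k):
    # Count every length-k substring (k-mer) across a set of reads.
    # The resulting map {k-mer: abundance} is the node-weight information
    # a weighted de Bruijn graph must preserve.
    c = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            c[r[i:i + k]] += 1
    return c
```

For transcriptome assembly, it is exactly these abundances that distinguish isoforms with shared sequence, which is why the abstract stresses storing them without error.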
Our technique is based on a simple invariant that all weighted de Bruijn Graphs must satisfy, and hence is likely to be of general interest and applicable in most weighted de Bruijn Graph-based systems. Availability and implementation: https://github.com/splatlab/debgr. Contact: rob.patro@cs.stonybrook.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881995</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22233563-pseudo-spectral-method-simulation-poro-elastic-seismic-wave-propagation-polar-coordinates-using-domain-decomposition','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22233563-pseudo-spectral-method-simulation-poro-elastic-seismic-wave-propagation-polar-coordinates-using-domain-decomposition"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Sidler, Rolf, E-mail: rsidler@gmail.com; Carcione, José M.; Holliger, Klaus</p> <p></p> <p>We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, with a Runge–Kutta integration scheme for the time evolution. 
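The Chebyshev radial discretization mentioned above rests on a spectral differentiation matrix over Gauss–Lobatto points. A standard construction (Trefethen's well-known formula, shown here as a generic sketch rather than the authors' poro-elastic solver) is:

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix D and Gauss-Lobatto nodes x on [-1, 1].
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))   # enforce exact row sums (negative-sum trick)
    return D, x
```

Applying `D` to grid values of a smooth function gives spectrally accurate derivatives, which is what makes the radial resolution of the concentric-ring mesh so efficient.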
A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MSSP...98..382B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MSSP...98..382B"><span>Obtaining manufactured geometries of deep-drawn components through a model updating procedure using geometric shape parameters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan</p> <p>2018-01-01</p> <p>The vibration response of a component or system can be predicted using the finite element method after ensuring numerical models represent the realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method for deep-drawn components is demonstrated. 
Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. A conventional model updating procedure, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence, a new model updating process is proposed in which geometric shape variables are incorporated by morphing the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize the diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for carrying out analysis, as it represents the manufactured part more accurately. 
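The MAC matrix whose diagonal the GRSM optimization maximizes compares two sets of mode shapes; entry (i, j) is the normalized squared correlation between simulated mode i and measured mode j. A minimal sketch of the standard definition:

```python
import numpy as np

def mac(phi_a, phi_b):
    # phi_a, phi_b: (n_dof, n_modes) mode-shape matrices (e.g. simulated
    # vs. experimentally identified). Returns the (n_modes, n_modes) MAC.
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a**2, axis=0), np.sum(phi_b**2, axis=0))
    return num / den
```

Diagonal values near 1 indicate that corresponding mode shapes match, which is exactly the objective driving the shape-morphing update described above.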
Simulations performed using this updated model, with its accurate geometry, will therefore yield more reliable results.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22409519-poster-thur-eve-application-non-negative-matrix-factorization-technique-sup-dtbz-dynamic-pet-data-early-detection-parkinson-disease','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22409519-poster-thur-eve-application-non-negative-matrix-factorization-technique-sup-dtbz-dynamic-pet-data-early-detection-parkinson-disease"><span>Poster — Thur Eve — 03: Application of the non-negative matrix factorization technique to [{sup 11}C]-DTBZ dynamic PET data for the early detection of Parkinson's 
disease</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Lee, Dong-Chang; Jans, Hans; McEwan, Sandy</p> <p>2014-08-15</p> <p>In this work, a class of non-negative matrix factorization (NMF) techniques known as alternating non-negative least squares, combined with the projected gradient method, is used to analyze twenty-five [{sup 11}C]-DTBZ dynamic PET/CT brain data sets. For each subject, a two-factor model is assumed, and two factors representing the striatum (factor 1) and the non-striatum (factor 2) tissues are extracted using the proposed NMF technique and the commercially available factor analysis software “Pixies”. The extracted factor 1 and 2 curves represent the binding site of the radiotracer and describe the uptake and clearance of the radiotracer by soft tissues in the brain, respectively. The proposed NMF technique uses prior information about the dynamic data to obtain sample time-activity curves representing the striatum and the non-striatum tissues. These curves are then used for “warm” starting the optimization. Factor solutions from the two methods are compared graphically and quantitatively. In healthy subjects, radiotracer uptake by factors 1 and 2 is approximately 35–40% and 60–65%, respectively. The solutions are also used to develop a factor-based metric for the detection of early, untreated Parkinson's disease. The metric stratifies healthy subjects from suspected Parkinson's patients (based on the graphical method). The analysis shows that both techniques produce comparable results with similar computational time. 
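A two-factor NMF of a (voxel × time) activity matrix, as in the abstract above, can be sketched with standard multiplicative updates, used here as a simple substitute for the projected-gradient alternating non-negative least squares the authors employ (synthetic data, not PET measurements):

```python
import numpy as np

def nmf(V, r, iters=500, eps=1e-9, seed=0):
    # Factor a non-negative matrix V (n x m) as W (n x r) @ H (r x m)
    # using Lee-Seung multiplicative updates, which preserve non-negativity.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

In the PET setting, the rows of `H` would play the role of the two factor time-activity curves (striatum and non-striatum), and "warm" starting corresponds to replacing the random initialization with prior curves.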
The “semi-automatic” approach used by the NMF technique allows clinicians to manually set a starting condition for “warm” starting the optimization, in order to facilitate control and efficient interaction with the data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23766936','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23766936"><span>Automated classification of immunostaining patterns in breast tissue from the human protein atlas.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Swamidoss, Issac Niwas; Kårsnäs, Andreas; Uhlmann, Virginie; Ponnusamy, Palanisamy; Kampf, Caroline; Simonsson, Martin; Wählby, Carolina; Strand, Robin</p> <p>2013-01-01</p> <p>The Human Protein Atlas (HPA) is an effort to map the location of all human proteins (http://www.proteinatlas.org/). It contains a large number of histological images of sections from human tissue. Tissue microarrays (TMAs) are imaged by a slide scanning microscope, and each image represents a thin slice of a tissue core with a dark brown antibody-specific stain and a blue counterstain. When generating antibodies for protein profiling of the human proteome, an important step in quality control is to compare staining patterns of different antibodies directed towards the same protein. This comparison is an ultimate control that the antibody recognizes the correct protein. In this paper, we propose and evaluate different approaches for classifying sub-cellular antibody staining patterns in breast tissue samples. The proposed methods include the computation of various features including gray level co-occurrence matrix (GLCM) features, complex wavelet co-occurrence matrix (CWCM) features, and weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM)-inspired features.
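Of the texture features mentioned in this abstract, the gray level co-occurrence matrix is the simplest to sketch. The following is a minimal illustrative numpy version (not the paper's implementation); the offset, level count, and derived features are assumptions.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix for one pixel offset (dx, dy).

    Counts how often gray level i occurs at offset (dx, dy) from gray
    level j; texture features such as contrast and energy are then
    derived from the normalized matrix."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    p = m / m.sum()  # normalize to a joint probability table
    contrast = sum(p[i, j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = (p ** 2).sum()
    return p, contrast, energy

img = np.random.default_rng(0).integers(0, 8, size=(32, 32))
p, contrast, energy = glcm(img)
```
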
The extracted features are fed into two different multivariate classifiers (a support vector machine (SVM) and a linear discriminant analysis (LDA) classifier). Before extracting features, we use color deconvolution to separate different tissue components, such as the brown-stained positive regions and the blue cellular regions, in the immuno-stained TMA images of breast tissue. We present classification results based on combinations of feature measurements. The proposed complex wavelet features and the WND-CHARM features have accuracy similar to that of a human expert. Both human experts and the proposed automated methods have difficulties discriminating between nuclear and cytoplasmic staining patterns. This is to a large extent due to mixed staining of nucleus and cytoplasm. Methods for quantification of staining patterns in histopathology have many applications, ranging from antibody quality control to tumor grading.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28701967','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28701967"><span>A Novel Integrating Virtual Reality Approach for the Assessment of the Attachment Behavioral System.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chicchi Giglioli, Irene Alice; Pravettoni, Gabriella; Sutil Martín, Dolores Lucia; Parra, Elena; Raya, Mariano A</p> <p>2017-01-01</p> <p>Virtual reality (VR) technology represents a novel and powerful tool for behavioral research in psychological assessment. VR provides simulated experiences able to create the sensation of undergoing real situations. Users become active participants in the virtual environment, seeing, hearing, feeling, and acting as if they were in the real world.
Currently, most psychological VR applications concern the treatment of various mental disorders rather than assessment, which is still mainly based on paper-and-pencil tests. Observing behaviors is costly and labor-intensive, and it is hard to create social situations in laboratory settings, even though the observation of actual behaviors could be particularly informative. In this framework, stressful social experiences can activate various attachment behaviors toward a significant person who can help control and soothe them, promoting the individual's well-being. Social support seeking, physical proximity, and positive and negative behaviors represent the main attachment behaviors that people can carry out during experiences of distress. We proposed VR as a novel integrating approach to measure real attachment behaviors. The first VR studies of the attachment behavioral system showed the potential of this approach. To improve the assessment during the VR experience, we proposed virtual stealth assessment (VSA) as a new method. VSA could represent a valid and novel technique to measure various psychological attributes in real time during the virtual experience.
The possible use of this method in psychology could be to generate a more complete, exhaustive, and accurate individual psychological evaluation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5713053','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5713053"><span>Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Liu, Aiming; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi</p> <p>2017-01-01</p> <p>Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensionality of the features represents a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high-dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain–computer interface competition data and real-time data acquired in our designed experiments were used to verify the validity of the proposed method.
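The firefly update rule at the heart of this feature-selection approach can be sketched as follows: a toy run on synthetic data, with a simple Fisher-score fitness standing in for the paper's SRDA-based evaluation, and without the learning-automata refinement. All parameters and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only the first 3 of 10 features are informative.
X0 = rng.normal(0, 1, (50, 10))
X1 = rng.normal(0, 1, (50, 10))
X1[:, :3] += 2.0
X, y = np.vstack([X0, X1]), np.array([0] * 50 + [1] * 50)

def fitness(mask):
    """Mean Fisher score of the selected features (0 if none selected)."""
    if not mask.any():
        return 0.0
    a, b = X[y == 0][:, mask], X[y == 1][:, mask]
    num = (a.mean(0) - b.mean(0)) ** 2
    den = a.var(0) + b.var(0) + 1e-12
    return float((num / den).mean())

def firefly_select(n_fireflies=12, iters=40, beta0=1.0, gamma=1.0, alpha=0.2):
    """Standard firefly moves (attraction toward brighter fireflies plus a
    random walk) on continuous positions; position > 0.5 means 'selected'."""
    pos = rng.random((n_fireflies, X.shape[1]))
    for _ in range(iters):
        bright = np.array([fitness(p > 0.5) for p in pos])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if bright[j] > bright[i]:
                    r2 = ((pos[i] - pos[j]) ** 2).sum()
                    pos[i] += beta0 * np.exp(-gamma * r2) * (pos[j] - pos[i])
                    pos[i] += alpha * (rng.random(X.shape[1]) - 0.5)
        pos = np.clip(pos, 0, 1)
    best = pos[np.argmax([fitness(p > 0.5) for p in pos])]
    return best > 0.5

mask = firefly_select()
```

In the paper, learning automata would additionally steer the search away from local optima; here the plain FA suffices to show the mechanics.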
The experimental results show that, compared with genetic and adaptive-weight particle swarm optimization algorithms, our proposed method effectively eliminates redundant features and improves the classification accuracy of MI EEG signals. In addition, a real-time brain–computer interface system was implemented to verify that our proposed methods are feasible in practical brain–computer interface systems. PMID:29117100</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29117100','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29117100"><span>Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Aiming; Chen, Kun; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi</p> <p>2017-11-08</p> <p>Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensionality of the features represents a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG.
We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high-dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain-computer interface competition data and real-time data acquired in our designed experiments were used to verify the validity of the proposed method. The experimental results show that, compared with genetic and adaptive-weight particle swarm optimization algorithms, our proposed method effectively eliminates redundant features and improves the classification accuracy of MI EEG signals. In addition, a real-time brain-computer interface system was implemented to verify that our proposed methods are feasible in practical brain-computer interface systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20379077','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20379077"><span>Determination method for nitromethane in workplace air.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Takeuchi, Akito; Nishimura, Yasuki; Kaifuku, Yuichiro; Imanaka, Tsutoshi; Natsumeda, Shuichiro; Ota, Hirokazu; Yamada, Shu; Kurotani, Ichiro; Sumino, Kimiaki; Kanno, Seiichiro</p> <p>2010-01-01</p> <p>The purpose of this research was to develop a determination method for nitromethane (NM) in workplace air for risk assessment. A suitable sampler and appropriate desorption condition were selected by a recovery test in which a spiked sampler was used. The characteristics of the proposed method, such as recovery, detection limit, and reproducibility, and the storage stability of the sample were examined. A sampling tube containing bead-shaped activated carbon was chosen as the sampler.
NM in the sampler was desorbed with acetone and analyzed by a gas chromatograph equipped with a flame ionization detector. The recoveries of NM from the spiked sampler were 81-97% and 80-98% for personal exposure monitoring and working environment measurement, respectively. On the first day of storage in a refrigerator, the recovery from the spiked samplers exceeded 90%; however, it decreased dramatically with increasing storage time. In particular, the decrease was more pronounced for smaller spiked amounts. The overall LOQ was 2 μg/sample. The relative standard deviation, which represents the overall reproducibility, was 1.1-4.0%. The proposed method enables 4-hour personal exposure monitoring of NM at concentrations equaling 0.001-2 times the threshold limit value-time-weighted average (TLV-TWA: 20 ppm) proposed by the American Conference of Governmental Industrial Hygienists, as well as 10-minute working environment measurement at concentrations equaling 0.02-2 times the TLV-TWA. Thus, the proposed method will be useful for estimating worker exposure to NM.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4826600','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4826600"><span>Extraction of process zones and low-dimensional attractive subspaces in stochastic fracture mechanics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kerfriden, P.; Schmidt, K.M.; Rabczuk, T.; Bordas, S.P.A.</p> <p>2013-01-01</p> <p>We propose to identify process zones in heterogeneous materials by tailored statistical tools. The process zone is redefined as the part of the structure where the random process cannot be correctly approximated in a low-dimensional deterministic space.
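The idea of flagging the region where solution samples escape a low-dimensional space can be sketched with a plain SVD of a snapshot matrix. This is a stand-in for the paper's spectral analysis and greedy algorithm; the 1-D geometry, basis size, and threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_samples = 200, 40

# Smooth global modes everywhere...
x = np.linspace(0, 1, n_dof)
coeffs = rng.normal(size=(3, n_samples))
smooth = np.array([np.sin((k + 1) * np.pi * x) for k in range(3)])
U_samples = smooth.T @ coeffs
# ...plus strongly sample-dependent behaviour localised in x in (0.45, 0.55),
# mimicking a process zone.
zone = (x > 0.45) & (x < 0.55)
U_samples[zone, :] += rng.normal(size=(zone.sum(), n_samples))

# Spectral analysis (SVD/POD) of the snapshots gives the low-dim subspace.
U, s, _ = np.linalg.svd(U_samples, full_matrices=False)
basis = U[:, :3]

# Residual after projection, per degree of freedom: the process zone is
# where the snapshots cannot be approximated in the low-dimensional space.
residual = U_samples - basis @ (basis.T @ U_samples)
err_per_dof = np.sqrt((residual ** 2).mean(axis=1))
flagged = err_per_dof > 5 * np.median(err_per_dof)
```
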
Such a low-dimensional space is obtained by a spectral analysis performed on pre-computed solution samples. A greedy algorithm is proposed to identify both the process zone and a low-dimensional representative subspace for the solution in the complementary region. In addition to the novelty of the tools proposed in this paper for the analysis of localised phenomena, we show that the reduced space generated by the method is a valid basis for the construction of a reduced order model. PMID:27069423</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23846514','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23846514"><span>An efficient variable projection formulation for separable nonlinear least squares problems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gan, Min; Li, Han-Xiong</p> <p>2014-05-01</p> <p>We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous ones. The Levenberg-Marquardt algorithm using the finite difference method is then applied to minimize the new criterion.
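The classical variable projection step that this formulation accelerates can be sketched with a minimal two-exponential example: the linear coefficients are solved in closed form inside the residual, so the outer solver sees only the nonlinear parameters. The matrix-decomposition speedup proposed in the paper is not shown; the model and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Model: y(t) = c1*exp(-a1*t) + c2*exp(-a2*t) — linear in c, nonlinear in a.
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 100)
y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t) + 0.01 * rng.normal(size=t.size)

def residual(alpha):
    """Variable projection: eliminate the linear coefficients by a
    least-squares solve, returning a residual in alpha only."""
    Phi = np.column_stack([np.exp(-alpha[0] * t), np.exp(-alpha[1] * t)])
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ c - y

# Levenberg-Marquardt with a finite-difference Jacobian, as in the abstract.
sol = least_squares(residual, x0=[0.5, 4.0], method='lm')
```
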
Numerical results show that the proposed approach achieves a significant reduction in computing time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014IJSyS..45..472N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014IJSyS..45..472N"><span>Decomposition of timed automata for solving scheduling problems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nishi, Tatsushi; Wakatake, Masato</p> <p>2014-03-01</p> <p>A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iteratively solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems.
Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20040129688','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20040129688"><span>Radiation Transport Around Axisymmetric Blunt Body Vehicles Using a Modified Differential Approximation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hartung, Lin C.; Hassan, H. A.</p> <p>1992-01-01</p> <p>A moment method for computing 3-D radiative transport is applied to axisymmetric flows in thermochemical nonequilibrium. Such flows are representative of proposed aerobrake missions. The method uses the P-1 approximation to reduce the governing system of integro-differential equations to a coupled set of partial differential equations. A numerical solution method for these equations, given actual variations of the radiation properties in thermochemical nonequilibrium blunt body flows, is developed. Initial results from the method are shown and compared to tangent slab calculations.
The transport methods are found to agree to within about 10 percent in the stagnation region, with the difference increasing along the flank of the vehicle.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22391094-solution-second-order-quasi-linear-boundary-value-problems-wavelet-method','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22391094-solution-second-order-quasi-linear-boundary-value-problems-wavelet-method"><span>Solution of second order quasi-linear boundary value problems by a wavelet method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Zhang, Lei; Zhou, Youhe; Wang, Jizeng, E-mail: jzwang@lzu.edu.cn</p> <p>2015-03-10</p> <p>A wavelet Galerkin method based on expansions of Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one is about nonlinear heat conduction and the other is on the bending of elastic beams. Numerical results are obtained by the proposed wavelet method.
Through comparison with relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and the rate of convergence can even reach order 5.8.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29485659','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29485659"><span>Density functional theory studies on the solvent effects in Al(H<sub>2</sub>O)<sub>6</sub><sup>3+</sup> water-exchange reactions: the number and arrangement of outer-sphere water molecules.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Li; Zhang, Jing; Dong, Shaonan; Zhang, Fuping; Wang, Ye; Bi, Shuping</p> <p>2018-03-07</p> <p>Density functional theory (DFT) calculations combined with cluster models are performed at the B3LYP/6-311+G(d,p) level for investigating the solvent effects in Al(H<sub>2</sub>O)<sub>6</sub><sup>3+</sup> water-exchange reactions. A "One-by-one" method is proposed to obtain the most representative number and arrangement of explicit H<sub>2</sub>Os in the second hydration sphere. First, all the possible ways to locate one explicit H<sub>2</sub>O in the second sphere (N<sub>m</sub>′ = 1) based on the gas phase structure (N<sub>m</sub>′ = 0) are examined, and the optimal pathway (with the lowest energy barrier) for N<sub>m</sub>′ = 1 is determined. Next, more explicit H<sub>2</sub>Os are added one by one until the inner sphere is fully hydrogen bonded. Finally, the optimal pathways with N<sub>m</sub>′ = 0-7 are obtained. The structural and energetic parameters as well as the lifetimes of the transition states are compared with the results obtained with the "Independent-minimum" method and the "Independent-average" method, and all three methods show that the pathway with N<sub>m</sub>′ = 6 may be representative.
Our results suggest a new approach for finding the representative pathway for water-exchange reactions in other hydrated metal-ion systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010IEITC..93.2316K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010IEITC..93.2316K"><span>Reliable Wireless Broadcast with Linear Network Coding for Multipoint-to-Multipoint Real-Time Communications</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi</p> <p></p> <p>This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks like video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC), which does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back for its native packet. To prevent an increase in overhead in each packet due to piggy-back packet transmission, the network coding vector for each node is exchanged among all nodes in the negotiation phase. Each node keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signal is included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multipoint relay (MPR) or redundant transmissions such as forward error correction (FEC).
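The piggy-back coding step can be illustrated with the XOR special case of linear network coding, where the coding vector is all-ones over GF(2). This is a simplified sketch, not the paper's protocol; the node names and packet sizes are hypothetical.

```python
import numpy as np

# Four nodes each broadcast a native packet; in the next sequence a node
# piggy-backs the XOR (a linear combination over GF(2)) of the packets it
# correctly detected, so a receiver can recover one lost packet.
rng = np.random.default_rng(0)
packets = [rng.integers(0, 256, 8, dtype=np.uint8) for _ in range(4)]

def encode(received):
    """XOR-combine correctly detected packets (linear code over GF(2))."""
    out = np.zeros(8, dtype=np.uint8)
    for p in received:
        out ^= p
    return out

# Hypothetical scenario: node D missed packet 2 but heard the coded
# piggy-back (combining packets 0..3) from another node. D recovers
# packet 2 by XOR-ing out the packets it already has.
coded = encode(packets)
recovered = coded ^ packets[0] ^ packets[1] ^ packets[3]
```
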
We also implement the proposed method in a wireless testbed, and show that the proposed method achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21258747','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21258747"><span>A robust high resolution reversed-phase HPLC strategy to investigate various metabolic species in different biological models.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>D'Alessandro, Angelo; Gevi, Federica; Zolla, Lello</p> <p>2011-04-01</p> <p>Recent advancements in the field of omics sciences have paved the way for further expansion of metabolomics. Originally tied to NMR spectroscopy, metabolomic disciplines increasingly involve HPLC- and mass spectrometry (MS)-based analytical strategies and, in this context, we hereby propose a robust and efficient extraction protocol for metabolites from four different biological sources, which are subsequently analysed, identified and quantified through high resolution reversed-phase fast HPLC and mass spectrometry. To this end, we demonstrate the elevated intra- and inter-day technical reproducibility, the ease of an MRM-based MS method allowing simultaneous detection of up to 10 distinct features, and the robustness of multiple metabolite detection and quantification in four different biological samples. This strategy might become routinely applicable to various samples/biological matrices, especially low-availability ones. In parallel, we compare the present strategy for targeted detection of a representative metabolite, L-glutamic acid, with our previously proposed chemical derivatization with dansyl chloride.
A direct comparison of the present method against spectrophotometric assays is proposed as well. An application of the proposed method is also introduced, using the SAOS-2 cell line, either induced or non-induced to express the TAp63 isoform of the p63 gene, as a model for determination of variations of glutamate concentrations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22431526','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22431526"><span>A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Suk, Heung-Il; Lee, Seong-Whan</p> <p>2013-02-01</p> <p>As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI, in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes.
From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing its results and success on three public databases.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28237929','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28237929"><span>Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N</p> <p>2017-05-01</p> <p>This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed, as well as two new algorithms for their solution. The first one, called simple low-rank tensor completion via TT (SiLRTC-TT), is intimately related to minimizing a nuclear norm based on TT rank. The second one derives from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT.
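The TT rank underlying both algorithms is defined from balanced matricizations of the tensor. A small sketch of computing those unfolding ranks (illustrative only, not the completion algorithms themselves):

```python
import numpy as np

def tt_unfolding_ranks(T, tol=1e-10):
    """Ranks of the mode-(1..k) unfoldings, which define the TT rank.

    For a d-way tensor, the k-th balanced matricization T_[k] has shape
    (n1*...*nk) x (n_{k+1}*...*nd); its matrix rank is the k-th TT rank."""
    dims = T.shape
    ranks = []
    for k in range(1, len(dims)):
        unfolding = T.reshape(int(np.prod(dims[:k])), -1)
        ranks.append(int(np.linalg.matrix_rank(unfolding, tol=tol)))
    return ranks

# A separable (outer-product) tensor has all TT ranks equal to 1.
a, b, c = np.arange(1, 3), np.arange(1, 4), np.arange(1, 5)
T = np.einsum('i,j,k->ijk', a, b, c)
ranks = tt_unfolding_ranks(T)
```
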
Simulation results for color image and video recovery show the clear advantage of our method over the other methods tested.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..337a2020H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..337a2020H"><span>A modeling of dynamic storage assignment for order picking in beverage warehousing with Drive-in Rack system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hadi, M. Z.; Djatna, T.; Sugiarto</p> <p>2018-04-01</p> <p>This paper develops a dynamic storage assignment model to solve the storage assignment problem (SAP) for beverage order picking in a drive-in rack warehousing system, determining the appropriate storage location and space for each beverage product dynamically so that the performance of the system can be improved. This study constructs a graph model to represent drive-in rack storage positions, then combines association rule mining, class-based storage policies and an arrangement rule algorithm to determine an appropriate storage location and arrangement of the products according to dynamic orders from customers. The performance of the proposed model is measured in terms of rule adjacency accuracy, travel distance (for the picking process) and the probability that a product expires, using a Last Come First Serve (LCFS) queue approach. Finally, the proposed model is implemented through computer simulation, and its performance is compared with that of different storage assignment methods as well.
The results indicate that the proposed model outperforms the other storage assignment methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013SPIE.9067E..0XM','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013SPIE.9067E..0XM"><span>A Kinect based sign language recognition system using spatio-temporal features</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Memiş, Abbas; Albayrak, Songül</p> <p>2013-12-01</p> <p>This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses a motion difference and accumulation approach for temporal gesture analysis. The motion accumulation method, which is an effective method for temporal domain analysis of gestures, produces an accumulated motion image by combining the differences of successive video frames. Then, the 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images, and the temporal domain features are transformed into the spatial domain. These processes are performed on RGB images and depth maps separately. DCT coefficients that represent sign gestures are picked up via zigzag scanning, and feature vectors are generated. In order to recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is employed. The performance of the proposed sign language recognition system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories.
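The motion accumulation and DCT feature pipeline described in this abstract can be sketched as follows on toy frames. The frame content, image size, feature length, and the particular zigzag ordering are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.fft import dctn

# Toy gesture: a bright 4x4 block moving one pixel right per frame.
frames = []
for i in range(6):
    f = np.zeros((32, 32))
    f[10:14, 5 + i:9 + i] = 1.0
    frames.append(f)

# Accumulated motion image: sum of absolute successive-frame differences.
ami = sum(np.abs(b - a) for a, b in zip(frames, frames[1:]))

# 2D DCT maps the temporal-domain accumulation into spatial-frequency
# coefficients; low-frequency coefficients are kept via zigzag scanning.
coeffs = dctn(ami, norm='ortho')

def zigzag(m, n):
    """First n coefficients in a zigzag (anti-diagonal) order."""
    idx = sorted(((i, j) for i in range(m.shape[0]) for j in range(m.shape[1])),
                 key=lambda ij: (ij[0] + ij[1],
                                 ij[1] if (ij[0] + ij[1]) % 2 else ij[0]))
    return np.array([m[i, j] for i, j in idx[:n]])

feature = zigzag(coeffs, 64)  # feature vector for, e.g., a k-NN classifier
```
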
The proposed sign language recognition system achieves promising success rates.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28481914','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28481914"><span>A modified belief entropy in Dempster-Shafer framework.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhou, Deyun; Tang, Yongchuan; Jiang, Wen</p> <p>2017-01-01</p> <p>How to quantify the uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in the Dempster-Shafer framework; however, the existing studies mainly focus on the mass function itself, while the available information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Without taking full advantage of the information in the body of evidence, the existing methods are somewhat inefficient. In this paper, a modified belief entropy is proposed by considering the scale of the FOD and the relative scale of a focal element with respect to the FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. Moreover, with less information loss, the new measure can overcome the shortcomings of some other uncertainty measures.
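Deng entropy and the modified measure can be sketched directly from their definitions. The FOD-scaling factor exp((|A|-1)/|X|) used below is written as commonly reported for this modification; treat the exact form as an assumption, and note that for Bayesian evidence (all singleton focal elements) both measures reduce to Shannon entropy.

```python
import math

def deng_entropy(masses):
    """Deng entropy: E_d(m) = -sum m(A) * log2( m(A) / (2^|A| - 1) ).

    `masses` maps focal elements (tuples of hypotheses) to mass values."""
    return -sum(m * math.log2(m / (2 ** len(A) - 1))
                for A, m in masses.items() if m > 0)

def modified_belief_entropy(masses, fod_size):
    """Modified belief entropy: each Deng-entropy term is additionally
    scaled by exp((|A| - 1) / |X|), bringing the FOD size |X| into the
    measure (assumed form of the paper's definition)."""
    return -sum(m * math.log2(m / (2 ** len(A) - 1)
                              * math.exp((len(A) - 1) / fod_size))
                for A, m in masses.items() if m > 0)

# Bayesian evidence: both measures reduce to Shannon entropy (1 bit here).
bayesian = {('a',): 0.5, ('b',): 0.5}
# Evidence with a multi-element focal set: the FOD scaling lowers the value.
multi = {('a',): 0.4, ('a', 'b'): 0.6}
```
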
A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5421761','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5421761"><span>A modified belief entropy in Dempster-Shafer framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhou, Deyun; Jiang, Wen</p> <p>2017-01-01</p> <p>How to quantify the uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in the Dempster-Shafer framework; however, existing studies mainly focus on the mass function itself, while the available information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Because they do not take full advantage of the information in the body of evidence, the existing methods are not fully efficient. In this paper, a modified belief entropy is proposed that considers the scale of the FOD and the relative scale of a focal element with respect to the FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. Moreover, with less information loss, the new measure overcomes the shortcomings of some other uncertainty measures. A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method. 
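The entropy measures discussed above can be sketched numerically. Deng entropy is E_d(m) = -Σ_A m(A) log2(m(A)/(2^|A|-1)); the modification weights each focal element by the scale |X| of the FOD. The exp((|A|-1)/|X|) correction factor used below is an assumption about the paper's exact form and should be checked against the original; for singleton-only mass functions both measures reduce to Shannon entropy, matching the probability-consistency property claimed in the abstract.

```python
import math

def deng_entropy(m):
    # m: dict mapping frozenset focal elements -> mass values summing to 1.
    # Each focal element A contributes -m(A) * log2(m(A) / (2**|A| - 1)).
    return -sum(v * math.log2(v / (2 ** len(a) - 1))
                for a, v in m.items() if v > 0)

def modified_belief_entropy(m, fod_size):
    # Additionally weights each focal element by exp((|A|-1)/|X|) inside the
    # logarithm, so the measure reflects the scale |X| of the frame of
    # discernment. (The exact correction factor is an assumption.)
    return -sum(v * math.log2(v / (2 ** len(a) - 1)
                              * math.exp((len(a) - 1) / fod_size))
                for a, v in m.items() if v > 0)
```

For a singleton-only BPA such as m({a}) = m({b}) = 0.5, both functions return 1 bit, the Shannon entropy; for a compound focal element such as m({a, b}) = 1, the FOD-scale factor pulls the modified entropy below Deng entropy, reflecting the extra information carried by the scale of the frame.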
PMID:28481914</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> </body> </html>