Sample records for mixed over-complete dictionary

  1. Sparse Representation for Infrared Dim Target Detection via a Discriminative Over-Complete Dictionary Learned Online

    PubMed Central

    Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju

    2014-01-01

    It is difficult for structural over-complete dictionaries, such as the Gabor function, and for discriminative over-complete dictionaries that are learned offline and classified manually, to represent natural images with ideal sparseness and to enhance the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation over a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online according to the content of the infrared image by the K-singular value decomposition (K-SVD) algorithm. The adaptive morphological over-complete dictionary is then divided automatically into a target over-complete dictionary describing target signals and a background over-complete dictionary embedding background, by the criterion that atoms in the target over-complete dictionary can be decomposed more sparsely over a Gaussian over-complete dictionary than those in the background over-complete dictionary. This discriminative over-complete dictionary not only captures significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more efficiently than a discriminative over-complete dictionary learned offline and classified manually. The target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries but not over the opposite ones, so their residuals after reconstruction with the prescribed number of target and background atoms differ markedly. Experimental results show that the proposed approach not only improves sparsity more efficiently but also enhances small target detection performance more effectively. PMID:24871988
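
    A minimal sketch (not the authors' code) of the decision rule the abstract describes: a patch is called target or background according to which sub-dictionary reconstructs it with the smaller residual after sparse coding. Dictionary sizes, the sparsity level, and the random placeholder dictionaries are illustrative assumptions.

      # Residual-based target/background test over two sub-dictionaries (sketch).
      import numpy as np
      from sklearn.linear_model import orthogonal_mp

      def residual(D, y, n_nonzero):
          """Reconstruction residual of y after OMP coding over dictionary D."""
          coef = orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero)
          return np.linalg.norm(y - D @ coef)

      rng = np.random.default_rng(0)
      patch_dim, n_atoms = 64, 128                       # 8x8 patches, over-complete
      D_target = rng.standard_normal((patch_dim, n_atoms))
      D_background = rng.standard_normal((patch_dim, n_atoms))
      D_target /= np.linalg.norm(D_target, axis=0)       # unit-norm atoms
      D_background /= np.linalg.norm(D_background, axis=0)

      y = rng.standard_normal(patch_dim)                 # a vectorized image patch
      is_target = residual(D_target, y, 5) < residual(D_background, y, 5)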

  2. Sparse representation for infrared Dim target detection via a discriminative over-complete dictionary learned online.

    PubMed

    Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju

    2014-05-27

    It is difficult for structural over-complete dictionaries, such as the Gabor function, and for discriminative over-complete dictionaries that are learned offline and classified manually, to represent natural images with ideal sparseness and to enhance the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation over a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online according to the content of the infrared image by the K-singular value decomposition (K-SVD) algorithm. The adaptive morphological over-complete dictionary is then divided automatically into a target over-complete dictionary describing target signals and a background over-complete dictionary embedding background, by the criterion that atoms in the target over-complete dictionary can be decomposed more sparsely over a Gaussian over-complete dictionary than those in the background over-complete dictionary. This discriminative over-complete dictionary not only captures significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more efficiently than a discriminative over-complete dictionary learned offline and classified manually. The target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries but not over the opposite ones, so their residuals after reconstruction with the prescribed number of target and background atoms differ markedly. Experimental results show that the proposed approach not only improves sparsity more efficiently but also enhances small target detection performance more effectively.

  3. Concordancers and Dictionaries as Problem-Solving Tools for ESL Academic Writing

    ERIC Educational Resources Information Center

    Yoon, Choongil

    2016-01-01

    The present study investigated how 6 Korean ESL graduate students in Canada used a suite of freely available reference resources, consisting of Web-based corpus tools, Google search engines, and dictionaries, for solving linguistic problems while completing an authentic academic writing assignment in English. Using a mixed methods design, the…

  4. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
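
    For reference, a compact NumPy sketch of plain Matching Pursuit, one of the four algorithms the paper evaluates; the wavelet-packet dictionary itself is not built here, so any matrix of unit-norm atoms can stand in for it.

      # Greedy Matching Pursuit: repeatedly pick the atom most correlated with the residual.
      import numpy as np

      def matching_pursuit(D, y, n_iter=10):
          """Return (coefficients, residual) of a greedy MP decomposition of y over D."""
          residual = y.astype(float).copy()
          coef = np.zeros(D.shape[1])
          for _ in range(n_iter):
              correlations = D.T @ residual
              k = int(np.argmax(np.abs(correlations)))
              coef[k] += correlations[k]                 # atoms assumed unit-norm
              residual = residual - correlations[k] * D[:, k]
          return coef, residual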

  5. Dictionary Learning Algorithms for Sparse Representation

    PubMed Central

    Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an over-complete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error). PMID:12590811
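
    The paper's algorithms are FOCUSS-based; the snippet below is only a generic alternating dictionary-learning loop (scikit-learn's DictionaryLearning), shown to illustrate the overall shape the abstract describes: sparse-code the data, update the dictionary, repeat. The data, sizes and sparsity level are placeholder assumptions.

      import numpy as np
      from sklearn.decomposition import DictionaryLearning

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 64))        # 500 training signals of dimension 64
      learner = DictionaryLearning(
          n_components=128,                     # over-complete: 128 atoms for dimension 64
          transform_algorithm="omp",
          transform_n_nonzero_coefs=5,
          max_iter=20,
          random_state=0,
      )
      codes = learner.fit_transform(X)          # sparse codes, shape (500, 128)
      D = learner.components_                   # learned dictionary, shape (128, 64)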

  6. Aveiro method in reproducing kernel Hilbert spaces under complete dictionary

    NASA Astrophysics Data System (ADS)

    Mai, Weixiong; Qian, Tao

    2017-12-01

    Aveiro Method is a sparse representation method in reproducing kernel Hilbert spaces (RKHS) that gives orthogonal projections in linear combinations of reproducing kernels over uniqueness sets. It suffers, however, from the need to determine uniqueness sets in the underlying RKHS. In general spaces, uniqueness sets are not easy to identify, and the convergence speed of the Aveiro Method is a further concern. To avoid these difficulties we propose a new Aveiro Method based on a dictionary and the matching pursuit idea. In fact we do more: the new Aveiro Method is related to the recently proposed Pre-Orthogonal Greedy Algorithm (P-OGA), which involves completion of a given dictionary. The new method is called Aveiro Method Under Complete Dictionary (AMUCD). The complete dictionary consists of all directional derivatives of the underlying reproducing kernels. We show that, under the boundary vanishing condition, which holds for the classical Hardy and Paley-Wiener spaces, the complete dictionary enables an efficient expansion of any given element in the Hilbert space. The proposed method reveals new and advanced aspects of both the Aveiro Method and the greedy algorithm.

  7. Discriminative object tracking via sparse representation and online dictionary learning.

    PubMed

    Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua

    2014-04-01

    We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching scheme. This algorithm consists of two parts: local sparse coding with an online-updated discriminative dictionary for tracking (SOD part), and keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, the local image patches of the target object and background are represented by their sparse codes using an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes the information of both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to the variation of the foreground and background during tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching scheme to improve the performance of the SOD part. With the help of sparse representation and the online-updated discriminative dictionary, the KP part is more robust than traditional methods at rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.

  8. Effects of Printed, Pocket Electronic, and Online Dictionaries on High School Students' English Vocabulary Retention

    ERIC Educational Resources Information Center

    Chiu, Li-Ling; Liu, Gi-Zen

    2013-01-01

    This study obtained empirical evidence regarding the effects of using printed dictionaries (PD), pocket electronic dictionaries (PED), and online type-in dictionaries (OTID) on English vocabulary retention at a junior high school. A mixed-methods research methodology was adopted in this study. Thirty-three seventh graders were asked to use all…

  9. Infrared small target detection in heavy sky scene clutter based on sparse representation

    NASA Astrophysics Data System (ADS)

    Liu, Depeng; Li, Zhengzhou; Liu, Bing; Chen, Wenhao; Liu, Tianmei; Cao, Lei

    2017-09-01

    A novel infrared small target detection method based on sparse representation of sky clutter and target is proposed in this paper to cope with the uncertainty in representing clutter and target. The sky scene background clutter is described by a fractal random field, and it is perceived and eliminated via sparse representation over a fractal background over-complete dictionary (FBOD). The infrared small target signal is simulated by a generalized Gaussian intensity model and is expressed by a generalized Gaussian target over-complete dictionary (GGTOD), which can describe a small target more efficiently than traditional structured dictionaries. The infrared image is decomposed over the union of the FBOD and GGTOD, and the sparse representation energies of the target signal and background clutter decomposed over the GGTOD differ so distinctly that this energy is adopted to distinguish target from clutter. Experiments are conducted, and the results show that the proposed approach improves small target detection performance, especially under heavy clutter, because background clutter can be efficiently perceived and suppressed by the FBOD while the changing target can be represented accurately by the GGTOD.

  10. Definition and maintenance of a telemetry database dictionary

    NASA Technical Reports Server (NTRS)

    Knopf, William P. (Inventor)

    2007-01-01

    A telemetry dictionary database includes a component for receiving spreadsheet workbooks of telemetry data over a web-based interface from other computer devices. Another component routes the spreadsheet workbooks to a specified directory on the host processing device. A process then checks the received spreadsheet workbooks for errors, and if no errors are detected the spreadsheet workbooks are routed to another directory to await initiation of a remote database loading process. The loading process first converts the spreadsheet workbooks to comma separated value (CSV) files. Next, a network connection with the computer system that hosts the telemetry dictionary database is established and the CSV files are ported to the computer system that hosts the telemetry dictionary database. This is followed by a remote initiation of a database loading program. Upon completion of loading a flatfile generation program is manually initiated to generate a flatfile to be used in a mission operations environment by the core ground system.
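
    A hypothetical sketch of just the workbook-to-CSV conversion step described above; the file naming, sheet layout and the crude empty-row check are illustrative, not details of the patented system.

      import csv
      from pathlib import Path
      from openpyxl import load_workbook

      def workbook_to_csv(xlsx_path: Path, out_dir: Path) -> list[Path]:
          """Dump every worksheet of a telemetry workbook to its own CSV file."""
          out_dir.mkdir(parents=True, exist_ok=True)
          written = []
          wb = load_workbook(xlsx_path, read_only=True, data_only=True)
          for sheet in wb.worksheets:
              out_path = out_dir / f"{xlsx_path.stem}_{sheet.title}.csv"
              with out_path.open("w", newline="") as fh:
                  writer = csv.writer(fh)
                  for row in sheet.iter_rows(values_only=True):
                      if all(cell is None for cell in row):   # skip empty rows
                          continue
                      writer.writerow(row)
              written.append(out_path)
          return written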

  11. Corpora and Collocations in Chinese-English Dictionaries for Chinese Users

    ERIC Educational Resources Information Center

    Xia, Lixin

    2015-01-01

    The paper identifies the major problems of the Chinese-English dictionary in representing collocational information after an extensive survey of nine dictionaries popular among Chinese users. It is found that the Chinese-English dictionary only provides the collocation types of "v+n" and "v+n," but completely ignores those of…

  12. The extraction of spot signal in Shack-Hartmann wavefront sensor based on sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Yanyan; Xu, Wentao; Chen, Suting; Ge, Junxiang; Wan, Fayu

    2016-07-01

    Several techniques have been used with Shack-Hartmann wavefront sensors to determine the local wavefront gradient across each lenslet. However, the centroid error of a Shack-Hartmann wavefront sensor is relatively large because of the skylight background and detector noise. In this paper, we introduce a new method based on sparse representation to extract the target signal from the background and the noise. First, an over-complete dictionary of the spot signal is constructed based on a two-dimensional Gaussian model. Then the Shack-Hartmann image is divided into sub-blocks, and the coefficients of each block are computed over the over-complete dictionary. Since the coefficients of the noise and the target differ greatly, the target is extracted by applying a threshold to the coefficients. Experimental results show that the target can be well extracted and that the deviation, RMS and PV of the centroid are all smaller than those of the threshold-subtraction method.
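
    An illustrative sketch of the extraction idea, assuming sizes, sigma and threshold values that are not from the paper: build a dictionary of shifted 2-D Gaussian spots (one centre per pixel; finer shifts would make it over-complete), sparse-code an image block, and keep only atoms whose coefficients exceed a threshold.

      import numpy as np
      from sklearn.linear_model import orthogonal_mp

      def gaussian_spot(size, cx, cy, sigma=1.5):
          """Vectorized, unit-norm 2-D Gaussian spot centred at (cx, cy)."""
          y, x = np.mgrid[0:size, 0:size]
          g = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
          return (g / np.linalg.norm(g)).ravel()

      size = 8
      D = np.stack([gaussian_spot(size, cx, cy)
                    for cx in range(size) for cy in range(size)], axis=1)

      block = np.random.default_rng(0).standard_normal(size * size)   # an image block
      coef = orthogonal_mp(D, block, n_nonzero_coefs=8)
      spot_only = D @ np.where(np.abs(coef) > 0.5, coef, 0.0)  # keep large coefficients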

  13. The Research on Denoising of SAR Image Based on Improved K-SVD Algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Linglong; Li, Changkai; Wang, Yueqin

    2018-04-01

    SAR images are often corrupted by noise during acquisition and transmission, which can greatly reduce image quality and cause great difficulties for image processing. The existing complete DCT dictionary algorithm is fast, but its denoising effect is poor. To address the problem of poor denoising, the proposed K-SVD (K-means and singular value decomposition) algorithm is applied to image noise suppression. Firstly, the sparse dictionary structure is introduced in detail; the dictionary has a compact representation and can be trained effectively on the image signal. Then, the sparse dictionary is trained by the K-SVD algorithm according to the sparse representation over the dictionary. The algorithm has advantages in high-dimensional data processing. Experimental results show that the proposed algorithm can remove speckle noise more effectively than the complete DCT dictionary and retain edge details better.

  14. Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.

    PubMed

    Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen

    2016-07-27

    Classical dictionary learning methods for video coding suffer from high computational complexity and impaired coding efficiency because they disregard the underlying distribution of the data. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee of approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of the dictionary by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods, e.g. K-SVD, do. Since the selected volumes are supposed to be i.i.d. samples from the underlying distribution, decomposition coefficients attained from the trained dictionary are desirable for sparse representation. Theoretically, it is proved that the proposed STOL achieves better approximation for sparse representation than K-SVD and maintains both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in terms of convergence speed and computational complexity, and its upper bound for prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC, HEVC and existing super-resolution based methods in rate-distortion performance and visual quality.
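
    Not the STOL algorithm itself, but a generic online (mini-batch) dictionary learner from scikit-learn, shown only to illustrate the "update on one randomly drawn batch per iteration" idea the abstract contrasts with batch K-SVD; the streamed volumes and sizes are placeholders.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      rng = np.random.default_rng(0)
      learner = MiniBatchDictionaryLearning(
          n_components=256, batch_size=32,
          transform_algorithm="omp", transform_n_nonzero_coefs=8,
          random_state=0,
      )
      for _ in range(100):                          # stream of training batches
          batch = rng.standard_normal((32, 125))    # e.g. vectorized 5x5x5 volumes
          learner.partial_fit(batch)                # one stochastic update per batch
      D = learner.components_                       # learned dictionary, shape (256, 125)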

  15. Classification of multiple sclerosis lesions using adaptive dictionary learning.

    PubMed

    Deshpande, Hrishikesh; Maurel, Pierre; Barillot, Christian

    2015-12-01

    This paper presents a sparse representation and adaptive dictionary learning based method for automated classification of multiple sclerosis (MS) lesions in magnetic resonance (MR) images. Manual delineation of MS lesions is a time-consuming task, requiring neuroradiology experts to analyze huge volumes of MR data. This, together with the high intra- and inter-observer variability, necessitates automated MS lesion classification methods. Among the many image representation models and classification methods that can be used for this purpose, we investigate the use of sparse modeling. In recent years, sparse representation has evolved as a tool for modeling data using a few basis elements of an over-complete dictionary and has found applications in many image processing tasks, including classification. We propose a supervised classification approach by learning dictionaries specific to the lesions and to individual healthy brain tissues, which include white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). The size of the dictionary learned for each class plays a major role in data representation, and it is an even more crucial element in the case of competitive classification. Our approach adapts the size of the dictionary for each class, depending on the complexity of the underlying data. The algorithm is validated using 52 multi-sequence MR images acquired from 13 MS patients. The results demonstrate the effectiveness of our approach in MS lesion classification. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Trying Out a New Dictionary.

    ERIC Educational Resources Information Center

    Benson, Morton; Benson, Evelyn

    1988-01-01

    Describes the BBI Combinatory Dictionary of English and demonstrates its usefulness for advanced learners of English by administering a monolingual completion test, first without a dictionary and then with the BBI, to Hungarian and Russian English teachers. Both groups' scores improved dramatically on the posttest. (LMO)

  17. Monolingual Dictionary Use in an EFL Context

    ERIC Educational Resources Information Center

    Ali, Holi Ibrahim Holi

    2012-01-01

    Caledonian College of Engineering, Oman, has been encouraging its students to use monolingual dictionaries rather than bilingual or bilingualized ones in the classroom and during exams. This policy has been received with mixed feelings and attitudes. Therefore, this study strives to explore teachers' and students' attitudes about the use of…

  18. Nonparametric Coupled Bayesian Dictionary and Classifier Learning for Hyperspectral Classification.

    PubMed

    Akhtar, Naveed; Mian, Ajmal

    2017-10-03

    We present a principled approach to learn a discriminative dictionary along with a linear classifier for hyperspectral classification. Our approach places Gaussian Process priors over the dictionary to account for the relative smoothness of the natural spectra, whereas the classifier parameters are sampled from multivariate Gaussians. We employ two Beta-Bernoulli processes to jointly infer the dictionary and the classifier. These processes are coupled under the same sets of Bernoulli distributions. In our approach, these distributions signify the frequency of dictionary atom usage in representing class-specific training spectra, which also makes the dictionary discriminative. Due to the coupling between the dictionary and the classifier, the popularity of the atoms for representing different classes gets encoded into the classifier. This helps in predicting the class labels of test spectra, which are first represented over the dictionary by solving a simultaneous sparse optimization problem. The labels of the spectra are predicted by feeding the resulting representations to the classifier. Our approach exploits the nonparametric Bayesian framework to automatically infer the dictionary size, the key parameter in discriminative dictionary learning. Moreover, it also has the desirable property of adaptively learning the association between the dictionary atoms and the class labels by itself. We use Gibbs sampling to infer the posterior probability distributions over the dictionary and the classifier under the proposed model, for which we derive analytical expressions. To establish the effectiveness of our approach, we test it on benchmark hyperspectral images. The classification performance is compared with state-of-the-art dictionary learning-based classification methods.

  19. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  20. The Use of Electronic Dictionaries for Pronunciation Practice by University EFL Students

    ERIC Educational Resources Information Center

    Metruk, Rastislav

    2017-01-01

    This paper attempts to explore how Slovak learners of English use electronic dictionaries with regard to pronunciation practice and improvement. A total of 24 Slovak university students (subjects) completed a questionnaire which contained pronunciation-related questions in connection with the use of electronic dictionaries. The questions primarily…

  1. Semi-Supervised Sparse Representation Based Classification for Face Recognition With Insufficient Labeled Samples

    NASA Astrophysics Data System (ADS)

    Gao, Yuan; Ma, Jiayi; Yuille, Alan L.

    2017-05-01

    This paper addresses the problem of face recognition when there are only a few labeled examples, or even only a single one, of the face that we wish to recognize. Moreover, these examples are typically corrupted by nuisance variables, both linear (i.e., additive nuisance variables such as bad lighting or wearing of glasses) and non-linear (i.e., non-additive pixel-wise nuisance variables such as expression changes). The small number of labeled examples means that it is hard to remove these nuisance variables between the training and testing faces to obtain good recognition performance. To address the problem we propose a method called Semi-Supervised Sparse Representation based Classification (S3RC). This is based on recent work on sparsity in which faces are represented in terms of two dictionaries: a gallery dictionary consisting of one or more examples of each person, and a variation dictionary representing linear nuisance variables (e.g., different lighting conditions, different glasses). The main idea is that (i) we use the variation dictionary to characterize the linear nuisance variables via the sparsity framework, and then (ii) prototype face images are estimated as a gallery dictionary via a Gaussian Mixture Model (GMM), with mixed labeled and unlabeled samples in a semi-supervised manner, to deal with the non-linear nuisance variations between labeled and unlabeled samples. We have conducted experiments with insufficient labeled samples, even when there is only a single labeled sample per person. Our results on the AR, Multi-PIE, CAS-PEAL, and LFW databases demonstrate that the proposed method delivers significantly improved performance over existing methods.

  2. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering.

    PubMed

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary is comprised of an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children.
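
    SDLC solves one joint optimization; the sketch below runs the three ingredients sequentially, only to show the data flow under assumed sizes: learn a temporal dictionary, sparse-code each voxel's time course, then k-means-cluster the codes.

      import numpy as np
      from sklearn.decomposition import DictionaryLearning
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      n_voxels, n_timepoints = 500, 150
      X = rng.standard_normal((n_voxels, n_timepoints))   # rows = voxel time courses

      dl = DictionaryLearning(n_components=200,           # over-complete in time
                              transform_algorithm="lasso_lars", transform_alpha=0.5,
                              max_iter=10, random_state=0)
      codes = dl.fit_transform(X)                         # sparse code per voxel

      labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(codes)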

  3. Standardized Representation of Clinical Study Data Dictionaries with CIMI Archetypes.

    PubMed

    Sharma, Deepak K; Solbrig, Harold R; Prud'hommeaux, Eric; Pathak, Jyotishman; Jiang, Guoqian

    2016-01-01

    Researchers commonly use a tabular format to describe and represent clinical study data. The lack of standardization of data dictionary's metadata elements presents challenges for their harmonization for similar studies and impedes interoperability outside the local context. We propose that representing data dictionaries in the form of standardized archetypes can help to overcome this problem. The Archetype Modeling Language (AML) as developed by the Clinical Information Modeling Initiative (CIMI) can serve as a common format for the representation of data dictionary models. We mapped three different data dictionaries (identified from dbGAP, PheKB and TCGA) onto AML archetypes by aligning dictionary variable definitions with the AML archetype elements. The near complete alignment of data dictionaries helped map them into valid AML models that captured all data dictionary model metadata. The outcome of the work would help subject matter experts harmonize data models for quality, semantic interoperability and better downstream data integration.

  4. Predefined Redundant Dictionary for Effective Depth Maps Representation

    NASA Astrophysics Data System (ADS)

    Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi

    2016-01-01

    The multi-view video plus depth (MVD) video format consists of two components: texture and depth map, where a combination of these components enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires a compression process for storage and especially for transmission. Conventional codecs are efficient for texture image compression but not for the intrinsic properties of depth maps. Depth images are indeed characterized by areas of smoothly varying grey levels separated by sharp discontinuities at the position of object boundaries. Preserving these characteristics is important to enable high quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm the effectiveness at producing sparse representations, and the competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and to represent an advantage over the ongoing 3D high efficiency video coding compression standard, particularly at medium and high bitrates.

  5. Dictionaries and distributions: Combining expert knowledge and large scale textual data content analysis : Distributed dictionary representation.

    PubMed

    Garten, Justin; Hoover, Joe; Johnson, Kate M; Boghrati, Reihane; Iskiwitch, Carol; Dehghani, Morteza

    2018-02-01

    Theory-driven text analysis has made extensive use of psychological concept dictionaries, leading to a wide range of important results. These dictionaries have generally been applied through word count methods which have proven to be both simple and effective. In this paper, we introduce Distributed Dictionary Representations (DDR), a method that applies psychological dictionaries using semantic similarity rather than word counts. This allows for the measurement of the similarity between dictionaries and spans of text ranging from complete documents to individual words. We show how DDR enables dictionary authors to place greater emphasis on construct validity without sacrificing linguistic coverage. We further demonstrate the benefits of DDR on two real-world tasks and finally conduct an extensive study of the interaction between dictionary size and task performance. These studies allow us to examine how DDR and word count methods complement one another as tools for applying concept dictionaries and where each is best applied. Finally, we provide references to tools and resources to make this method both available and accessible to a broad psychological audience.
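
    A minimal sketch of the DDR idea, not the authors' implementation: represent a concept dictionary by the mean of its words' embedding vectors and score a text by cosine similarity to that centre. The embed table is a toy stand-in for any pretrained word-embedding model.

      import numpy as np

      def ddr_score(dictionary_words, text_words, embed):
          """Cosine similarity between the dictionary centre and the text centre."""
          d = np.mean([embed[w] for w in dictionary_words if w in embed], axis=0)
          t = np.mean([embed[w] for w in text_words if w in embed], axis=0)
          return float(d @ t / (np.linalg.norm(d) * np.linalg.norm(t)))

      rng = np.random.default_rng(0)        # toy vectors standing in for word2vec/GloVe
      embed = {w: rng.standard_normal(50) for w in
               ["care", "harm", "protect", "hurt", "the", "dog", "was", "rescued"]}
      print(ddr_score(["care", "protect", "harm", "hurt"],
                      "the dog was rescued".split(), embed))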

  6. 76 FR 39090 - Contract Reporting Requirements of Intrastate Natural Gas Companies; Notice of Extension of Time...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-05

    ... delay until 90 days after the revised Form No. 549D, XML schema format, and Data Dictionary and... Form 549D, the Data Dictionary and Instructions, notice is hereby given that all section 311 and... Data Dictionary and Instructions for filing Form 549D. Staff also corrected and completed testing of a...

  7. Discriminative Bayesian Dictionary Learning for Classification.

    PubMed

    Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal

    2016-12-01

    We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels to the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition, and for object and scene-category classification, using five public datasets and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.

  8. Standardized Representation of Clinical Study Data Dictionaries with CIMI Archetypes

    PubMed Central

    Sharma, Deepak K.; Solbrig, Harold R.; Prud’hommeaux, Eric; Pathak, Jyotishman; Jiang, Guoqian

    2016-01-01

    Researchers commonly use a tabular format to describe and represent clinical study data. The lack of standardization of data dictionary’s metadata elements presents challenges for their harmonization for similar studies and impedes interoperability outside the local context. We propose that representing data dictionaries in the form of standardized archetypes can help to overcome this problem. The Archetype Modeling Language (AML) as developed by the Clinical Information Modeling Initiative (CIMI) can serve as a common format for the representation of data dictionary models. We mapped three different data dictionaries (identified from dbGAP, PheKB and TCGA) onto AML archetypes by aligning dictionary variable definitions with the AML archetype elements. The near complete alignment of data dictionaries helped map them into valid AML models that captured all data dictionary model metadata. The outcome of the work would help subject matter experts harmonize data models for quality, semantic interoperability and better downstream data integration. PMID:28269909

  9. An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection.

    PubMed

    Li, Junhui; Zhou, Weidong; Yuan, Shasha; Zhang, Yanli; Li, Chengcheng; Wu, Qi

    2016-02-01

    Automatic seizure detection has played an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. This seizure detection method is based on sparse representation with online dictionary learning and an elastic net constraint. The online learned dictionary can sparsely represent the testing samples more accurately, and the elastic net constraint, which combines the l1-norm and l2-norm, not only makes the coefficients sparse but also avoids the over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and a kernel function is applied to make the samples closer to linearly separable. Then the dictionaries of seizure and nonseizure are respectively learned from the original ictal and interictal training samples with an online dictionary optimization algorithm to compose the training dictionary. After that, the test samples are sparsely coded over the learned dictionary and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Eventually, the test samples are classified into two distinct categories, seizure or nonseizure, by comparing the reconstructed residuals. An average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44% with a false detection rate of 0.23/h and an average latency of -5.14 s have been achieved with our proposed method.
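
    A sketch of the coding and decision step only (preprocessing, kernel mapping and the online dictionary learning are omitted), under the assumption of random placeholder dictionaries: sparse-code an EEG feature vector over seizure and non-seizure dictionaries with an elastic-net penalty and compare reconstruction residuals.

      import numpy as np
      from sklearn.linear_model import ElasticNet

      def elastic_residual(D, y, alpha=0.1, l1_ratio=0.7):
          """Residual of y after elastic-net coding over the columns (atoms) of D."""
          enet = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                            fit_intercept=False, max_iter=5000)
          enet.fit(D, y)
          return np.linalg.norm(y - D @ enet.coef_)

      rng = np.random.default_rng(0)
      dim, n_atoms = 100, 60
      D_seizure = rng.standard_normal((dim, n_atoms))
      D_nonseizure = rng.standard_normal((dim, n_atoms))
      epoch = rng.standard_normal(dim)          # feature vector from one EEG epoch

      is_seizure = (elastic_residual(D_seizure, epoch)
                    < elastic_residual(D_nonseizure, epoch))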

  10. Effective description of general extensions of the Standard Model: the complete tree-level dictionary

    NASA Astrophysics Data System (ADS)

    de Blas, J.; Criado, J. C.; Pérez-Victoria, M.; Santiago, J.

    2018-03-01

    We compute all the tree-level contributions to the Wilson coefficients of the dimension-six Standard-Model effective theory in ultraviolet completions with general scalar, spinor and vector field content and arbitrary interactions. No assumption about the renormalizability of the high-energy theory is made. This provides a complete ultraviolet/infrared dictionary at the classical level, which can be used to study the low-energy implications of any model of interest, and also to look for explicit completions consistent with low-energy data.

  11. Alzheimer's disease detection via automatic 3D caudate nucleus segmentation using coupled dictionary learning with level set formulation.

    PubMed

    Al-Shaikhli, Saif Dawood Salman; Yang, Michael Ying; Rosenhahn, Bodo

    2016-12-01

    This paper presents a novel method for Alzheimer's disease classification via an automatic 3D caudate nucleus segmentation. The proposed method consists of segmentation and classification steps. In the segmentation step, we propose a novel level set cost function. The proposed cost function is constrained by a sparse representation of local image features using a dictionary learning method. We present coupled dictionaries: a feature dictionary of a grayscale brain image and a label dictionary of a caudate nucleus label image. Using online dictionary learning, the coupled dictionaries are learned from the training data. The learned coupled dictionaries are embedded into a level set function. In the classification step, a region-based feature dictionary is built. The region-based feature dictionary is learned from shape features of the caudate nucleus in the training data. The classification is based on the measure of the similarity between the sparse representation of region-based shape features of the segmented caudate in the test image and the region-based feature dictionary. The experimental results demonstrate the superiority of our method over the state-of-the-art methods by achieving a high segmentation (91.5%) and classification (92.5%) accuracy. In this paper, we find that the study of the caudate nucleus atrophy gives an advantage over the study of whole brain structure atrophy to detect Alzheimer's disease. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. A Study of the Use of a Monolingual Pedagogical Dictionary by Learners of English Engaged in Writing.

    ERIC Educational Resources Information Center

    Harvey, Keith; Yuill, Deborah

    1997-01-01

    Presents an account of a study of the role played by a dictionary in the completion of written (encoding) tasks by students of English as a foreign language. The study uses an introspective methodology based on the completion of flowcharts. Results indicate the importance of information on spelling and meanings and the neglect of coded syntactic…

  13. Discriminative Structured Dictionary Learning on Grassmann Manifolds and Its Application on Image Restoration.

    PubMed

    Pan, Han; Jing, Zhongliang; Qiao, Lingfeng; Li, Minzhe

    2017-09-25

    Image restoration is a difficult and challenging problem in various imaging applications. However, despite the benefits of a single overcomplete dictionary, several challenges remain in capturing the geometric structure of the image of interest. To more accurately represent the local structures of the underlying signals, we propose a new problem formulation for sparse representation with a block-orthogonal constraint. There are three contributions. First, a framework for discriminative structured dictionary learning is proposed, which leads to a smooth manifold structure and quotient search spaces. Second, an alternating minimization scheme is proposed that takes both the cost function and the constraints into account. This is achieved by iteratively alternating between updating the block structure of the dictionary defined on the Grassmann manifold and sparsifying the dictionary atoms automatically. Third, Riemannian conjugate gradient is considered to track local subspaces efficiently with a convergence guarantee. Extensive experiments on various datasets demonstrate that the proposed method outperforms state-of-the-art methods on the removal of mixed Gaussian-impulse noise.

  14. Chemical annotation of small and peptide-like molecules at the Protein Data Bank

    PubMed Central

    Young, Jasmine Y.; Feng, Zukang; Dimitropoulos, Dimitris; Sala, Raul; Westbrook, John; Zhuravleva, Marina; Shao, Chenghua; Quesada, Martha; Peisach, Ezra; Berman, Helen M.

    2013-01-01

    Over the past decade, the number of polymers and their complexes with small molecules in the Protein Data Bank archive (PDB) has continued to increase significantly. To support scientific advancements and ensure the best quality and completeness of the data files over the next 10 years and beyond, the Worldwide PDB partnership that manages the PDB archive is developing a new deposition and annotation system. This system focuses on efficient data capture across all supported experimental methods. The new deposition and annotation system is composed of four major modules that together support all of the processing requirements for a PDB entry. In this article, we describe one such module called the Chemical Component Annotation Tool. This tool uses information from both the Chemical Component Dictionary and Biologically Interesting molecule Reference Dictionary to aid in annotation. Benchmark studies have shown that the Chemical Component Annotation Tool provides significant improvements in processing efficiency and data quality. Database URL: http://wwpdb.org PMID:24291661

  15. Chemical annotation of small and peptide-like molecules at the Protein Data Bank.

    PubMed

    Young, Jasmine Y; Feng, Zukang; Dimitropoulos, Dimitris; Sala, Raul; Westbrook, John; Zhuravleva, Marina; Shao, Chenghua; Quesada, Martha; Peisach, Ezra; Berman, Helen M

    2013-01-01

    Over the past decade, the number of polymers and their complexes with small molecules in the Protein Data Bank archive (PDB) has continued to increase significantly. To support scientific advancements and ensure the best quality and completeness of the data files over the next 10 years and beyond, the Worldwide PDB partnership that manages the PDB archive is developing a new deposition and annotation system. This system focuses on efficient data capture across all supported experimental methods. The new deposition and annotation system is composed of four major modules that together support all of the processing requirements for a PDB entry. In this article, we describe one such module called the Chemical Component Annotation Tool. This tool uses information from both the Chemical Component Dictionary and Biologically Interesting molecule Reference Dictionary to aid in annotation. Benchmark studies have shown that the Chemical Component Annotation Tool provides significant improvements in processing efficiency and data quality. Database URL: http://wwpdb.org.

  16. Classification of multispectral or hyperspectral satellite imagery using clustering of sparse approximations on sparse representations in learned dictionaries obtained using efficient convolutional sparse coding

    DOEpatents

    Moody, Daniela; Wohlberg, Brendt

    2018-01-02

    An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.

  17. Image super-resolution via sparse representation.

    PubMed

    Yang, Jianchao; Wright, John; Huang, Thomas S; Ma, Yi

    2010-11-01

    This paper presents a new approach to single-image super-resolution, based on sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low resolution and high resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low resolution image patch can be applied with the high resolution image patch dictionary to generate a high resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle super-resolution with noisy inputs in a more unified framework.
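
    A sketch of the per-patch reconstruction step: the low-resolution patch is sparse-coded over the low-resolution dictionary and the same coefficients are applied to the coupled high-resolution dictionary. The jointly trained dictionary pair is assumed to exist; random matrices stand in for it here, and the patch sizes are illustrative.

      import numpy as np
      from sklearn.linear_model import orthogonal_mp

      rng = np.random.default_rng(0)
      n_atoms = 512
      D_lr = rng.standard_normal((9, n_atoms))    # 3x3 LR patches (vectorized)
      D_hr = rng.standard_normal((81, n_atoms))   # coupled 9x9 HR patches (vectorized)
      D_lr /= np.linalg.norm(D_lr, axis=0)        # unit-norm LR atoms

      lr_patch = rng.standard_normal(9)
      alpha = orthogonal_mp(D_lr, lr_patch, n_nonzero_coefs=3)
      hr_patch = (D_hr @ alpha).reshape(9, 9)     # estimated high-resolution patch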

  18. High resolution OCT image generation using super resolution via sparse representation

    NASA Astrophysics Data System (ADS)

    Asif, Muhammad; Akram, Muhammad Usman; Hassan, Taimur; Shaukat, Arslan; Waqar, Razi

    2017-02-01

    In this paper we propose a technique for obtaining a high resolution (HR) image from a single low resolution (LR) image, using a jointly learned dictionary, on the basis of image statistics research suggesting that, with an appropriate choice of an over-complete dictionary, image patches can be well represented as a sparse linear combination of its atoms. Medical imaging for clinical analysis and medical intervention is used for creating visual representations of the interior of the body, as well as visual representations of the function of some organs or tissues (physiology). A number of medical imaging techniques are in use, such as MRI, CT, X-ray and Optical Coherence Tomography (OCT). OCT is one of the newer technologies in medical imaging; one of its uses is in ophthalmology, where it is applied to the analysis of choroidal thickness in healthy and diseased eyes, in conditions such as age-related macular degeneration, central serous chorioretinopathy, diabetic retinopathy and inherited retinal dystrophies. We propose a technique for enhancing OCT images that can be used to clearly identify and analyze such diseases. Our method uses dictionary learning to generate a high resolution image from a single input LR image. We train two joint dictionaries, one with OCT images and the second with multiple different natural images, and compare the results with a previous SR technique. For both dictionaries, the proposed method produces HR images that are superior in quality to those of the other SR method. The technique is particularly effective for noisy OCT images, producing up-sampled and enhanced OCT images.

  19. Robust recognition of degraded machine-printed characters using complementary similarity measure and error-correction learning

    NASA Astrophysics Data System (ADS)

    Hagita, Norihiro; Sawaki, Minako

    1995-03-01

    Most conventional methods in character recognition extract geometrical features such as stroke direction, connectivity of strokes, etc., and compare them with reference patterns in a stored dictionary. Unfortunately, geometrical features are easily degraded by blurs, stains and the graphical background designs used in Japanese newspaper headlines. This noise must be removed before recognition commences, but no preprocessing method is completely accurate. This paper proposes a method for recognizing degraded characters and characters printed on graphical background designs. This method is based on the binary image feature method and uses binary images as features. A new similarity measure, called the complementary similarity measure, is used as a discriminant function. It compares the similarity and dissimilarity of binary patterns with reference dictionary patterns. Experiments are conducted using the standard character database ETL-2, which consists of machine-printed Kanji, Hiragana, Katakana, alphanumeric, and special characters. The results show that this method is much more robust against noise than the conventional geometrical feature method. It also achieves high recognition rates of over 92% for characters with textured foregrounds, over 98% for characters with textured backgrounds, over 98% for outline fonts, and over 99% for reverse contrast characters.

  20. Dictionary of Marketing Terms.

    ERIC Educational Resources Information Center

    Everhardt, Richard M.

    A listing of words and definitions compiled from more than 10 college and high school textbooks is presented in this dictionary of marketing terms. Over 1,200 entries of terms used in retailing, wholesaling, economics, and investments are included. This dictionary was designed to aid both instructors and students to better understand the…

  1. Robust sliding-window reconstruction for accelerating the acquisition of MR fingerprinting.

    PubMed

    Cao, Xiaozhi; Liao, Congyu; Wang, Zhixing; Chen, Ying; Ye, Huihui; He, Hongjian; Zhong, Jianhui

    2017-10-01

    To develop a method for accelerated and robust MR fingerprinting (MRF) with improved image reconstruction and parameter matching processes. A sliding-window (SW) strategy was applied to MRF, in which signal and dictionary matching was conducted between fingerprints consisting of mixed-contrast image series reconstructed from consecutive data frames segmented by a sliding window, and a precalculated mixed-contrast dictionary. The effectiveness and performance of this new method, dubbed SW-MRF, was evaluated in both phantom and in vivo experiments. Error quantifications were conducted on results obtained with various settings of the SW reconstruction parameters. Compared with the original MRF strategy, the results of both phantom and in vivo experiments demonstrate that the proposed SW-MRF strategy either provided similar accuracy with reduced acquisition time, or improved accuracy with equal acquisition time. Parametric maps of T1, T2, and proton density of comparable quality could be achieved with a two-fold or more reduction in acquisition time. The effect of sliding-window width on dictionary sensitivity was also estimated. The novel SW-MRF recovers high quality image frames from highly undersampled MRF data, which enables more robust dictionary matching with reduced numbers of data frames. This time efficiency may facilitate MRF applications in time-critical clinical settings. Magn Reson Med 78:1579-1588, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
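
    A sketch of the dictionary-matching step common to MRF pipelines, including sliding-window variants: each measured fingerprint is matched to the dictionary entry with the highest normalized inner product, and that entry's precomputed T1/T2 values are assigned to the voxel. The dictionary here is random; in practice it is simulated from the sequence parameters.

      import numpy as np

      rng = np.random.default_rng(0)
      n_entries, n_frames = 5000, 300
      dictionary = rng.standard_normal((n_entries, n_frames))
      dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
      t1_t2 = rng.uniform(0.1, 3.0, size=(n_entries, 2))       # (T1, T2) per entry

      fingerprints = rng.standard_normal((1000, n_frames))      # one per voxel
      fingerprints /= np.linalg.norm(fingerprints, axis=1, keepdims=True)

      best = np.argmax(np.abs(fingerprints @ dictionary.T), axis=1)
      voxel_t1_t2 = t1_t2[best]                                 # matched parameter maps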

  2. Tensor Dictionary Learning for Positive Definite Matrices.

    PubMed

    Sivalingam, Ravishankar; Boley, Daniel; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2015-11-01

    Sparse models have proven to be extremely successful in image processing and computer vision. However, a majority of the effort has been focused on sparse representation of vectors and low-rank models for general matrices. The success of sparse modeling, along with popularity of region covariances, has inspired the development of sparse coding approaches for these positive definite descriptors. While in earlier work, the dictionary was formed from all, or a random subset of, the training signals, it is clearly advantageous to learn a concise dictionary from the entire training set. In this paper, we propose a novel approach for dictionary learning over positive definite matrices. The dictionary is learned by alternating minimization between sparse coding and dictionary update stages, and different atom update methods are described. A discriminative version of the dictionary learning approach is also proposed, which simultaneously learns dictionaries for different classes in classification or clustering. Experimental results demonstrate the advantage of learning dictionaries from data both from reconstruction and classification viewpoints. Finally, a software library is presented comprising C++ binaries for all the positive definite sparse coding and dictionary learning approaches presented here.

  3. The New Oxford Picture Dictionary, English/Navajo Edition.

    ERIC Educational Resources Information Center

    Parnwell, E. C.

    This picture dictionary illustrates over 2,400 words. The dictionary is organized thematically, beginning with topics most useful for the survival needs of students in an English speaking country. However, teachers may adapt the order to reflect the needs of their students. Verbs are included on separate pages, but within topic areas in which they…

  4. The Oxford Picture Dictionary. Beginning Workbook.

    ERIC Educational Resources Information Center

    Fuchs, Marjorie

    The beginning workbook of the Oxford Picture Dictionary is in full color and offers vocabulary reinforcement activities that correspond page for page with the dictionary. Clear and simple instructions with examples make it suitable for independent use in the classroom or at home. The workbook has up-to-date art and graphics, explaining over 3700…

  5. Dictionary Culture of University Students Learning English as a Foreign Language in Turkey

    ERIC Educational Resources Information Center

    Baskin, Sami; Mumcu, Muhsin

    2018-01-01

    Dictionaries, one of the oldest tools of language education, have continued to be a part of education although information technologies and the concept of education have changed over time. With the help of developments in technology, both types of dictionaries have increased in number and their usage areas have expanded. Therefore, it is possible to…

  6. Enhancing a Web Crawler with Arabic Search Capability

    DTIC Science & Technology

    2010-09-01

    Excerpts: Figure 2, monolingual 11-point precision results (from [14]); Figure 3, Lucene… libraries (a prefixes dictionary, a stems dictionary, and a suffixes dictionary). If all the word elements (prefix, stem, suffix) are found in their… The stemmer improved average precision by over 90% relative to raw retrieval. The authors concluded that stemming is very effective for Arabic IR. For monolingual…

  7. A Generative Theory of Relevance

    DTIC Science & Technology

    2004-09-01

    Excerpts: Section 5.3.1.4, Parameter estimation with a dictionary; Section 5.3.1.5, Document ranking… engine [3]. The stemmer combines morphological rules with a large dictionary of special cases and exceptions. After stemming, 418 stop-words from the… goes over all Arabic training strings. Bulgarian definitions are identical. 5.3.1.4 Parameter estimation with a dictionary: parallel and comparable…

  8. Classic Classroom Activities: The Oxford Picture Dictionary Program.

    ERIC Educational Resources Information Center

    Weiss, Renee; Adelson-Goldstein, Jayme; Shapiro, Norma

    This teacher resource book offers over 100 reproducible communicative practice activities and 768 picture cards based on the vocabulary of the Oxford Picture Dictionary. Teacher's notes and instructions, including adaptations for multilevel classes, are provided. The activities book has up-to-date art and graphics, explaining over 3700 words. The…

  9. The Contextual over the Referential in Military Translation

    ERIC Educational Resources Information Center

    Al-Ma'ani, Musallam

    2015-01-01

    Dictionaries of all types, monolingual or bilingual, specialized or general, form the basic tools for both undergraduate translation students (UTSs) and professional translators (PTs). However, it is generally accepted that the difference between UTSs and PTs is that UTSs normally over-rely on dictionaries, which produces unsatisfactory results.…

  10. Dictionary-driven protein annotation.

    PubMed

    Rigoutsos, Isidore; Huynh, Tien; Floratos, Aris; Parida, Laxmi; Platt, Daniel

    2002-09-01

    Computational methods seeking to automatically determine the properties (functional, structural, physicochemical, etc.) of a protein directly from the sequence have long been the focus of numerous research groups. With the advent of advanced sequencing methods and systems, the number of amino acid sequences that are being deposited in the public databases has been increasing steadily. This has in turn generated a renewed demand for automated approaches that can annotate individual sequences and complete genomes quickly, exhaustively and objectively. In this paper, we present one such approach that is centered around and exploits the Bio-Dictionary, a collection of amino acid patterns that completely covers the natural sequence space and can capture functional and structural signals that have been reused during evolution, within and across protein families. Our annotation approach also makes use of a weighted, position-specific scoring scheme that is unaffected by the over-representation of well-conserved proteins and protein fragments in the databases used. For a given query sequence, the method permits one to determine, in a single pass, the following: local and global similarities between the query and any protein already present in a public database; the likeness of the query to all available archaeal/bacterial/eukaryotic/viral sequences in the database as a function of amino acid position within the query; the character of secondary structure of the query as a function of amino acid position within the query; the cytoplasmic, transmembrane or extracellular behavior of the query; the nature and position of binding domains, active sites, post-translationally modified sites, signal peptides, etc. In terms of performance, the proposed method is exhaustive, objective and allows for the rapid annotation of individual sequences and full genomes. Annotation examples are presented and discussed in Results, including individual queries and complete genomes that were released publicly after we built the Bio-Dictionary that is used in our experiments. Finally, we have computed the annotations of more than 70 complete genomes and made them available on the World Wide Web at http://cbcsrv.watson.ibm.com/Annotations/.

  11. Bayesian nonparametric dictionary learning for compressed sensing MRI.

    PubMed

    Huang, Yue; Paisley, John; Lin, Qin; Ding, Xinghao; Fu, Xueyang; Zhang, Xiao-Ping

    2014-12-01

    We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and the patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MR images, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.

  12. Weighted Discriminative Dictionary Learning based on Low-rank Representation

    NASA Astrophysics Data System (ADS)

    Chang, Heyou; Zheng, Hao

    2017-01-01

    Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. The dictionary plays an important role in low-rank representation. With respect to a semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional low-rank representation based dictionary learning methods cannot effectively exploit the discriminative information between data and dictionary. To address this problem, this paper proposes weighted discriminative dictionary learning based on low-rank representation, in which a weighted representation regularization term is constructed. The regularization term associates the label information of both training samples and dictionary atoms, and encourages the generation of a discriminative representation with a class-wise block-diagonal structure, which can further improve classification performance when both training and testing images are corrupted with large noise. Experimental results demonstrate the advantages of the proposed method over state-of-the-art methods.

  13. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality, and it has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the nearest-neighbor selection problem. Building on sparse-representation-based super-resolution image reconstruction, a reconstruction algorithm based on multi-class dictionaries is analyzed. This method avoids the redundancy of training a single over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole reconstructed image. In addition, non-local self-similarity regularization is introduced to address the ill-posed nature of the problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.

  14. Fast Low-Rank Shared Dictionary Learning for Image Classification.

    PubMed

    Tiep Huu Vu; Monga, Vishal

    2017-11-01

    Despite the fact that different objects possess distinct class-specific features, they also usually share common patterns. This observation has been exploited partially in a recently proposed dictionary learning framework by separating the particularity and the commonality (COPAR). Inspired by this, we propose a novel method to explicitly and simultaneously learn a set of common patterns as well as class-specific features for classification with more intuitive constraints. Our dictionary learning framework is hence characterized by both a shared dictionary and particular (class-specific) dictionaries. For the shared dictionary, we enforce a low-rank constraint, i.e., claim that its spanning subspace should have low dimension and the coefficients corresponding to this dictionary should be similar. For the particular dictionaries, we impose on them the well-known constraints stated in the Fisher discrimination dictionary learning (FDDL). Furthermore, we develop new fast and accurate algorithms to solve the subproblems in the learning step, accelerating its convergence. The said algorithms could also be applied to FDDL and its extensions. The efficiencies of these algorithms are theoretically and experimentally verified by comparing their complexities and running time with those of other well-known dictionary learning methods. Experimental results on widely used image data sets establish the advantages of our method over the state-of-the-art dictionary learning methods.

  15. Fast Low-Rank Shared Dictionary Learning for Image Classification

    NASA Astrophysics Data System (ADS)

    Vu, Tiep Huu; Monga, Vishal

    2017-11-01

    Despite the fact that different objects possess distinct class-specific features, they also usually share common patterns. This observation has been exploited partially in a recently proposed dictionary learning framework by separating the particularity and the commonality (COPAR). Inspired by this, we propose a novel method to explicitly and simultaneously learn a set of common patterns as well as class-specific features for classification with more intuitive constraints. Our dictionary learning framework is hence characterized by both a shared dictionary and particular (class-specific) dictionaries. For the shared dictionary, we enforce a low-rank constraint, i.e. claim that its spanning subspace should have low dimension and the coefficients corresponding to this dictionary should be similar. For the particular dictionaries, we impose on them the well-known constraints stated in the Fisher discrimination dictionary learning (FDDL). Further, we develop new fast and accurate algorithms to solve the subproblems in the learning step, accelerating its convergence. The said algorithms could also be applied to FDDL and its extensions. The efficiencies of these algorithms are theoretically and experimentally verified by comparing their complexities and running time with those of other well-known dictionary learning methods. Experimental results on widely used image datasets establish the advantages of our method over state-of-the-art dictionary learning methods.

  16. A Remote Sensing Image Fusion Method based on adaptive dictionary learning

    NASA Astrophysics Data System (ADS)

    He, Tongdi; Che, Zongxi

    2018-01-01

    This paper discusses a remote sensing fusion method based on adaptive sparse representation (ASP) that provides improved spectral information, reduces data redundancy, and decreases system complexity. First, the training sample set is formed by taking random blocks from the images to be fused, the dictionary is constructed from the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterative processing at each step. Second, a self-adaptive weighted coefficient rule based on regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.

  17. Dictionary of Films. Translated, Edited, and Updated by Peter Morris.

    ERIC Educational Resources Information Center

    Sadoul, Georges

    In an attempt to give a panorama of world cinema since its origins, this dictionary contains entries for about 1200 films from all over the world. A brief description of the plot of the film, the personnel involved in the production, and often some short, critical comments are included for each film. This dictionary is a companion volume to a…

  18. A Dictionary Approach to Electron Backscatter Diffraction Indexing.

    PubMed

    Chen, Yu H; Park, Se Un; Wei, Dennis; Newstadt, Greg; Jackson, Michael A; Simmons, Jeff P; De Graef, Marc; Hero, Alfred O

    2015-06-01

    We propose a framework for indexing of grain and subgrain structures in electron backscatter diffraction patterns of polycrystalline materials. We discretize the domain of a dynamical forward model onto a dense grid of orientations, producing a dictionary of patterns. For each measured pattern, we identify the most similar patterns in the dictionary, and identify boundaries, detect anomalies, and index crystal orientations. The statistical distribution of these closest matches is used in an unsupervised binary decision tree (DT) classifier to identify grain boundaries and anomalous regions. The DT classifies a pattern as an anomaly if it has an abnormally low similarity to any pattern in the dictionary. It classifies a pixel as being near a grain boundary if the highly ranked patterns in the dictionary differ significantly over the pixel's neighborhood. Indexing is accomplished by computing the mean orientation of the closest matches to each pattern. The mean orientation is estimated using a maximum likelihood approach that models the orientation distribution as a mixture of Von Mises-Fisher distributions over the quaternionic three sphere. The proposed dictionary matching approach permits segmentation, anomaly detection, and indexing to be performed in a unified manner with the additional benefit of uncertainty quantification.
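
    A minimal sketch of the dictionary-matching idea described above: normalize patterns and dictionary entries, keep the k most similar dictionary patterns per measured pattern, and flag a pixel as anomalous when even its best match is weak. The array shapes, threshold, and function names are illustrative assumptions rather than the authors' pipeline.

    ```python
    import numpy as np

    def top_k_matches(patterns, dictionary, k=40):
        """Return indices and similarities of the k closest dictionary patterns.

        patterns   : (n_pixels, n_detector_px) flattened measured EBSD patterns
        dictionary : (n_dict, n_detector_px) simulated patterns on a dense
                     orientation grid (illustrative shapes only)
        """
        p = patterns / np.linalg.norm(patterns, axis=1, keepdims=True)
        d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
        sims = p @ d.T                               # normalized inner products
        idx = np.argsort(-sims, axis=1)[:, :k]       # k best entries per pattern
        return idx, np.take_along_axis(sims, idx, axis=1)

    def flag_anomalies(top_sims, threshold=0.7):
        """A pixel is anomalous if even its best match falls below the threshold."""
        return top_sims[:, 0] < threshold
    ```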

  19. Infrared moving small target detection based on saliency extraction and image sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaomin; Ren, Kan; Gao, Jin; Li, Chaowei; Gu, Guohua; Wan, Minjie

    2016-10-01

    Moving small target detection in infrared images is a crucial technique in infrared search and tracking systems. This paper presents a novel small target detection technique based on frequency-domain saliency extraction and image sparse representation. First, we exploit features of the Fourier spectrum image and the magnitude spectrum of the Fourier transform to make a rough extraction of saliency regions, and use threshold segmentation to separate the regions that look salient from the background, which yields a binary image. Second, a new patch-image model and an over-complete dictionary are introduced into the detection system, so that infrared small target detection is converted into a problem of solving and optimizing the reconstruction of patch-image information based on sparse representation. More specifically, the test image and the binary image can be decomposed into image patches following certain rules. We select the potential target area according to the binary patch-image, which contains the salient region information, and then exploit the over-complete infrared small target dictionary to reconstruct the test image blocks that may contain targets; the coefficients of a target image patch satisfy sparsity constraints. Finally, for image sequences, the Euclidean distance between frames is used to reduce the false alarm ratio and increase the detection accuracy of moving small targets in infrared images, exploiting the target position correlation between frames.

  20. Using dictionaries to study the mental lexicon.

    PubMed

    Anshen, F; Aronoff, M

    The notion of a mental lexicon has its historical roots in practical reference dictionaries. The distributional analysis of dictionaries provides one means of investigating the structure of the mental lexicon. We review our earlier work with dictionaries, based on a three-way horserace model of lexical access and production, and then present the most recent results of our ongoing analysis of the Oxford English Dictionary, Second Edition on CD-ROM, which traces changes in productivity over time of the English suffixes -ment and -ity, both of which originate in French borrowings. Our results lead us to question the validity of automatic analogy from a set of existing words as the driving force behind morphological productivity. Copyright 1999 Academic Press.

  1. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.

    PubMed

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-21

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.
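
    With an orthogonality constraint on the dictionary, the two alternating subproblems mentioned above admit simple closed forms: sparse coding reduces to keeping the largest transform coefficients, and the dictionary update is an orthogonal Procrustes problem solved by one SVD. The following is a minimal sketch of that alternation under those assumptions (square orthogonal dictionary, fixed sparsity level); it omits the k-space data-consistency update of the full reconstruction.

    ```python
    import numpy as np

    def sparse_code_orthogonal(X, D, s):
        """With an orthogonal dictionary, sparse coding is just thresholding:
        keep the s largest-magnitude transform coefficients per patch (column)."""
        A = D.T @ X                                    # exact transform coefficients
        thresh = -np.sort(-np.abs(A), axis=0)[s - 1]   # per-column s-th largest magnitude
        A[np.abs(A) < thresh] = 0.0
        return A

    def update_orthogonal_dictionary(X, A):
        """Closed-form orthogonal Procrustes update: argmin_D ||X - D A||_F
        subject to D^T D = I, given fixed codes A."""
        U, _, Vt = np.linalg.svd(X @ A.T)
        return U @ Vt

    # One alternating pass on an illustrative patch matrix X (columns are patches).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((64, 500))
    D = np.linalg.qr(rng.standard_normal((64, 64)))[0]   # random orthogonal init
    for _ in range(10):
        A = sparse_code_orthogonal(X, D, s=8)
        D = update_orthogonal_dictionary(X, A)
    ```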

  2. Gapped Spectral Dictionaries and Their Applications for Database Searches of Tandem Mass Spectra*

    PubMed Central

    Jeong, Kyowon; Kim, Sangtae; Bandeira, Nuno; Pevzner, Pavel A.

    2011-01-01

    Generating all plausible de novo interpretations of a peptide tandem mass (MS/MS) spectrum (Spectral Dictionary) and quickly matching them against the database represent a recently emerged alternative approach to peptide identification. However, the sizes of the Spectral Dictionaries quickly grow with the peptide length making their generation impractical for long peptides. We introduce Gapped Spectral Dictionaries (all plausible de novo interpretations with gaps) that can be easily generated for any peptide length thus addressing the limitation of the Spectral Dictionary approach. We show that Gapped Spectral Dictionaries are small thus opening a possibility of using them to speed-up MS/MS searches. Our MS-GappedDictionary algorithm (based on Gapped Spectral Dictionaries) enables proteogenomics applications (such as searches in the six-frame translation of the human genome) that are prohibitively time consuming with existing approaches. MS-GappedDictionary generates gapped peptides that occupy a niche between accurate but short peptide sequence tags and long but inaccurate full length peptide reconstructions. We show that, contrary to conventional wisdom, some high-quality spectra do not have good peptide sequence tags and introduce gapped tags that have advantages over the conventional peptide sequence tags in MS/MS database searches. PMID:21444829

  3. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce large performance gains while extracting only slightly less energy than the standard algorithm. When the utmost accuracy must be achieved, the modified algorithm extracts atoms more conservatively but still exhibits computational gains over classical MPD. The MPD++ algorithm was demonstrated using an over-complete dictionary on real-life data. Computational times were reduced by factors of 1.9 and 44 for the emphases on accuracy and performance, respectively. The modified algorithm extracted similar amounts of energy compared to classical MPD. The degree of the improvement in computational time depends on the complexity of the data, the initialization parameters, and the breadth of the dictionary. The results of the research confirm that the three modifications successfully improved the scalability and computational efficiency of the MPD algorithm. Correlation Thresholding decreased the time complexity by reducing the dictionary size. Multiple Atom Extraction also reduced the time complexity by decreasing the number of iterations required for a stopping criterion to be reached. The Coarse-Fine Grids technique enabled complicated atoms with numerous variable parameters to be effectively represented in the dictionary. Due to the nature of the three proposed modifications, they are capable of being stacked and have cumulative effects on the reduction of the time complexity.
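
    For reference, the classical matching-pursuit loop that MPD++ builds on can be sketched in a few lines: correlate the residual with every atom, pick the best-fit atom, subtract its contribution, and repeat until a stopping criterion is met. The pruning, coarse-fine grid, and multiple-atom-extraction refinements are not reproduced here.

    ```python
    import numpy as np

    def matching_pursuit(signal, dictionary, max_iter=50, tol=1e-3):
        """Classical matching pursuit.

        signal     : (n,) input vector
        dictionary : (n, n_atoms) columns are unit-norm atoms
        Returns the coefficient vector and the final residual.
        """
        residual = signal.astype(float).copy()
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(max_iter):
            corr = dictionary.T @ residual          # correlation with every atom
            k = np.argmax(np.abs(corr))             # best-fit atom
            coeffs[k] += corr[k]
            residual -= corr[k] * dictionary[:, k]  # remove its contribution
            if np.linalg.norm(residual) < tol * np.linalg.norm(signal):
                break
        return coeffs, residual
    ```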

  4. High-recall protein entity recognition using a dictionary

    PubMed Central

    Kou, Zhenzhen; Cohen, William W.; Murphy, Robert F.

    2010-01-01

    Protein name extraction is an important step in mining biological literature. We describe two new methods for this task: semiCRFs and dictionary HMMs. SemiCRFs are a recently-proposed extension to conditional random fields that enables more effective use of dictionary information as features. Dictionary HMMs are a technique in which a dictionary is converted to a large HMM that recognizes phrases from the dictionary, as well as variations of these phrases. Standard training methods for HMMs can be used to learn which variants should be recognized. We compared the performance of our new approaches to that of Maximum Entropy (Max-Ent) and normal CRFs on three datasets, and improvement was obtained for all four methods over the best published results for two of the datasets. CRFs and semiCRFs achieved the highest overall performance according to the widely-used F-measure, while the dictionary HMMs performed the best at finding entities that actually appear in the dictionary—the measure of most interest in our intended application. PMID:15961466

  5. LeadMine: a grammar and dictionary driven approach to entity recognition.

    PubMed

    Lowe, Daniel M; Sayle, Roger A

    2015-01-01

    Chemical entity recognition has traditionally been performed by machine learning approaches. Here we describe an approach using grammars and dictionaries. This approach has the advantage that the entities found can be directly related to a given grammar or dictionary, which allows the type of an entity to be known and, if an entity is misannotated, indicates which resource should be corrected. As recognition is driven by what is expected, if spelling errors occur, they can be corrected. Correcting such errors is highly useful when attempting to lookup an entity in a database or, in the case of chemical names, converting them to structures. Our system uses a mixture of expertly curated grammars and dictionaries, as well as dictionaries automatically derived from public resources. We show that the heuristics developed to filter our dictionary of trivial chemical names (from PubChem) yields a better performing dictionary than the previously published Jochem dictionary. Our final system performs post-processing steps to modify the boundaries of entities and to detect abbreviations. These steps are shown to significantly improve performance (2.6% and 4.0% F1-score respectively). Our complete system, with incremental post-BioCreative workshop improvements, achieves 89.9% precision and 85.4% recall (87.6% F1-score) on the CHEMDNER test set. Grammar and dictionary approaches can produce results at least as good as the current state of the art in machine learning approaches. While machine learning approaches are commonly thought of as "black box" systems, our approach directly links the output entities to the input dictionaries and grammars. Our approach also allows correction of errors in detected entities, which can assist with entity resolution.

  6. LeadMine: a grammar and dictionary driven approach to entity recognition

    PubMed Central

    2015-01-01

    Background Chemical entity recognition has traditionally been performed by machine learning approaches. Here we describe an approach using grammars and dictionaries. This approach has the advantage that the entities found can be directly related to a given grammar or dictionary, which allows the type of an entity to be known and, if an entity is misannotated, indicates which resource should be corrected. As recognition is driven by what is expected, if spelling errors occur, they can be corrected. Correcting such errors is highly useful when attempting to lookup an entity in a database or, in the case of chemical names, converting them to structures. Results Our system uses a mixture of expertly curated grammars and dictionaries, as well as dictionaries automatically derived from public resources. We show that the heuristics developed to filter our dictionary of trivial chemical names (from PubChem) yields a better performing dictionary than the previously published Jochem dictionary. Our final system performs post-processing steps to modify the boundaries of entities and to detect abbreviations. These steps are shown to significantly improve performance (2.6% and 4.0% F1-score respectively). Our complete system, with incremental post-BioCreative workshop improvements, achieves 89.9% precision and 85.4% recall (87.6% F1-score) on the CHEMDNER test set. Conclusions Grammar and dictionary approaches can produce results at least as good as the current state of the art in machine learning approaches. While machine learning approaches are commonly thought of as "black box" systems, our approach directly links the output entities to the input dictionaries and grammars. Our approach also allows correction of errors in detected entities, which can assist with entity resolution. PMID:25810776

  7. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems.

    PubMed

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-12-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
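
    The sum-of-outer-products idea can be sketched as follows: the data matrix is approximated by a sum of rank-one terms d_j c_j^T, and block coordinate descent cycles through the terms with closed-form updates (hard thresholding for the sparse coefficients, a normalized matched filter for the atom). Variable names, the thresholding rule, and the initialization are illustrative assumptions, not the authors' exact algorithm.

    ```python
    import numpy as np

    def soup_dil(Y, n_atoms=32, lam=0.1, n_iter=10, seed=0):
        """Sketch of sum-of-outer-products dictionary learning: Y is approximated
        by sum_j d_j c_j^T, and each (d_j, c_j) pair is updated in turn by block
        coordinate descent with closed-form steps."""
        rng = np.random.default_rng(seed)
        n, m = Y.shape
        D = rng.standard_normal((n, n_atoms))
        D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
        C = np.zeros((m, n_atoms))
        for _ in range(n_iter):
            for j in range(n_atoms):
                # Residual with the j-th outer product removed.
                E = Y - D @ C.T + np.outer(D[:, j], C[:, j])
                # Sparse coefficient update: hard-threshold the correlations.
                c = E.T @ D[:, j]
                c[np.abs(c) < lam] = 0.0
                C[:, j] = c
                # Dictionary atom update: normalized matched filter on the residual.
                d = E @ c
                nrm = np.linalg.norm(d)
                D[:, j] = d / nrm if nrm > 0 else D[:, j]
        return D, C
    ```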

  8. Online Multi-Modal Robust Non-Negative Dictionary Learning for Visual Tracking

    PubMed Central

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality. PMID:25961715
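
    The non-negative dictionary and coefficient updates mentioned above are in the spirit of multiplicative update rules, which preserve non-negativity because every factor stays non-negative. The sketch below shows plain Lee-Seung-style multiplicative updates for a Frobenius-norm fit; the robust M-estimation weighting and the online, per-frame learning of the tracker are deliberately omitted, and all names are illustrative.

    ```python
    import numpy as np

    def nn_dictionary_update(V, n_atoms=20, n_iter=200, eps=1e-9, seed=0):
        """Multiplicative updates for V ~ W H with W, H >= 0.

        V : (n_features, n_samples) non-negative data (e.g., stacked particle
            intensity or texture descriptors).  W plays the role of the
            non-negative dictionary (templates), H the coefficients."""
        rng = np.random.default_rng(seed)
        n, m = V.shape
        W = rng.random((n, n_atoms)) + eps     # dictionary (templates)
        H = rng.random((n_atoms, m)) + eps     # coefficients
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # coefficient update
            W *= (V @ H.T) / (W @ H @ H.T + eps)   # dictionary update
        return W, H
    ```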

  9. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems

    PubMed Central

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction. PMID:29376111

  10. Online multi-modal robust non-negative dictionary learning for visual tracking.

    PubMed

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality.

  11. Performance analysis of model based iterative reconstruction with dictionary learning in transportation security CT

    NASA Astrophysics Data System (ADS)

    Haneda, Eri; Luo, Jiajia; Can, Ali; Ramani, Sathish; Fu, Lin; De Man, Bruno

    2016-05-01

    In this study, we implement and compare model based iterative reconstruction (MBIR) with dictionary learning (DL) against MBIR with pairwise pixel-difference regularization, in the context of transportation security. DL is a technique of sparse signal representation using an over-complete dictionary, which has provided promising results in image processing applications including denoising [1], as well as medical CT reconstruction [2]. It has been previously reported that DL produces promising results in terms of noise reduction and preservation of structural details, especially for low-dose and few-view CT acquisitions [2]. A distinguishing feature of transportation security CT is that scanned baggage may contain items with a wide range of material densities. While medical CT typically scans soft tissues, blood with and without contrast agents, and bones, luggage typically contains more high-density materials (i.e., metals and glass), which can produce severe distortions such as metal streaking artifacts. Important factors in security CT are the emphasis on image quality, such as resolution, contrast, noise level, and CT number accuracy, for target detection. While MBIR has shown exemplary performance in the trade-off between noise reduction and resolution preservation, we demonstrate that DL may further improve this trade-off. In this study, we used KSVD-based DL [3] combined with the MBIR cost-minimization framework and compared results to Filtered Back Projection (FBP) and MBIR with pairwise pixel-difference regularization. We performed a parameter analysis to show the image quality impact of each parameter. We also investigated few-view CT acquisitions, where DL can show an additional advantage relative to pairwise pixel-difference regularization.

  12. Dictionary-driven protein annotation

    PubMed Central

    Rigoutsos, Isidore; Huynh, Tien; Floratos, Aris; Parida, Laxmi; Platt, Daniel

    2002-01-01

    Computational methods seeking to automatically determine the properties (functional, structural, physicochemical, etc.) of a protein directly from the sequence have long been the focus of numerous research groups. With the advent of advanced sequencing methods and systems, the number of amino acid sequences that are being deposited in the public databases has been increasing steadily. This has in turn generated a renewed demand for automated approaches that can annotate individual sequences and complete genomes quickly, exhaustively and objectively. In this paper, we present one such approach that is centered around and exploits the Bio-Dictionary, a collection of amino acid patterns that completely covers the natural sequence space and can capture functional and structural signals that have been reused during evolution, within and across protein families. Our annotation approach also makes use of a weighted, position-specific scoring scheme that is unaffected by the over-representation of well-conserved proteins and protein fragments in the databases used. For a given query sequence, the method permits one to determine, in a single pass, the following: local and global similarities between the query and any protein already present in a public database; the likeness of the query to all available archaeal/bacterial/eukaryotic/viral sequences in the database as a function of amino acid position within the query; the character of secondary structure of the query as a function of amino acid position within the query; the cytoplasmic, transmembrane or extracellular behavior of the query; the nature and position of binding domains, active sites, post-translationally modified sites, signal peptides, etc. In terms of performance, the proposed method is exhaustive, objective and allows for the rapid annotation of individual sequences and full genomes. Annotation examples are presented and discussed in Results, including individual queries and complete genomes that were released publicly after we built the Bio-Dictionary that is used in our experiments. Finally, we have computed the annotations of more than 70 complete genomes and made them available on the World Wide Web at http://cbcsrv.watson.ibm.com/Annotations/. PMID:12202776

  13. Dictionary of Cotton

    USDA-ARS?s Scientific Manuscript database

    The Dictionary of Cotton has over 2,000 terms and definitions that were compiled by 33 researchers. It reflects the ongoing commitment of the International Cotton Advisory Committee, through its Technical Information Section, to the spread of knowledge about cotton to all those who have an interest ...

  14. Personalized Age Progression with Bi-Level Aging Dictionary Learning.

    PubMed

    Shu, Xiangbo; Tang, Jinhui; Li, Zechao; Lai, Hanjiang; Zhang, Liyan; Yan, Shuicheng

    2018-04-01

    Age progression is defined as aesthetically re-rendering the aging face at any future age for an individual face. In this work, we aim to automatically render aging faces in a personalized way. Basically, for each age group, we learn an aging dictionary to reveal its aging characteristics (e.g., wrinkles), where the dictionary bases corresponding to the same index yet from two neighboring aging dictionaries form a particular aging pattern across these two age groups, and a linear combination of all these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each person may have extra personalized facial characteristics, e.g., a mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces of all age groups for a particular person, yet much easier and more practical to get face pairs from neighboring age groups. To this end, we propose a novel Bi-level Dictionary Learning based Personalized Age Progression (BDL-PAP) method. Here, bi-level dictionary learning is formulated to learn the aging dictionaries based on face pairs from neighboring age groups. Extensive experiments demonstrate the advantages of the proposed BDL-PAP over other state-of-the-art methods in terms of personalized age progression, as well as the performance gain for cross-age face verification by synthesizing aging faces.

  15. Segmentation of Thalamus from MR images via Task-Driven Dictionary Learning.

    PubMed

    Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D; Prince, Jerry L

    2016-02-27

    Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.

  16. CUHK Papers in Linguistics, Number 4.

    ERIC Educational Resources Information Center

    Tang, Gladys, Ed.

    1993-01-01

    Papers in this issue include the following: "Code-Mixing in Hongkong Cantonese-English Bilinguals: Constraints and Processes" (Brian Chan Hok-shing); "Information on Quantifiers and Argument Structure in English Learner's Dictionaries" (Thomas Hun-tak Lee); "Systematic Variability: In Search of a Linguistic…

  17. Texture Mixing via Universal Simulation

    DTIC Science & Technology

    2005-08-01

    Excerpts: …classes and universal simulation. Based on the well-known Lempel-Ziv (LZ) universal compression scheme, the universal type class of a one… length that produce the same tree (dictionary) under the Lempel-Ziv (LZ) incremental parsing defined in the well-known LZ78 universal compression… the well-known Lempel-Ziv parsing algorithm. The goal is not just to synthesize mixed textures, but to understand what texture is. We are currently…

  18. Analog system for computing sparse codes

    DOEpatents

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition, and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an over-complete dictionary.
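
    A discrete-time software analogue of the thresholding-and-local-competition dynamics can be written compactly: each node integrates a feedforward drive, is soft-thresholded, and inhibits its neighbors in proportion to the overlap of their dictionary atoms. This is a sketch of the standard LCA recipe, not the patented analog circuit; the dictionary, threshold, and step size are assumed inputs.

    ```python
    import numpy as np

    def lca_sparse_code(x, D, lam=0.1, step=0.01, n_steps=500):
        """Discrete-time locally competitive algorithm.

        x : (n,) input signal
        D : (n, n_atoms) over-complete dictionary with unit-norm columns
        Returns a sparse coefficient vector a."""
        n_atoms = D.shape[1]
        drive = D.T @ x                       # feedforward drive b = D^T x
        G = D.T @ D - np.eye(n_atoms)         # lateral inhibition weights
        u = np.zeros(n_atoms)                 # internal (membrane) states
        for _ in range(n_steps):
            a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold
            u += step * (drive - u - G @ a)                     # competition dynamics
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
    ```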

  19. Understanding data requirements of retrospective studies.

    PubMed

    Shenvi, Edna C; Meeker, Daniella; Boxwala, Aziz A

    2015-01-01

    Usage of data from electronic health records (EHRs) in clinical research is increasing, but there is little empirical knowledge of the data needed to support the multiple types of research these sources enable. This study seeks to characterize the types and patterns of data usage from EHRs for clinical research. We analyzed the data requirements of over 100 retrospective studies by mapping the selection criteria and study variables to data elements of two standard data dictionaries, one from the healthcare domain and the other from the clinical research domain. We also contacted study authors to validate our results. The majority of variables mapped to one or both of the two dictionaries. Studies used an average of 4.46 (range 1-12) data element types in the selection criteria and 6.44 (range 1-15) in the study variables. The most frequently used items (e.g., procedure, condition, medication) are often available in coded form in EHRs. Study criteria were frequently complex, with 49 of 104 studies involving relationships between data elements and 22 of the studies using aggregate operations for data variables. Author responses supported these findings. The high proportion of mapped data elements demonstrates the significant potential for clinical data warehousing to facilitate clinical research. Unmapped data elements illustrate the difficulty in developing a complete data dictionary. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Unsupervised method for automatic construction of a disease dictionary from a large free text collection.

    PubMed

    Xu, Rong; Supekar, Kaustubh; Morgan, Alex; Das, Amar; Garber, Alan

    2008-11-06

    Concept specific lexicons (e.g. diseases, drugs, anatomy) are a critical source of background knowledge for many medical language-processing systems. However, the rapid pace of biomedical research and the lack of constraints on usage ensure that such dictionaries are incomplete. Focusing on disease terminology, we have developed an automated, unsupervised, iterative pattern learning approach for constructing a comprehensive medical dictionary of disease terms from randomized clinical trial (RCT) abstracts, and we compared different ranking methods for automatically extracting contextual patterns and concept terms. When used to identify disease concepts from 100 randomly chosen, manually annotated clinical abstracts, our disease dictionary shows significant performance improvement (F1 increased by 35-88%) over available, manually created disease terminologies.

  1. Unsupervised Method for Automatic Construction of a Disease Dictionary from a Large Free Text Collection

    PubMed Central

    Xu, Rong; Supekar, Kaustubh; Morgan, Alex; Das, Amar; Garber, Alan

    2008-01-01

    Concept specific lexicons (e.g. diseases, drugs, anatomy) are a critical source of background knowledge for many medical language-processing systems. However, the rapid pace of biomedical research and the lack of constraints on usage ensure that such dictionaries are incomplete. Focusing on disease terminology, we have developed an automated, unsupervised, iterative pattern learning approach for constructing a comprehensive medical dictionary of disease terms from randomized clinical trial (RCT) abstracts, and we compared different ranking methods for automatically extracting contextual patterns and concept terms. When used to identify disease concepts from 100 randomly chosen, manually annotated clinical abstracts, our disease dictionary shows significant performance improvement (F1 increased by 35–88%) over available, manually created disease terminologies. PMID:18999169

  2. Change detection and change monitoring of natural and man-made features in multispectral and hyperspectral satellite imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, Daniela Irina

    An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. A Hebbian learning rule may be used to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of pixel patches over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
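
    The clustering-of-sparse-approximations pipeline described above can be sketched as two stages: sparse-code each pixel patch over an already-learned dictionary, then run unsupervised k-means on the codes to obtain land-cover categories. The sketch below uses scikit-learn's SparseCoder and KMeans as stand-ins; the Hebbian dictionary learning step and all parameter choices are assumptions, not the reported method.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import SparseCoder

    def cosa_cluster(patches, dictionary, n_classes=8, alpha=1.0):
        """Cluster sparse codes of image patches into land-cover categories.

        patches    : (n_patches, patch_dim) flattened multispectral patches
        dictionary : (n_atoms, patch_dim) learned atoms (rows), assumed given
        """
        coder = SparseCoder(dictionary=dictionary,
                            transform_algorithm='lasso_lars',
                            transform_alpha=alpha)
        codes = coder.transform(patches)                       # sparse coefficients
        labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(codes)
        return labels
    ```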

  3. Histopathological Image Classification using Discriminative Feature-oriented Dictionary Learning

    PubMed Central

    Vu, Tiep Huu; Mousavi, Hojjat Seyed; Monga, Vishal; Rao, Ganesh; Rao, UK Arvind

    2016-01-01

    In histopathological image analysis, feature extraction for classification is a challenging task due to the diversity of histology features suitable for each problem as well as presence of rich geometrical structures. In this paper, we propose an automatic feature discovery framework via learning class-specific dictionaries and present a low-complexity method for classification and disease grading in histopathology. Essentially, our Discriminative Feature-oriented Dictionary Learning (DFDL) method learns class-specific dictionaries such that under a sparsity constraint, the learned dictionaries allow representing a new image sample parsimoniously via the dictionary corresponding to the class identity of the sample. At the same time, the dictionary is designed to be poorly capable of representing samples from other classes. Experiments on three challenging real-world image databases: 1) histopathological images of intraductal breast lesions, 2) mammalian kidney, lung and spleen images provided by the Animal Diagnostics Lab (ADL) at Pennsylvania State University, and 3) brain tumor images from The Cancer Genome Atlas (TCGA) database, reveal the merits of our proposal over state-of-the-art alternatives. Moreover, we demonstrate that DFDL exhibits a more graceful decay in classification accuracy against the number of training images which is highly desirable in practice where generous training is often not available. PMID:26513781

  4. SVD compression for magnetic resonance fingerprinting in the time domain.

    PubMed

    McGivney, Debra F; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A

    2014-12-01

    Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, one desires a more efficient method to obtain the quantitative images. We propose to compress the dictionary using the singular value decomposition, which will provide a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm, by a factor of between 3.4-4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
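
    The compression step can be sketched directly: take the SVD of the dictionary along the time dimension, keep the leading right singular vectors, and project both dictionary and measured signals into that low-rank subspace before the inner-product matching. Real-valued arrays and the rank choice are simplifying assumptions for illustration.

    ```python
    import numpy as np

    def compress_dictionary(D, rank):
        """Time-domain SVD compression of an MRF dictionary.

        D : (n_entries, n_timepoints) simulated signal evolutions
        Returns the projected dictionary (n_entries, rank) and the projection
        basis Vr (n_timepoints, rank) used to compress measured signals."""
        U, S, Vt = np.linalg.svd(D, full_matrices=False)
        Vr = Vt[:rank].T                 # leading right singular vectors
        return D @ Vr, Vr

    def match_compressed(signals, D_small, Vr):
        """Match measured fingerprints in the compressed rank-r space; inner
        products are preserved up to the truncation error."""
        s_small = signals @ Vr
        s_small /= np.linalg.norm(s_small, axis=1, keepdims=True)
        d_norm = D_small / np.linalg.norm(D_small, axis=1, keepdims=True)
        return np.argmax(np.abs(s_small @ d_norm.T), axis=1)
    ```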

  5. SVD Compression for Magnetic Resonance Fingerprinting in the Time Domain

    PubMed Central

    McGivney, Debra F.; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A.

    2016-01-01

    Magnetic resonance fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, one desires a more efficient method to obtain the quantitative images. We propose to compress the dictionary using the singular value decomposition (SVD), which will provide a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm, by a factor of between 3.4-4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously. PMID:25029380

  6. A Concise Dictionary of Minnesota Ojibwe.

    ERIC Educational Resources Information Center

    Nichols, John D.; Nyholm, Earl

    The dictionary of the Ojibwa or Chippewa language represents the speech of the Mille Lacs Band of Minnesota and contains over 7,000 Ojibwa terms. Each entry gives information on the word stem, grammatical classification, English gloss, form variations, and references to alternate forms. An introductory section describes the entry format and use,…

  7. Facilitating Learner Autonomy: Reading and Effective Dictionary Use for Lexical Development

    ERIC Educational Resources Information Center

    Lin, Debbita Tan Ai; Pandian, Ambigapathy; Jaganathan, Paramaswari

    2017-01-01

    Effective dictionary use facilitates reading and subsequently, vocabulary knowledge development. Reading, especially extensive reading, has time and again been proven to be highly effective for both receptive and productive lexical development. Possessing control over a large vocabulary is essential for language competence--be it L1, L2, or L3.…

  8. Statutory Interpretation: General Principles and Recent Trends

    DTIC Science & Technology

    2006-03-30

    Excerpts: …although the Court's pathway through the mix is often not clearly foreseeable, an understanding of interpretational possibilities may nonetheless lessen… dictionary definitions to interpret the word "marketing" as used in the Plant Variety Protection Act [24], and the word "principal" as used to modify a… exclusive" conditions that can rule out mixing and matching. United States v. Williams, 326 F.3d 535, 541 (4th Cir. 2003) ("a crime may qualify as a…

  9. Natural Language Processing Systems Evaluation Workshop Held in Berkely, California on 18 June 1991

    DTIC Science & Technology

    1991-12-01

    regarded as a fairly complete dictionary contains about 18,000 items at present, and will be... dictionary access and so on, with an article. Unfortunately, the Weidner system did not know that... as time goes on, one might imagine functionality... ought to be possible in the monolingual case... built with taste by people who understand languages and

  10. Historical Astrolexicography and Old Publications

    NASA Astrophysics Data System (ADS)

    Mahoney, Terry J.

    I describe how the principles of lexicography have been applied in limited ways in astronomy and look at the revision work under way for the third edition of the Oxford English Dictionary, which, when completed, will contain the widest and most detailed coverage of the astronomical lexicon in the English language. Finally, I argue the need for a dedicated historical dictionary of astronomy based rigorously on a corpus of quotations from sources published in English from the beginnings of written English to the present day.

  11. MR PROSTATE SEGMENTATION VIA DISTRIBUTED DISCRIMINATIVE DICTIONARY (DDD) LEARNING.

    PubMed

    Guo, Yanrong; Zhan, Yiqiang; Gao, Yaozong; Jiang, Jianguo; Shen, Dinggang

    2013-01-01

    Segmenting the prostate from MR images is important yet challenging. Due to the non-Gaussian distribution of prostate appearances in MR images, the popular active appearance model (AAM) has limited performance. Although the newly developed sparse dictionary learning method [1, 2] can model the image appearance in a non-parametric fashion, the learned dictionaries still lack discriminative power between prostate and non-prostate tissues, which is critical for accurate prostate segmentation. In this paper, we propose to integrate a deformable model with a novel learning scheme, namely Distributed Discriminative Dictionary (DDD) learning, which can capture image appearance in a non-parametric and discriminative fashion. In particular, three strategies are designed to boost the tissue discriminative power of DDD. First, minimum redundancy maximum relevance (mRMR) feature selection is performed to constrain the dictionary learning to a discriminative feature space. Second, linear discriminant analysis (LDA) is employed to assemble residuals from different dictionaries for optimal separation between prostate and non-prostate tissues. Third, instead of learning global dictionaries, we learn a set of local dictionaries for the local regions (each with small appearance variations) along the prostate boundary, thus achieving better tissue differentiation locally. In the application stage, DDDs provide the appearance cues to robustly drive the deformable model onto the prostate boundary. Experiments on 50 MR prostate images show that our method yields a Dice ratio of 88% compared to the manual segmentations, a 7% improvement over the conventional AAM.
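
    As a rough illustration of the residual-based discrimination that DDD builds on, the sketch below learns one dictionary per tissue class and labels a new feature vector by whichever dictionary reconstructs it with smaller error. It is a simplification on assumed synthetic features: the paper additionally uses mRMR feature selection, LDA over the assembled residuals, and region-local dictionaries, and sklearn's learner merely stands in for the actual dictionary training.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        rng = np.random.default_rng(1)

        # Hypothetical feature vectors (rows) for the two tissue classes; in the paper
        # these would be mRMR-selected features from local regions along the boundary.
        X_prostate = rng.standard_normal((300, 64)) + 1.0
        X_background = rng.standard_normal((300, 64)) - 1.0

        def learn_dict(X, n_atoms=32):
            """Learn a per-class dictionary (sklearn stands in for the paper's training)."""
            dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                             transform_algorithm='omp',
                                             transform_n_nonzero_coefs=5,
                                             random_state=0)
            return dl.fit(X)

        def residual(dl, X):
            """Reconstruction error of X sparse-coded over a class dictionary."""
            codes = dl.transform(X)
            return np.linalg.norm(X - codes @ dl.components_, axis=1)

        dl_p, dl_b = learn_dict(X_prostate), learn_dict(X_background)

        # Label a new feature vector by the dictionary that reconstructs it better
        # (the paper instead feeds both residuals to an LDA classifier).
        x_new = rng.standard_normal((1, 64)) + 1.0
        label = 'prostate' if residual(dl_p, x_new)[0] < residual(dl_b, x_new)[0] else 'background'
        print(label)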

  12. Image fusion based on Bandelet and sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi

    2018-04-01

    The Bandelet transform can capture geometrically regular directions and geometric flow, and sparse representation can represent signals with as few atoms as possible over an over-complete dictionary; both can be used for image fusion. Therefore, a new fusion method based on the Bandelet transform and sparse representation is proposed to fuse the Bandelet coefficients of multi-source images and obtain high-quality fusion results. Tests are performed on remote sensing images and simulated multi-focus images; the experimental results show that the new method outperforms the tested methods in terms of objective evaluation indexes and subjective visual effects.

  13. Reducible dictionaries for single image super-resolution based on patch matching and mean shifting

    NASA Astrophysics Data System (ADS)

    Rasti, Pejman; Nasrollahi, Kamal; Orlova, Olga; Tamberg, Gert; Moeslund, Thomas B.; Anbarjafari, Gholamreza

    2017-03-01

    A single-image super-resolution (SR) method is proposed. The proposed method uses a dictionary generated from pairs of high-resolution (HR) images and their corresponding low-resolution (LR) representations. First, the HR images and the corresponding LR ones are divided into HR and LR patches, respectively, which are collected into separate dictionaries. Afterward, when performing SR, the distance between every patch of the input LR image and each of the available LR patches in the LR dictionary is calculated. The LR dictionary patch with the minimum distance to the input patch is identified, and its counterpart from the HR dictionary is passed through an illumination enhancement process. By this technique, the noticeable change of illumination between neighboring patches in the super-resolved image is significantly reduced. The enhanced HR patch represents the corresponding HR patch of the super-resolved image. Finally, to remove the blocking effect caused by merging the patches, an average of the obtained HR image and the image obtained using bicubic interpolation is calculated. The quantitative and qualitative analyses show the superiority of the proposed technique over conventional and state-of-the-art methods.
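
    A minimal sketch of the patch-matching step described above, assuming hypothetical dictionaries of flattened LR/HR patch pairs built offline; the mean-matching step stands in for the paper's illumination enhancement and is an assumption, not the authors' exact procedure.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical dictionaries of flattened LR patches and their HR counterparts,
        # assumed to have been collected offline from HR/LR training image pairs.
        n_pairs = 2000
        lr_dict = rng.standard_normal((n_pairs, 5 * 5))        # 5x5 LR patches
        hr_dict = rng.standard_normal((n_pairs, 10 * 10))      # paired 10x10 HR patches

        def super_resolve_patch(lr_patch):
            """Return the HR patch paired with the nearest LR dictionary patch."""
            d = np.linalg.norm(lr_dict - lr_patch.ravel(), axis=1)
            hr = hr_dict[np.argmin(d)].reshape(10, 10)         # minimum-distance match
            # Assumed illumination adjustment: shift the HR patch so its mean matches
            # the input patch's mean, reducing brightness jumps between neighbours.
            return hr - hr.mean() + lr_patch.mean()

        patch = rng.standard_normal((5, 5))
        print(super_resolve_patch(patch).shape)                # (10, 10)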

  14. Progressive Dictionary Learning with Hierarchical Predictive Structure for Scalable Video Coding.

    PubMed

    Dai, Wenrui; Shen, Yangmei; Xiong, Hongkai; Jiang, Xiaoqian; Zou, Junni; Taubman, David

    2017-04-12

    Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with a hierarchical predictive structure for scalable video coding, especially in the low-bitrate region. For pyramidal layers, sparse representation based on a spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers (ELs) with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as exploit the correlations between neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable scalability in the temporal domain and restrict error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To accommodate the state-of-the-art scalable extension of H.264/AVC and the latest HEVC, standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest SHVC and HEVC simulcast over extensive test sequences with various resolutions.

  15. Russian-English Dictionary of Cybernetics and Computer Technology.

    ERIC Educational Resources Information Center

    Holland, Wade B.

    This work contains over 5,350 terms which have special or unique definition when applied in a cybernetic context. Corrections and improvements to the first edition of the dictionary have been made in this second edition. Entries are made for terms encountered in the Soviet cybernetic literature, without any attempt to define the field or to…

  16. Semi-Supervised Tripled Dictionary Learning for Standard-dose PET Image Prediction using Low-dose PET and Multimodal MRI

    PubMed Central

    Wang, Yan; Ma, Guangkai; An, Le; Shi, Feng; Zhang, Pei; Lalush, David S.; Wu, Xi; Pu, Yifei; Zhou, Jiliu; Shen, Dinggang

    2017-01-01

    Objective To obtain high-quality positron emission tomography (PET) image with low-dose tracer injection, this study attempts to predict the standard-dose PET (S-PET) image from both its low-dose PET (L-PET) counterpart and corresponding magnetic resonance imaging (MRI). Methods It was achieved by patch-based sparse representation (SR), using the training samples with a complete set of MRI, L-PET and S-PET modalities for dictionary construction. However, the number of training samples with complete modalities is often limited. In practice, many samples generally have incomplete modalities (i.e., with one or two missing modalities) that thus cannot be used in the prediction process. In light of this, we develop a semi-supervised tripled dictionary learning (SSTDL) method for S-PET image prediction, which can utilize not only the samples with complete modalities (called complete samples) but also the samples with incomplete modalities (called incomplete samples), to take advantage of the large number of available training samples and thus further improve the prediction performance. Results Validation was done on a real human brain dataset consisting of 18 subjects, and the results show that our method is superior to the SR and other baseline methods. Conclusion This work proposed a new S-PET prediction method, which can significantly improve the PET image quality with low-dose injection. Significance The proposed method is favorable in clinical application since it can decrease the potential radiation risk for patients. PMID:27187939

  17. Exploiting Attribute Correlations: A Novel Trace Lasso-Based Weakly Supervised Dictionary Learning Method.

    PubMed

    Wu, Lin; Wang, Yang; Pan, Shirui

    2017-12-01

    It is now well established that sparse representation models work effectively for many visual recognition tasks and have pushed forward the success of dictionary learning therein. Recent studies on dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (i.e., data objects from distinct classes that share many visual similarities) makes it challenging to learn effective recognition models. To this end, a large number of labeled data objects are required to learn models that can effectively characterize these subtle differences. However, labeled data objects are often difficult to obtain, making it hard to learn a monolithic dictionary that is discriminative enough. To address the above limitations, in this paper we propose a weakly supervised dictionary learning method to automatically learn a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and a set of subdictionaries are then jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition are conducted to show the effectiveness of our approach.

  18. CTEPP STANDARD OPERATING PROCEDURE FOR PROCESSING COMPLETED DATA FORMS (SOP-4.10)

    EPA Science Inventory

    This SOP describes the methods for processing completed data forms. Key components of the SOP include (1) field editing, (2) data form Chain-of-Custody, (3) data processing verification, (4) coding, (5) data entry, (6) programming checks, (7) preparation of data dictionaries, cod...

  19. Concepts of Mathematics for Students of Physics and Engineering: A Dictionary

    NASA Technical Reports Server (NTRS)

    Kolecki, Joseph C.

    2003-01-01

    A physicist with an engineering background, the author presents a mathematical dictionary containing material encountered over many years of study and professional work at NASA. This work is a compilation of the author's experience and progress in the field of study represented and consists of personal notes and observations that can be used by students in physics and engineering.

  20. Combining DCQGMP-Based Sparse Decomposition and MPDR Beamformer for Multi-Type Interferences Mitigation for GNSS Receivers.

    PubMed

    Guo, Qiang; Qi, Liangang

    2017-04-10

    In the coexistence of multiple types of interfering signals, the performance of interference suppression methods based on the time and frequency domains is seriously degraded, and the technique using an antenna array requires a sufficiently large array and entails huge hardware costs. To combat multi-type interferences better for GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method can not only improve the interference mitigation degrees of freedom (DoF) of the array antenna, but also effectively deal with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal.

  1. Combining DCQGMP-Based Sparse Decomposition and MPDR Beamformer for Multi-Type Interferences Mitigation for GNSS Receivers

    PubMed Central

    Guo, Qiang; Qi, Liangang

    2017-01-01

    In the coexistence of multiple types of interfering signals, the performance of interference suppression methods based on the time and frequency domains is seriously degraded, and the technique using an antenna array requires a sufficiently large array and entails huge hardware costs. To combat multi-type interferences better for GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method can not only improve the interference mitigation degrees of freedom (DoF) of the array antenna, but also effectively deal with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal. PMID:28394290
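
    The second-stage MPDR beamformer admits a compact numerical sketch. The snippet below is a toy illustration on an assumed 8-element uniform linear array with a single wideband jammer plus noise (no GNSS signal included); it only demonstrates the beamforming step, not the DCQGMP sparse decomposition stage.

        import numpy as np

        def mpdr_weights(R, a):
            """MPDR weights: w = R^-1 a / (a^H R^-1 a), distortionless toward a."""
            Ri_a = np.linalg.solve(R, a)
            return Ri_a / (a.conj() @ Ri_a)

        # Hypothetical 8-element half-wavelength uniform linear array.
        M = 8
        steer = lambda th: np.exp(1j * np.pi * np.arange(M) * np.sin(th))
        theta_sig, theta_jam = np.deg2rad(0.0), np.deg2rad(40.0)

        rng = np.random.default_rng(3)
        n_snap = 2000
        jam = 10 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
        noise = (rng.standard_normal((M, n_snap)) +
                 1j * rng.standard_normal((M, n_snap))) / np.sqrt(2)
        X = np.outer(steer(theta_jam), jam) + noise            # jammer-plus-noise snapshots

        R = X @ X.conj().T / n_snap                            # sample covariance
        w = mpdr_weights(R, steer(theta_sig))
        # Gain toward the jammer direction should show a deep null (large negative dB).
        print(20 * np.log10(abs(w.conj() @ steer(theta_jam))))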

  2. Implementation of a platform dedicated to the biomedical analysis terminologies management

    PubMed Central

    Cormont, Sylvie; Vandenbussche, Pierre-Yves; Buemi, Antoine; Delahousse, Jean; Lepage, Eric; Charlet, Jean

    2011-01-01

    Background and objectives. Assistance Publique - Hôpitaux de Paris (AP-HP) is implementing a new laboratory management system (LMS) common to its 12 hospital groups. The first step in this process was to acquire a biological analysis dictionary. This dictionary is interfaced with the international nomenclature LOINC and has been developed in collaboration with experts from all biological disciplines. In this paper we describe in three steps (modeling, data migration and integration/verification) the implementation of a platform for publishing and maintaining the AP-HP laboratory data dictionary (AnaBio). Material and Methods. Due to data complexity and volume, setting up a platform dedicated to terminology management was a key requirement. This is an enhancement tackling identified weaknesses of the previous spreadsheet tool. Our core model allows interoperability regarding data exchange standards and dictionary evolution. Results. We completed our goals within one year. In addition, structuring the data representation has led to a significant data quality improvement (impacting more than 10% of the data). The platform is active in the 21 hospitals of the institution, spread across 165 laboratories. PMID:22195205

  3. Automated Sentence Completion Scoring.

    ERIC Educational Resources Information Center

    Veldman, Donald J.

    A 62-item form of the sentence-completion technique requiring one-word responses was administered to 1718 undergraduates in teacher education. The data were punched on cards and lists of different responses were compiled. Responses indicating evasion, hostility, anxiety and depression were identified for each stem to form a scoring "dictionary." A…

  4. Kids' Medical Dictionary

    MedlinePlus

    ... Chronic, Cleft Lip, Cleft Palate, Cochlea, Complete Blood Count (CBC), Cone, ... Palpitations, Pancreas, Papillae, Peak Flow Meter, Pediatric Endocrinologist, Pediatrician, Peritonitis ...

  5. Automatic coding and selection of causes of death: an adaptation of Iris software for using in Brazil.

    PubMed

    Martins, Renata Cristófani; Buchalla, Cassia Maria

    2015-01-01

    To prepare a dictionary in Portuguese for use in Iris and to evaluate its completeness for coding causes of death. Initially, a dictionary with all illnesses and injuries was created based on the International Classification of Diseases, tenth revision (ICD-10) codes. This dictionary was based on two sources: the electronic file of ICD-10 volume 1 and data from the Thesaurus of the International Classification of Primary Care (ICPC-2). Then, a sample of death certificates from the Program of Improvement of Mortality Information in São Paulo (PRO-AIM) was coded manually and by Iris version V4.0.34, and the causes of death were compared. Whenever Iris was not able to code the causes of death, adjustments were made to the dictionary. Iris was able to code all causes of death in 94.4% of death certificates, but only 50.6% were coded directly, without adjustments. Among the death certificates that the software was unable to fully code, 89.2% had a diagnosis of external causes (chapter XX of ICD-10). This group of causes of death showed less agreement when comparing the Iris coding to the manual one. The software performed well, but it needs adjustments and improvements to its dictionary. In upcoming versions of the software, its developers are trying to solve the external-causes-of-death problem.

  6. Blind source separation by sparse decomposition

    NASA Astrophysics Data System (ADS)

    Zibulevsky, Michael; Pearlmutter, Barak A.

    2000-04-01

    The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, e.g., in acoustics, radio, and medical signal processing. We exploit the property of the sources of having a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions that provide faster and more robust computations when there are equal numbers of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.

  7. A Sparse Bayesian Learning Algorithm for White Matter Parameter Estimation from Compressed Multi-shell Diffusion MRI.

    PubMed

    Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe

    2017-09-01

    We propose a sparse Bayesian learning algorithm for improved estimation of white matter fiber parameters from compressed (under-sampled q-space) multi-shell diffusion MRI data. The multi-shell data are represented in a dictionary form using a non-monoexponential decay model of diffusion, based on a continuous gamma distribution of diffusivities. The fiber volume fractions with predefined orientations, which are the unknown parameters, form the dictionary weights. These unknown parameters are estimated with a linear un-mixing framework, using a sparse Bayesian learning algorithm. A localized learning of hyperparameters at each voxel and for each possible fiber orientation improves the parameter estimation. Our experiments using synthetic data from the ISBI 2012 HARDI reconstruction challenge and in-vivo data from the Human Connectome Project demonstrate the improvements.

  8. BaffleText: a Human Interactive Proof

    NASA Astrophysics Data System (ADS)

    Chew, Monica; Baird, Henry S.

    2003-01-01

    Internet services designed for human use are being abused by programs. We present a defense against such attacks in the form of a CAPTCHA (Completely Automatic Public Turing test to tell Computers and Humans Apart) that exploits the difference in ability between humans and machines in reading images of text. CAPTCHAs are a special case of 'human interactive proofs,' a broad class of security protocols that allow people to identify themselves over networks as members of given groups. We point out vulnerabilities of reading-based CAPTCHAs to dictionary and computer-vision attacks. We also draw on the literature on the psychophysics of human reading, which suggests fresh defenses available to CAPTCHAs. Motivated by these considerations, we propose BaffleText, a CAPTCHA which uses non-English pronounceable words to defend against dictionary attacks, and Gestalt-motivated image-masking degradations to defend against image restoration attacks. Experiments on human subjects confirm the human legibility and user acceptance of BaffleText images. We have found an image-complexity measure that correlates well with user acceptance and assists in engineering the generation of challenges to fit the ability gap. Recent computer-vision attacks, run independently by Mori and Jitendra, suggest that BaffleText is stronger than two existing CAPTCHAs.

  9. Sparsity-constrained PET image reconstruction with learned dictionaries

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary used to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise levels comparable to those of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those of the dictionary learned from the corresponding MR image. Achieving robust performance in simulations at various noise levels and in patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.

  10. Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.

    PubMed

    McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark

    2018-07-01

    To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
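
    The partial-volume idea above (a mixed signal as a nonnegative combination of a few dictionary entries) can be illustrated with a much simpler solver than the paper's iterative Bayesian MAP algorithm: nonnegative least squares over a stand-in dictionary, followed by a threshold on small weights. The dictionary, noise level, and threshold below are all assumptions for the demonstration.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(4)

        # Stand-in dictionary: columns are (random) signal evolutions; real entries
        # would come from Bloch simulations over a T1/T2 grid.
        n_time, n_entries = 500, 200
        D = rng.standard_normal((n_time, n_entries))
        D /= np.linalg.norm(D, axis=0)

        true_frac = np.zeros(n_entries)
        true_frac[[10, 57]] = [0.7, 0.3]                       # two tissues in one voxel
        y = D @ true_frac + 0.01 * rng.standard_normal(n_time)

        w, _ = nnls(D, y)                                      # nonnegative weights
        w[w < 0.05] = 0.0                                      # drop insignificant entries
        w /= w.sum()                                           # volume fractions
        print(np.nonzero(w)[0], np.round(w[w > 0], 2))         # expected: [10 57], ~[0.7 0.3]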

  11. SDL: Saliency-Based Dictionary Learning Framework for Image Similarity.

    PubMed

    Sarkar, Rituparna; Acton, Scott T

    2018-02-01

    In image classification, obtaining adequate data to learn a robust classifier has often proven to be difficult in several scenarios. Classification of histological tissue images for health care analysis is a notable application in this context due to the necessity of surgery, biopsy or autopsy. To adequately exploit limited training data in classification, we propose a saliency guided dictionary learning method and subsequently an image similarity technique for histo-pathological image classification. Salient object detection from images aids in the identification of discriminative image features. We leverage the saliency values for the local image regions to learn a dictionary and respective sparse codes for an image, such that the more salient features are reconstructed with smaller error. The dictionary learned from an image gives a compact representation of the image itself and is capable of representing images with similar content, with comparable sparse codes. We employ this idea to design a similarity measure between a pair of images, where local image features of one image, are encoded with the dictionary learned from the other and vice versa. To effectively utilize the learned dictionary, we take into account the contribution of each dictionary atom in the sparse codes to generate a global image representation for image comparison. The efficacy of the proposed method was evaluated using three tissue data sets that consist of mammalian kidney, lung and spleen tissue, breast cancer, and colon cancer tissue images. From the experiments, we observe that our methods outperform the state of the art with an increase of 14.2% in the average classification accuracy over all data sets.

  12. Segmentation of thalamus from MR images via task-driven dictionary learning

    NASA Astrophysics Data System (ADS)

    Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D.; Prince, Jerry L.

    2016-03-01

    Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.

  13. Word-addressable holographic memory using symbolic substitution and SLRs

    NASA Astrophysics Data System (ADS)

    McAulay, Alastair D.; Wang, Junqing

    1990-12-01

    A heteroassociative memory is proposed that allows a key word in a dictionary of key words to be used to recall an associated holographic image in a database of images. A symbolic substitution search finds the word sought in the dictionary of key words and generates a beam that selects the corresponding holographic image from a directory of images. In this case, symbolic substitution is used to orthogonalize the key words. Spatial light rebroadcasters are proposed for the key word database. Experimental results demonstrate that symbolic substitution enables a holographic image to be selected and reconstructed. In the case considered, a holographic image of over 40,000 bits is selected from among eight by using a key word from a dictionary of eight words.

  14. Ambulatory Care Data Base (ACDB) Data Dictionary Sequential Files of Phase 2.

    DTIC Science & Technology

    1992-04-01

    CEREBRAL ARTERIES 4340 CEREBRAL THROMBOSIS 43491 STROKE, ISCHEMIC 435 TRANSIENT CEREBRAL ISCHEMIA 4359 TRANSIENT ISCHEMIC ATTACK (TIA) 43591 TRANS...HYPERTENSIVE CRISIS 4373 ANEURYSM, CEREBRAL, NONRUPTURED 4374 ARTERITIS, CEREBRAL 4378 CEREBROVASCULAR DISEASE, OTHER ILL-DEFINED 43781 STROKE, LACUNAR...95950 MONITORING FOR LOCALIZATION OF CEREBRAL SEIZURE FOC 95999 OTHER NEUROLOGICAL DIAGNOSTIC PROCEDURES 96500 CHEMO INJ, SINGLE, PRE-MIX, PUSH 96501

  15. A Relational Data Dictionary Compatible with the National Bureau of Standards Information Resource Dictionary System.

    DTIC Science & Technology

    1985-12-01

    Concern over corporate information resources has resulted from the explosive growth in the size, complexity and number of data bases available to...validity, relevance, and usability of the data that is available. As a result, there has been a growing interest in two tools which...provide

  16. Optimal Achievable Encoding for Brain Machine Interface

    DTIC Science & Technology

    2017-12-22

    dictionary-based encoding approach to translate a visual image into sequential patterns of electrical stimulation in real time, in a manner that...networks, and by applying linear decoding to complete recorded populations of retinal ganglion cells for the first time. Third, we developed a greedy

  17. Booksearch: What Dictionary (General or Specialized) Do You Find Useful or Interesting for Students?

    ERIC Educational Resources Information Center

    English Journal, 1988

    1988-01-01

    Presents classroom teachers' recommendations for a variety of dictionaries that may heighten students' interest in language: a reverse dictionary, a visual dictionary, WEIGHTY WORD BOOK, a collegiate desk dictionary, OXFORD ENGLISH DICTIONARY, DICTIONARY OF AMERICAN REGIONAL ENGLISH, and a dictionary of idioms. (ARH)

  18. Ambulatory Care Data Base (ACDB) Data Dictionary Sequential Files of Phase 2.

    DTIC Science & Technology

    1992-04-01

    ARTERIES 4340 CEREBRAL THROMBOSIS 43491 STROKE, ISCHEMIC 435 TRANSIENT CEREBRAL ISCHEMIA 4359 TRANSIENT ISCHEMIC ATTACK (TIA) 43591 TRANS ISCHEMIC ATTACK W...CRISIS 4373 ANEURYSM, CEREBRAL, NONRUPTURED 4374 ARTERITIS, CEREBRAL 4378 CEREBROVASCULAR DISEASE, OTHER ILL-DEFINED 43781 STROKE, LACUNAR 438 LATE...FOR LOCALIZATION OF CEREBRAL SEIZURE FOC 95999 OTHER NEUROLOGICAL DIAGNOSTIC PROCEDURES 96500 CHEMO INJ, SINGLE, PRE-MIX, PUSH 96501 CHEMO INJ, SINGLE

  19. Application of a sparse representation method using K-SVD to data compression of experimental ambient vibration data for SHM

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Kiremidjian, Anne S.

    2011-04-01

    This paper introduces a data compression method using the K-SVD algorithm and its application to experimental ambient vibration data for structural health monitoring purposes. Because many damage diagnosis algorithms that use system identification require vibration measurements at multiple locations, it is necessary to transmit long threads of data. In wireless sensor networks for structural health monitoring, however, data transmission is often a major source of battery consumption. Therefore, reducing the amount of data to transmit can significantly lengthen the battery life and reduce maintenance cost. The K-SVD algorithm was originally developed in information theory for sparse signal representation. This algorithm creates an optimal over-complete set of bases, referred to as a dictionary, using singular value decomposition (SVD) and represents the data as sparse linear combinations of these bases using the orthogonal matching pursuit (OMP) algorithm. Since ambient vibration data are stationary, we can segment them and represent each segment sparsely. Then only the dictionary and the sparse coefficient vectors need to be transmitted wirelessly for restoration of the original data. We applied this method to ambient vibration data measured from a four-story steel moment-resisting frame. The results show that the method can compress the data efficiently and restore the data with very little error.
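
    The segment-learn-encode-reconstruct pipeline described above can be sketched as follows. This is a hedged stand-in on a synthetic vibration record: sklearn's dictionary learner and OMP coder replace the K-SVD implementation used in the paper, and the segment length, atom count, and sparsity level are arbitrary assumptions.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        rng = np.random.default_rng(5)

        # Stand-in "ambient vibration" record: two sinusoids plus noise.
        fs, n_seg, seg_len = 100, 200, 256
        t = np.arange(n_seg * seg_len) / fs
        signal = (np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 4.2 * t)
                  + 0.05 * rng.standard_normal(t.size))

        segments = signal.reshape(n_seg, seg_len)              # stationary => segment

        dl = MiniBatchDictionaryLearning(n_components=64,
                                         transform_algorithm='omp',
                                         transform_n_nonzero_coefs=8,
                                         random_state=0)
        codes = dl.fit_transform(segments)                     # sparse coefficients (OMP)
        D = dl.components_                                     # learned dictionary

        # Only the dictionary and the nonzero coefficients would be transmitted.
        recon = codes @ D
        err = np.linalg.norm(segments - recon) / np.linalg.norm(segments)
        sent = D.size + 2 * np.count_nonzero(codes)            # values + indices, roughly
        print(f"relative error {err:.3f}, compression ratio {segments.size / sent:.1f}x")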

  20. Learners' Dictionaries: State of the Art. Anthology Series 23.

    ERIC Educational Resources Information Center

    Tickoo, Makhan L., Ed.

    A collection of articles on dictionaries for advanced second language learners includes essays on the past, present, and future of learners' dictionaries; alternative dictionaries; dictionary construction; and dictionaries and their users. Titles include: "Idle Thoughts of an Idle Fellow; or Vaticinations on the Learners' Dictionary"…

  1. The SMAP Dictionary Management System

    NASA Technical Reports Server (NTRS)

    Smith, Kevin A.; Swan, Christoper A.

    2014-01-01

    The Soil Moisture Active Passive (SMAP) Dictionary Management System is a web-based tool to develop and store a mission dictionary. A mission dictionary defines the interface between a ground system and a spacecraft. In recent years, mission dictionaries have grown in size and scope, making it difficult for engineers across multiple disciplines to coordinate the dictionary development effort. The Dictionary Management System addresses these issues by placing all dictionary information in one place, taking advantage of the efficiencies inherent in co-locating what were once disparate dictionary development efforts.

  2. Spectral Unmixing With Multiple Dictionaries

    NASA Astrophysics Data System (ADS)

    Cohen, Jeremy E.; Gillis, Nicolas

    2018-02-01

    Spectral unmixing aims at recovering the spectral signatures of materials, called endmembers, mixed in a hyperspectral or multispectral image, along with their abundances. A typical assumption is that the image contains one pure pixel per endmember, in which case spectral unmixing reduces to identifying these pixels. Many fully automated methods have been proposed in recent years, but little work has been done to allow users to select areas where pure pixels are present, either manually or using a segmentation algorithm. Additionally, in a non-blind approach, several spectral libraries may be available rather than a single one, with a fixed number (or an upper or lower bound) of endmembers to choose from each. In this paper, we propose a multiple-dictionary constrained low-rank matrix approximation model that addresses these two problems. We propose an algorithm to compute this model, dubbed M2PALS, and its performance is discussed on both synthetic and real hyperspectral images.
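
    A much simpler baseline than M2PALS conveys the multiple-library setting: stack the candidate libraries, estimate nonnegative abundances for a pixel, and report which library each active endmember came from. Everything below (the Gaussian-bump spectra, noise level, and 0.05 abundance cutoff) is an assumption for illustration, not the authors' algorithm or data.

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical libraries of Gaussian-shaped endmember spectra (not real data).
        n_bands = 50
        bands = np.linspace(0.4, 2.5, n_bands)                 # wavelengths, micrometres
        make_lib = lambda centres: np.stack(
            [np.exp(-((bands - c) / 0.15) ** 2) for c in centres], axis=1)
        lib_a = make_lib([0.6, 0.9, 1.3, 1.9])                 # library A (4 endmembers)
        lib_b = make_lib([0.5, 0.7, 1.0, 1.5, 2.1, 2.3])       # library B (6 endmembers)
        E = np.hstack([lib_a, lib_b])
        labels = ['A'] * lib_a.shape[1] + ['B'] * lib_b.shape[1]

        rng = np.random.default_rng(6)
        abund_true = np.zeros(E.shape[1]); abund_true[[1, 8]] = [0.6, 0.4]
        pixel = E @ abund_true + 0.005 * rng.standard_normal(n_bands)

        a, _ = nnls(E, pixel)                                  # nonnegative abundances
        a /= a.sum()                                           # sum-to-one normalisation
        for i in np.nonzero(a > 0.05)[0]:
            print(f"endmember {i} (library {labels[i]}): abundance {a[i]:.2f}")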

  3. Dictionaries: British and American. The Language Library.

    ERIC Educational Resources Information Center

    Hulbert, James Root

    An account of the dictionaries, great and small, of the English-speaking world is given in this book. Subjects covered include the origin of English dictionaries, early dictionaries, Noah Webster and his successors to the present, abridged dictionaries, "The Oxford English Dictionary" and later dictionaries patterned after it, the…

  4. Learning a common dictionary for subject-transfer decoding with resting calibration.

    PubMed

    Morioka, Hiroshi; Kanemura, Atsunori; Hirayama, Jun-ichiro; Shikauchi, Manabu; Ogawa, Takeshi; Ikeda, Shigeyuki; Kawanabe, Motoaki; Ishii, Shin

    2015-05-01

    Brain signals measured over a series of experiments have inherent variability because of different physical and mental conditions among multiple subjects and sessions. Such variability complicates the analysis of data from multiple subjects and sessions in a consistent way, and degrades the performance of subject-transfer decoding in a brain-machine interface (BMI). To accommodate the variability in brain signals, we propose 1) a method for extracting spatial bases (or a dictionary) shared by multiple subjects, by employing a signal-processing technique of dictionary learning modified to compensate for variations between subjects and sessions, and 2) an approach to subject-transfer decoding that uses the resting-state activity of a previously unseen target subject as calibration data for compensating for variations, eliminating the need for a standard calibration based on task sessions. Applying our methodology to a dataset of electroencephalography (EEG) recordings during a selective visual-spatial attention task from multiple subjects and sessions, where the variability compensation was essential for reducing the redundancy of the dictionary, we found that the extracted common brain activities were reasonable in the light of neuroscience knowledge. The applicability to subject-transfer decoding was confirmed by improved performance over existing decoding methods. These results suggest that analyzing multisubject brain activities on common bases by the proposed method enables information sharing across subjects with low-burden resting calibration, and is effective for practical use of BMI in variable environments. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. The semantics of Chemical Markup Language (CML): dictionaries and conventions.

    PubMed

    Murray-Rust, Peter; Townsend, Joe A; Adams, Sam E; Phadungsukanan, Weerapong; Thomas, Jens

    2011-10-14

    The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs.

  6. The semantics of Chemical Markup Language (CML): dictionaries and conventions

    PubMed Central

    2011-01-01

    The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs. PMID:21999509

  7. HMA runoff data

    EPA Pesticide Factsheets

    Excel workbook; the first sheet is a data dictionary and the second sheet contains the data, representing the abstraction for events with a short antecedent dry period (less than 24 hr). This dataset is associated with the following publication: Brown, R., and M. Borst. Evaluating the Accuracy of Common Runoff Estimation Methods for New Impervious Hot-Mix Asphalt. Journal of Sustainable Water in the Built Environment. American Society of Civil Engineers (ASCE), New York, NY, USA, online, (2015).

  8. Dictionary of environmental quotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodes, B.K.; Odell, R.

    1997-12-31

    Here are more than 3,700 quotations in 143 categories -- from Acid Rain to Zoos -- that provide a comprehensive collection of the wise and witty observations about the natural environment. The dictionary will delight, provoke, and inform readers. It is at once stimulating, entertaining, and enlightening, with quotations that provide a complete range of human thought about nature and the environment. Quotations have been drawn from a variety of documented sources, including poems, proverbs, slogans, radio and television, congressional hearings, magazines, and newspapers. The authors of the quotes range from a philosopher in pre-Christian times to a contemporary economist, from a poet who speaks of forests to an engineer concerned with air pollutants.

  9. Dynamic Textures Modeling via Joint Video Dictionary Learning.

    PubMed

    Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng

    2017-04-06

    Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes that can be modeled in a dynamic textures (DT) framework. First, we assume that the image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between two adjacent frames of sparse events in the time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on the transition matrix and the dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both the sparse properties and the temporal correlations of consecutive video frames. Moreover, the learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. In particular, it performs significantly better in dealing with DT synthesis and recognition on heavily corrupted data.
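
    The core "linear transition between sparse codes" idea can be illustrated in isolation. The sketch below generates synthetic sparse-code trajectories from a stable transition matrix and recovers it by least squares; in JVDL the dictionary and transition matrix are learned jointly under stability constraints, which is omitted here.

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic sparse codes s_t of consecutive frames over a (hypothetical)
        # learned dictionary, generated from a stable linear transition A_true.
        k, T = 50, 200                                         # atoms, frames
        M = rng.standard_normal((k, k))
        A_true = 0.95 * M / np.linalg.norm(M, 2)               # spectral norm < 1 => stable
        S = np.zeros((k, T))
        S[:, 0] = rng.standard_normal(k)
        for t in range(1, T):
            S[:, t] = A_true @ S[:, t - 1] + 0.01 * rng.standard_normal(k)

        # Least-squares fit of the transition matrix: s_{t+1} ~= A s_t.
        S_prev, S_next = S[:, :-1], S[:, 1:]
        A_hat = S_next @ np.linalg.pinv(S_prev)
        print(np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))  # relative error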

  10. Social media analysis during political turbulence

    PubMed Central

    Spiliotopoulos, Dimitris; V. Samaras, Christos; Pratikakis, Polyvios; Ioannidis, Sotiris; Fragopoulou, Paraskevi

    2017-01-01

    Today, a considerable proportion of the public political discourse on nationwide elections proceeds in Online Social Networks. Through analyzing this content, we can discover the major themes that prevailed during the discussion, investigate the temporal variation of positive and negative sentiment and examine the semantic proximity of these themes. According to existing studies, the results of similar tasks are heavily dependent on the quality and completeness of dictionaries for linguistic preprocessing, entity discovery and sentiment analysis. Additionally, noise reduction is achieved with methods for sarcasm detection and correction. Here we report on the application of these methods on the complete corpus of tweets regarding two local electoral events of worldwide impact: the Greek referendum of 2015 and the subsequent legislative elections. To this end, we compiled novel dictionaries for sentiment and entity detection for the Greek language tailored to these events. We subsequently performed volume analysis, sentiment analysis, sarcasm correction and topic modeling. Results showed that there was a strong anti-austerity sentiment accompanied with a critical view on European and Greek political actions. PMID:29088263

  11. Social media analysis during political turbulence.

    PubMed

    Antonakaki, Despoina; Spiliotopoulos, Dimitris; V Samaras, Christos; Pratikakis, Polyvios; Ioannidis, Sotiris; Fragopoulou, Paraskevi

    2017-01-01

    Today, a considerable proportion of the public political discourse on nationwide elections proceeds in Online Social Networks. Through analyzing this content, we can discover the major themes that prevailed during the discussion, investigate the temporal variation of positive and negative sentiment and examine the semantic proximity of these themes. According to existing studies, the results of similar tasks are heavily dependent on the quality and completeness of dictionaries for linguistic preprocessing, entity discovery and sentiment analysis. Additionally, noise reduction is achieved with methods for sarcasm detection and correction. Here we report on the application of these methods on the complete corpus of tweets regarding two local electoral events of worldwide impact: the Greek referendum of 2015 and the subsequent legislative elections. To this end, we compiled novel dictionaries for sentiment and entity detection for the Greek language tailored to these events. We subsequently performed volume analysis, sentiment analysis, sarcasm correction and topic modeling. Results showed that there was a strong anti-austerity sentiment accompanied with a critical view on European and Greek political actions.
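
    At its simplest, the dictionary-based sentiment step amounts to counting lexicon hits per tweet. The toy scorer below uses invented English stand-in words; the study itself used purpose-built Greek lexicons and additional steps (entity detection, sarcasm correction, topic modeling) not shown here.

        # Toy illustration of dictionary-based sentiment scoring.
        positive = {"hope", "win", "support", "great"}      # invented stand-in lexicons
        negative = {"austerity", "crisis", "fail", "angry"}

        def sentiment_score(tweet):
            """Return (#positive - #negative) lexicon hits for one tweet."""
            tokens = tweet.lower().split()
            return sum(t in positive for t in tokens) - sum(t in negative for t in tokens)

        tweets = ["Great support for the campaign", "Austerity crisis makes people angry"]
        for tw in tweets:
            print(tw, "->", sentiment_score(tw))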

  12. Mixed Methods for Mixed Reality: Understanding Users' Avatar Activities in Virtual Worlds

    ERIC Educational Resources Information Center

    Feldon, David F.; Kafai, Yasmin B.

    2008-01-01

    This paper examines the use of mixed methods for analyzing users' avatar-related activities in a virtual world. Server logs recorded keystroke-level activity for 595 participants over a six-month period in Whyville.net, an informal science website. Participants also completed surveys and participated in interviews regarding their experiences.…

  13. The Role of Dictionaries in Language Learning.

    ERIC Educational Resources Information Center

    White, Philip A.

    1997-01-01

    Examines assumptions about dictionaries, especially the bilingual dictionary, and suggests ways of integrating the monolingual dictionary into the second-language instructional process. Findings indicate that the monolingual dictionary can coexist with bilingual dictionaries within a foreign-language course if the latter are appropriately used as…

  14. Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization.

    PubMed

    Gao, Shenghua; Tsang, Ivor Wai-Hung; Ma, Yi

    2014-02-01

    This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.

  15. Which Dictionary? A Review of the Leading Learners' Dictionaries.

    ERIC Educational Resources Information Center

    Nesi, Hilary

    Three major dictionaries designed for learners of English as a second language are reviewed, their elements and approaches compared and evaluated, their usefulness for different learners discussed, and recommendations for future dictionary improvement made. The dictionaries in question are the "Oxford Advanced Learner's Dictionary," the…

  16. French Dictionaries. Series: Specialised Bibliographies.

    ERIC Educational Resources Information Center

    Klaar, R. M.

    This is a list of French monolingual, French-English and English-French dictionaries available in December 1975. Dictionaries of etymology, phonetics, place names, proper names, and slang are included, as well as dictionaries for children and dictionaries of Belgian, Canadian, and Swiss French. Most other specialized dictionaries, encyclopedias,…

  17. Education Industry, Spring 2008

    DTIC Science & Technology

    2008-01-01

    lost over $2 billion in business revenue due to cultural misunderstandings, and over 30% of the firms stated that a monolingual workforce... [1] The American Heritage Dictionary of the English Language, Fourth Edition, Houghton Mifflin Company (2000): 569. [2] Ibid., 895. [3] The New

  18. Which Desk Dictionary Is Best for Foreign Students of English?

    ERIC Educational Resources Information Center

    Yorkey, Richard

    1969-01-01

    "The American College Dictionary, "Funk and Wagnalls Standard College Dictionary," Webster's New World Dictionary of the American Language," The Random House Dictionary of the English Language," and Webster's Seventh New Collegiate Dictionary" are analyzed and ranked as to their usefulness for the foreign learner of English. (FWB)

  19. The Vertical Dust Profile Over Gale Crater, Mars

    NASA Astrophysics Data System (ADS)

    Guzewich, Scott D.; Newman, C. E.; Smith, M. D.; Moores, J. E.; Smith, C. L.; Moore, C.; Richardson, M. I.; Kass, D.; Kleinböhl, A.; Mischna, M.; Martín-Torres, F. J.; Zorzano-Mier, M.-P.; Battalio, M.

    2017-12-01

    We create a vertically coarse, but complete, profile of dust mixing ratio from the surface to the upper atmosphere over Gale Crater, Mars, using the frequent joint atmospheric observations of the orbiting Mars Climate Sounder (MCS) and the Mars Science Laboratory Curiosity rover. Using these data and an estimate of planetary boundary layer (PBL) depth from the MarsWRF general circulation model, we divide the vertical column into three regions. The first region is the Gale Crater PBL, the second is the MCS-sampled region, and the third is between these first two. We solve for a well-mixed dust mixing ratio within this third (middle) layer of atmosphere to complete the profile. We identify a unique seasonal cycle of dust within each atmospheric layer. Within the Gale PBL, dust mixing ratio maximizes near southern hemisphere summer solstice (Ls = 270°) and minimizes near winter solstice (Ls = 90-100°) with a smooth sinusoidal transition between them. However, the layer above Gale Crater and below the MCS-sampled region more closely follows the global opacity cycle and has a maximum in opacity near Ls = 240° and exhibits a local minimum (associated with the "solsticial pause" in dust storm activity) near Ls = 270°. With knowledge of the complete vertical dust profile, we can also assess the frequency of high-altitude dust layers over Gale. We determine that 36% of MCS profiles near Gale Crater contain an "absolute" high-altitude dust layer wherein the dust mixing ratio is the maximum in the entire vertical column.
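
    The closure step for the middle layer can be expressed as a short opacity budget. The arithmetic below is illustrative only: the opacities, pressure bounds, and dust extinction efficiency are assumptions for the sketch, not values or the exact retrieval procedure from the paper.

        # Illustrative only: assign the leftover column dust opacity to a well-mixed
        # middle layer and convert it to a mass mixing ratio via the layer's air mass.
        g = 3.71                       # Mars surface gravity, m s^-2
        k_ext = 600.0                  # assumed dust mass extinction efficiency, m^2 kg^-1

        tau_col, tau_pbl, tau_mcs = 0.8, 0.35, 0.25            # illustrative opacities
        p_bot, p_top = 400.0, 100.0                            # middle-layer bounds, Pa

        tau_mid = tau_col - tau_pbl - tau_mcs                  # opacity left for the middle layer
        air_mass = (p_bot - p_top) / g                         # kg of air per m^2 in the layer
        q_mid = tau_mid / (k_ext * air_mass)                   # well-mixed dust mass mixing ratio
        print(f"q_mid ~ {q_mid:.1e} kg/kg")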

  20. FBRDLR: Fast blind reconstruction approach with dictionary learning regularization for infrared microscopy spectra

    NASA Astrophysics Data System (ADS)

    Liu, Tingting; Liu, Hai; Chen, Zengzhao; Chen, Yingying; Wang, Shengming; Liu, Zhi; Zhang, Hao

    2018-05-01

    Infrared (IR) spectra are the fingerprints of molecules, and the spectral band locations closely relate to the structure of a molecule. Thus, specimen identification can be performed based on IR spectroscopy. However, spectrally overlapping components prevent the specific identification of hyperfine molecular information of different substances. In this paper, we propose a fast blind reconstruction approach for IR spectra, which is based on sparse and redundant representations over a dictionary. The proposed method recovers the spectrum using a discrete wavelet transform dictionary adapted to its content. The experimental results demonstrate that the proposed method outperforms other state-of-the-art methods. The method also removes the instrument aging issue to a large extent, making the reconstructed IR spectra a more convenient tool for extracting and interpreting the features of an unknown material.
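
    The "sparse over a discrete wavelet dictionary" ingredient can be illustrated with a basic wavelet shrinkage pass on a synthetic spectrum. This is only a stand-in for the sparsity prior: the FBRDLR method is a blind reconstruction that also handles the unknown instrument degradation, which is not modeled below.

        import numpy as np
        import pywt

        rng = np.random.default_rng(8)

        # Stand-in "IR spectrum": two absorption-like bands plus noise.
        x = np.linspace(0, 1, 1024)
        clean = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.02) ** 2)
        noisy = clean + 0.05 * rng.standard_normal(x.size)

        # Sparse representation over a discrete wavelet dictionary: decompose,
        # soft-threshold the detail coefficients, reconstruct.
        coeffs = pywt.wavedec(noisy, 'db4', level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745         # noise level estimate
        thr = sigma * np.sqrt(2 * np.log(noisy.size))          # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
        restored = pywt.waverec(coeffs, 'db4')[:noisy.size]

        print(np.linalg.norm(restored - clean) / np.linalg.norm(clean))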

  1. Development and validation of a complementary map to enhance the existing 1998 to 2008 Abbreviated Injury Scale map

    PubMed Central

    2011-01-01

    Introduction: Many trauma registries have used the Abbreviated Injury Scale 1990 Revision Update 98 (AIS98) to classify injuries. In the current AIS version (Abbreviated Injury Scale 2005 Update 2008 - AIS08), injury classification and specificity differ substantially from AIS98, and the mapping tools provided in the AIS08 dictionary are incomplete. As a result, data from different AIS versions cannot currently be compared. The aim of this study was to develop an additional AIS98 to AIS08 mapping tool to complement the current AIS dictionary map, and then to evaluate the completed map (produced by combining these two maps) using double-coded data. The value of additional information provided by free text descriptions accompanying assigned codes was also assessed. Methods: Using a modified Delphi process, a panel of expert AIS coders established plausible AIS08 equivalents for the 153 AIS98 codes which currently have no AIS08 map. A series of major trauma patients whose injuries had been double-coded in AIS98 and AIS08 was used to assess the maps; both of the AIS datasets had already been mapped to another AIS version using the AIS dictionary maps. Following application of the completed (enhanced) map with or without free text evaluation, up to six AIS codes were available for each injury. Datasets were assessed for agreement in injury severity measures, and the relative performances of the maps in accurately describing the trauma population were evaluated. Results: The double-coded injuries sustained by 109 patients were used to assess the maps. For data conversion from AIS98, both the enhanced map and the enhanced map with free text description resulted in higher levels of accuracy and agreement with directly coded AIS08 data than the currently available dictionary map. Paired comparisons demonstrated significant differences between direct coding and the dictionary maps, but not with either of the enhanced maps. Conclusions: The newly-developed AIS98 to AIS08 complementary map enabled transformation of the trauma population description given by AIS98 into an AIS08 estimate which was statistically indistinguishable from directly coded AIS08 data. It is recommended that the enhanced map should be adopted for dataset conversion, using free text descriptions if available. PMID:21548991

  2. Development and validation of a complementary map to enhance the existing 1998 to 2008 Abbreviated Injury Scale map.

    PubMed

    Palmer, Cameron S; Franklyn, Melanie; Read-Allsopp, Christine; McLellan, Susan; Niggemeyer, Louise E

    2011-05-08

    Many trauma registries have used the Abbreviated Injury Scale 1990 Revision Update 98 (AIS98) to classify injuries. In the current AIS version (Abbreviated Injury Scale 2005 Update 2008 - AIS08), injury classification and specificity differ substantially from AIS98, and the mapping tools provided in the AIS08 dictionary are incomplete. As a result, data from different AIS versions cannot currently be compared. The aim of this study was to develop an additional AIS98 to AIS08 mapping tool to complement the current AIS dictionary map, and then to evaluate the completed map (produced by combining these two maps) using double-coded data. The value of additional information provided by free text descriptions accompanying assigned codes was also assessed. Using a modified Delphi process, a panel of expert AIS coders established plausible AIS08 equivalents for the 153 AIS98 codes which currently have no AIS08 map. A series of major trauma patients whose injuries had been double-coded in AIS98 and AIS08 was used to assess the maps; both of the AIS datasets had already been mapped to another AIS version using the AIS dictionary maps. Following application of the completed (enhanced) map with or without free text evaluation, up to six AIS codes were available for each injury. Datasets were assessed for agreement in injury severity measures, and the relative performances of the maps in accurately describing the trauma population were evaluated. The double-coded injuries sustained by 109 patients were used to assess the maps. For data conversion from AIS98, both the enhanced map and the enhanced map with free text description resulted in higher levels of accuracy and agreement with directly coded AIS08 data than the currently available dictionary map. Paired comparisons demonstrated significant differences between direct coding and the dictionary maps, but not with either of the enhanced maps. The newly-developed AIS98 to AIS08 complementary map enabled transformation of the trauma population description given by AIS98 into an AIS08 estimate which was statistically indistinguishable from directly coded AIS08 data. It is recommended that the enhanced map should be adopted for dataset conversion, using free text descriptions if available.

  3. Change detection in Arctic satellite imagery using clustering of sparse approximations (CoSA) over learned feature dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Wilson, Cathy J.; Rowland, Joel C.; Altmann, Garrett L.

    2015-06-01

    Advanced pattern recognition and computer vision algorithms are of great interest for landscape characterization, change detection, and change monitoring in satellite imagery, in support of global climate change science and modeling. We present results from an ongoing effort to extend neuroscience-inspired models for feature extraction to the environmental sciences, and we demonstrate our work using Worldview-2 multispectral satellite imagery. We use a Hebbian learning rule to derive multispectral, multiresolution dictionaries directly from regional satellite normalized band difference index data. These feature dictionaries are used to build sparse scene representations, from which we automatically generate land cover labels via our CoSA algorithm: Clustering of Sparse Approximations. These data adaptive feature dictionaries use joint spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. Land cover labels are estimated in example Worldview-2 satellite images of Barrow, Alaska, taken at two different times, and are used to detect and discuss seasonal surface changes. Our results suggest that an approach that learns from both spectral and spatial features is promising for practical pattern recognition problems in high resolution satellite imagery.
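
    The CoSA pipeline itself is not reproduced here; the sketch below only mirrors the pattern the abstract describes, under the assumption of synthetic multispectral patches: learn a dictionary, sparse-code each patch, then cluster the codes with k-means to obtain unsupervised land-cover labels. All array sizes and parameters are placeholders.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(0)
    # Hypothetical data: 5000 flattened multispectral patches (5 x 5 pixels x 8 bands).
    patches = rng.standard_normal((5000, 5 * 5 * 8))

    # Learn an over-complete dictionary and sparse-code every patch with OMP.
    dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5,
                                       random_state=0)
    codes = dico.fit(patches).transform(patches)

    # Cluster the sparse approximations to produce land-cover-style labels.
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(codes)
    ```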

  4. Low-dose CT image reconstruction using gain intervention-based dictionary learning

    NASA Astrophysics Data System (ADS)

    Pathak, Yadunath; Arya, K. V.; Tiwari, Shailendra

    2018-05-01

    Computed tomography (CT) is extensively utilized in clinical diagnosis. However, X-ray exposure of the human body may introduce somatic damage such as cancer. Owing to this radiation risk, research has focused on the radiation exposure delivered to patients through CT investigations. Therefore, low-dose CT has become a significant research area. Many researchers have proposed different low-dose CT reconstruction techniques. However, these techniques suffer from various issues such as over-smoothing, artifacts, and noise. Therefore, in this paper, we propose a novel integrated low-dose CT reconstruction technique. The proposed technique utilizes both global dictionary-based statistical iterative reconstruction (GDSIR) and adaptive dictionary-based statistical iterative reconstruction (ADSIR). If the dictionary (D) is predetermined, GDSIR can be used; if D is adaptively defined, ADSIR is the appropriate choice. The gain intervention-based filter is also used as a post-processing technique for removing artifacts from reconstructed low-dose CT images. Experiments have been conducted with the proposed and other low-dose CT reconstruction techniques on well-known benchmark CT images. Extensive experiments have shown that the proposed technique outperforms the available approaches.

  5. When Cancer Returns

    MedlinePlus

  6. Coping with Advanced Cancer

    MedlinePlus

  7. High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.

    PubMed

    Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei

    2017-07-01

    Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.
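
    The exact embedding used in the paper is not reproduced here; the sketch below shows one standard way to map a Gaussian into a Euclidean space so that ordinary coding operations apply: lift N(mu, Sigma) to an SPD matrix and take its matrix logarithm. The feature dimensions and data are assumptions.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def gaussian_embedding(features):
        """Model a bin of local features as a Gaussian and embed it in Euclidean space.

        N(mu, Sigma) is lifted to the (d+1)x(d+1) SPD matrix
        [[Sigma + mu mu^T, mu], [mu^T, 1]]; its matrix logarithm lives in a vector
        space, so Euclidean coding can be applied to the result.
        """
        mu = features.mean(axis=0)
        sigma = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
        d = mu.size
        spd = np.empty((d + 1, d + 1))
        spd[:d, :d] = sigma + np.outer(mu, mu)
        spd[:d, d] = mu
        spd[d, :d] = mu
        spd[d, d] = 1.0
        log_spd = np.real(logm(spd))
        return log_spd[np.triu_indices(d + 1)]   # vectorize the symmetric matrix

    # Hypothetical configuration bin holding 200 eight-dimensional descriptors.
    vec = gaussian_embedding(np.random.default_rng(0).standard_normal((200, 8)))
    ```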

  8. Thinking about Complementary and Alternative Medicine

    MedlinePlus

  9. Caring for the Caregiver

    MedlinePlus

  10. The Use of Monolingual Mobile Dictionaries in the Context of Reading by Intermediate Cantonese EFL Learners in Hong Kong

    ERIC Educational Resources Information Center

    Zou, Di; Xie, Haoran; Wang, Fu Lee

    2015-01-01

    Previous studies on dictionary consultation investigated mainly online dictionaries or simple pocket electronic dictionaries, as these were commonly used among learners at the time, yet the more recent mobile dictionaries have been only superficially investigated even though they have already replaced pocket electronic dictionaries. These studies are also…

  11. The Power of Math Dictionaries in the Classroom

    ERIC Educational Resources Information Center

    Patterson, Lynn Gannon; Young, Ashlee Futrell

    2013-01-01

    This article investigates the value of a math dictionary in the elementary classroom and if elementary students prefer using a traditional math dictionary or a dictionary on an iPad. In each child's journey to reading with understanding, the dictionary can be a comforting and valuable resource. Would students find a math dictionary to be a…

  12. Cancer Information Summaries: Screening/Detection

    MedlinePlus

  13. Children with Cancer: A Guide for Parents

    MedlinePlus

  14. Person-Environment Congruence as a Predictor of Customer Service Performance.

    ERIC Educational Resources Information Center

    Fritzsche, Barbara A.; Powell, Amy B.; Hoffman, Russell

    1999-01-01

    Customer service representatives (n=90) completed the Position Classification Inventory (PCI), Self-Directed Search, and a cognitive ability test. PCI was similar to the Dictionary of Holland Occupational Codes in predicting performance. Cognitive ability was not significantly correlated with performance. Person/environment fit was supported as a…

  15. Student Outcomes 2009: Data Dictionary. Support Document

    ERIC Educational Resources Information Center

    National Centre for Vocational Education Research (NCVER), 2009

    2009-01-01

    This document was produced as an added resource for the report "Outcomes from the Productivity Places Program, 2009." The study reported the outcomes for students who completed their vocational education and training (VET) under the Productivity Places Program (PPP) during 2008. This document presents an alphabetical arrangement of the…

  16. Assistive Software Tools for Secondary-Level Students with Literacy Difficulties

    ERIC Educational Resources Information Center

    Lange, Alissa A.; McPhillips, Martin; Mulhern, Gerry; Wylie, Judith

    2006-01-01

    The present study assessed the compensatory effectiveness of four assistive software tools (speech synthesis, spellchecker, homophone tool, and dictionary) on literacy. Secondary-level students (N = 93) with reading difficulties completed computer-based tests of literacy skills. Training on their respective software followed for those assigned to…

  17. Usage Notes in the Oxford American Dictionary.

    ERIC Educational Resources Information Center

    Berner, R. Thomas

    1981-01-01

    Compares the "Oxford American Dictionary" with the "American Heritage Dictionary." Examines the dictionaries' differences in philosophies of language, introductory essays, and usage notes. Concludes that the "Oxford American Dictionary" is too conservative, paternalistic, and dogmatic for the 1980s. (DMM)

  18. Treatment Choices for Men with Early-Stage Prostate Cancer

    MedlinePlus

  19. Pain Control: Support for People with Cancer

    MedlinePlus

  20. Chemotherapy and You: Support for People with Cancer

    MedlinePlus

  1. Facing Forward Series: Life After Cancer Treatment

    MedlinePlus

  2. Eating Hints: Before, During, and After Cancer Treatment

    MedlinePlus

  3. Taking Time: Support for People with Cancer

    MedlinePlus

  4. Radiation Therapy and You: Support for People with Cancer

    MedlinePlus

  5. Sparse representation of whole-brain fMRI signals for identification of functional networks.

    PubMed

    Lv, Jinglei; Jiang, Xi; Li, Xiang; Zhu, Dajiang; Chen, Hanbo; Zhang, Tuo; Zhang, Shu; Hu, Xintao; Han, Junwei; Huang, Heng; Zhang, Jing; Guo, Lei; Liu, Tianming

    2015-02-01

    There have been several recent studies that used sparse representation for fMRI signal analysis and activation detection based on the assumption that each voxel's fMRI signal is linearly composed of sparse components. Previous studies have employed sparse coding to model functional networks in various modalities and scales. These prior contributions inspired the exploration of whether/how sparse representation can be used to identify functional networks in a voxel-wise way and on the whole brain scale. This paper presents a novel, alternative methodology of identifying multiple functional networks via sparse representation of whole-brain task-based fMRI signals. Our basic idea is that all fMRI signals within the whole brain of one subject are aggregated into a big data matrix, which is then factorized into an over-complete dictionary basis matrix and a reference weight matrix via an effective online dictionary learning algorithm. Our extensive experimental results have shown that this novel methodology can uncover multiple functional networks that can be well characterized and interpreted in spatial, temporal and frequency domains based on current brain science knowledge. Importantly, these well-characterized functional network components are quite reproducible in different brains. In general, our methods offer a novel, effective and unified solution to multiple fMRI data analysis tasks including activation detection, de-activation detection, and functional network identification. Copyright © 2014 Elsevier B.V. All rights reserved.
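
    A minimal sketch of the factorization the abstract describes, assuming a synthetic signal matrix: the timepoints x voxels fMRI matrix is factored by an online dictionary learning algorithm into temporal atoms and sparse spatial loadings, each loading column corresponding to one candidate functional network. Sizes and parameters are placeholders.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(0)
    n_timepoints, n_voxels, n_networks = 200, 5000, 50

    # Hypothetical whole-brain signal matrix S (timepoints x voxels) after the
    # usual preprocessing and per-voxel normalization.
    S = rng.standard_normal((n_timepoints, n_voxels))

    # Factor S ~ D @ A.T, where D holds temporal atoms and A holds sparse
    # spatial loadings.  Each voxel's time series (a column of S) is one sample.
    learner = MiniBatchDictionaryLearning(n_components=n_networks, alpha=1.0,
                                          transform_algorithm="lasso_lars",
                                          random_state=0)
    A = learner.fit_transform(S.T)      # (n_voxels, n_networks) sparse loadings
    D = learner.components_.T           # (n_timepoints, n_networks) temporal atoms

    # Reshaping each column of A back into brain space gives one network map.
    ```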

  6. Functional brain networks reconstruction using group sparsity-regularized learning.

    PubMed

    Zhao, Qinghua; Li, Will X Y; Jiang, Xi; Lv, Jinglei; Lu, Jianfeng; Liu, Tianming

    2018-06-01

    Investigating functional brain networks and patterns using sparse representation of fMRI data has received significant interest in the neuroimaging community. It has been reported that sparse representation is effective in reconstructing concurrent and interactive functional brain networks. To date, most data-driven network reconstruction approaches rarely take into consideration the anatomical structures, which are the substrate of brain function. Furthermore, it has rarely been explored whether structured sparse representation with anatomical guidance could facilitate functional network reconstruction. To address this problem, in this paper, we propose to reconstruct brain networks utilizing structure-guided group sparse regression (S2GSR), in which 116 anatomical regions from the AAL template, as prior knowledge, are employed to guide the network reconstruction when performing sparse representation of whole-brain fMRI data. Specifically, we extract fMRI signals from standard space aligned with the AAL template. Then, by learning a global over-complete dictionary and using the learned dictionary as a set of features (regressors), the group-structured regression employs anatomical structures as group information to regress whole-brain signals. Finally, the decomposition coefficient matrix is mapped back to the brain volume to represent functional brain networks and patterns. We use the publicly available Human Connectome Project (HCP) Q1 dataset as the test bed, and the experimental results indicate that the proposed anatomically guided structured sparse representation is effective in reconstructing concurrent functional brain networks.
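
    The S2GSR code is not reproduced here; the sketch below is a generic group-lasso regression solved by proximal gradient descent (ISTA), where each group would play the role of one anatomical region. The dictionary, signal, and group assignment are synthetic assumptions.

    ```python
    import numpy as np

    def group_lasso_ista(D, y, groups, lam=0.1, n_iter=200):
        """Minimize 0.5*||y - D w||^2 + lam * sum_g sqrt(p_g) * ||w_g||_2 with ISTA.

        `groups[j]` gives the group id (e.g. an anatomical region) of atom j.
        """
        w = np.zeros(D.shape[1])
        step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1 / Lipschitz constant
        for _ in range(n_iter):
            z = w - step * D.T @ (D @ w - y)            # gradient step
            for g in np.unique(groups):                 # group soft-thresholding
                idx = groups == g
                norm_g = np.linalg.norm(z[idx])
                thr = step * lam * np.sqrt(idx.sum())
                w[idx] = 0.0 if norm_g <= thr else (1 - thr / norm_g) * z[idx]
        return w

    # Hypothetical setup: 400 dictionary atoms assigned to 116 region-like groups.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((200, 400))
    y = rng.standard_normal(200)
    groups = rng.integers(0, 116, size=400)
    w = group_lasso_ista(D, y, groups)
    ```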

  7. Evaluation of the Expressiveness of an ICNP-based Nursing Data Dictionary in a Computerized Nursing Record System

    PubMed Central

    Cho, InSook; Park, Hyeoun-Ae

    2006-01-01

    This study evaluated the domain completeness and expressiveness issues of the International Classification for Nursing Practice-based (ICNP) nursing data dictionary (NDD) through its application in an enterprise electronic medical record (EMR) system as a standard vocabulary at a single tertiary hospital in Korea. Data from 2,262 inpatients obtained over a period of 9 weeks (May to July 2003) were extracted from the EMR system for analysis. Among the 530,218 data-input events, 401,190 (75.7%) were entered from the NDD, 20,550 (3.9%) used only free text, and 108,478 (20.4%) used a combination of coded data and free text. A content analysis of the free-text events showed that 80.3% of the expressions could be found in the NDD, whereas 10.9% were context-specific expressions such as direct quotations of patient complaints and responses, and references to the care plan or orders of physicians. A total of 7.8% of the expressions was used for a supplementary purpose such as adding a conjunction or end verb to make an expression appear as natural language. Only 1.0% of the expressions were identified as not being covered by the NDD. This evaluation study demonstrates that the ICNP-based NDD has sufficient power to cover most of the expressions used in a clinical nursing setting. PMID:16622170

  8. Highly undersampled MR image reconstruction using an improved dual-dictionary learning method with self-adaptive dictionaries.

    PubMed

    Li, Jiansen; Song, Ying; Zhu, Zhen; Zhao, Jun

    2017-05-01

    Dual-dictionary learning (Dual-DL) method utilizes both a low-resolution dictionary and a high-resolution dictionary, which are co-trained for sparse coding and image updating, respectively. It can effectively exploit a priori knowledge regarding the typical structures, specific features, and local details of training set images. The prior knowledge helps to improve the reconstruction quality greatly. This method has been successfully applied in magnetic resonance (MR) image reconstruction. However, it relies heavily on the training sets, and dictionaries are fixed and nonadaptive. In this research, we improve Dual-DL by using self-adaptive dictionaries. The low- and high-resolution dictionaries are updated correspondingly along with the image updating stage to ensure their self-adaptivity. The updated dictionaries incorporate both the prior information of the training sets and the test image directly. Both dictionaries feature improved adaptability. Experimental results demonstrate that the proposed method can efficiently and significantly improve the quality and robustness of MR image reconstruction.

  9. Seismic data interpolation and denoising by learning a tensor tight frame

    NASA Astrophysics Data System (ADS)

    Liu, Lina; Plonka, Gerlind; Ma, Jianwei

    2017-10-01

    Seismic data interpolation and denoising plays a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of data in a tensor-product way. It takes advantage of the structure contained in all different modes (dimensions) simultaneously. In order to overcome the limitations of a usual tensor-product approach we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem. It includes two main steps. In the first step, a hard thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each different mode. The dictionary that is learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection onto convex sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it essentially more efficient.
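
    A minimal sketch of the coefficient-update half of such a scheme, assuming the tight frame is already available as a matrix W with W @ W.T = I: analyze the patches, hard-threshold the frame coefficients, and synthesize. The frame, threshold, and data are placeholders; the frame-update step of the paper is omitted.

    ```python
    import numpy as np

    def hard_threshold_update(W, patches, thr):
        """One sparse-approximation step for a tight frame W (W @ W.T = I).

        `patches` holds one vectorized patch per column; coefficients whose
        magnitude falls below `thr` are zeroed out (hard thresholding).
        """
        coeffs = W @ patches                    # analysis: frame coefficients
        coeffs[np.abs(coeffs) < thr] = 0.0      # hard thresholding
        return W.T @ coeffs                     # synthesis: denoised patches

    # Placeholder tight frame: any orthonormal matrix satisfies W @ W.T = I.
    rng = np.random.default_rng(0)
    W, _ = np.linalg.qr(rng.standard_normal((64, 64)))
    patches = rng.standard_normal((64, 1000))
    denoised = hard_threshold_update(W, patches, thr=1.5)
    ```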

  10. Dictionary learning based noisy image super-resolution via distance penalty weight model

    PubMed Central

    Han, Yulan; Zhao, Yongping; Wang, Qisong

    2017-01-01

    In this study, we address the problem of noisy image super-resolution. Noisy low resolution (LR) images are often obtained in applications, while most existing algorithms assume that the LR image is noise-free. To address this situation, we present an algorithm for noisy image super-resolution which can achieve image super-resolution and denoising simultaneously. In the training stage of our method, the LR example images are noise-free. For different input LR images, even if the noise variance varies, the dictionary pair does not need to be retrained. For each input LR image patch, the corresponding high resolution (HR) image patch is reconstructed through a weighted average of similar HR example patches. To reduce computational cost, we use the atoms of the learned sparse dictionary as the examples instead of the original example patches. We propose a distance penalty model for calculating the weights, which can also perform a second selection among similar atoms. Moreover, LR example patches with the mean pixel value removed, rather than just their gradient features, are also used to learn the dictionary. Based on this, we can reconstruct an initial estimated HR image and a denoised LR image. Combined with iterative back projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare it with previously reported algorithms. Experimental results show that our proposed method achieves better noise robustness. PMID:28759633

  11. What Dictionary to Use? A Closer Look at the "Oxford Advanced Learner's Dictionary," the "Longman Dictionary of Contemporary English" and the "Longman Lexicon of Contemporary English."

    ERIC Educational Resources Information Center

    Shaw, A. M.

    1983-01-01

    Three dictionaries are compared for their usefulness to teachers of English as a foreign language, teachers in training, students, and other users of English as a foreign language. The issue of monolingual versus bilingual dictionary format is discussed, and a previous analysis of the two bilingual dictionaries is summarized. Pronunciation…

  12. Clostridium difficile Infection

    MedlinePlus

    ... These drugs can make your infection worse. Certain probiotics, or “good bacteria,” may help prevent repeat C. ...

  13. DICTIONARIES AND LANGUAGE CHANGE.

    ERIC Educational Resources Information Center

    Pooley, Robert C.

    Two views of a dictionary's purpose came into sharp conflict upon the publication of Webster's "Third New International Unabridged Dictionary." The first view is that a dictionary is a reference book on language etiquette, an authority for maintaining the purity of the English language. The second is that a dictionary is a scientific…

  14. Do Dictionaries Help Students Write?

    ERIC Educational Resources Information Center

    Nesi, Hilary

    Examples are given of real lexical errors made by learner writers, and consideration is given to the way in which three learners' dictionaries could deal with the lexical items that were misused. The dictionaries were the "Oxford Advanced Learner's Dictionary," the "Longman Dictionary of Contemporary English," and the "Chambers Universal Learners'…

  15. Information on Quantifiers and Argument Structure in English Learner's Dictionaries.

    ERIC Educational Resources Information Center

    Lee, Thomas Hun-tak

    1993-01-01

    Lexicographers have been arguing for the inclusion of abstract and complex grammatical information in dictionaries. This paper examines the extent to which information about quantifiers and the argument structure of verbs is encoded in English learner's dictionaries. The Oxford Advanced Learner's Dictionary (1989), the Longman Dictionary of…

  16. Students' Understanding of Dictionary Entries: A Study with Respect to Four Learners' Dictionaries.

    ERIC Educational Resources Information Center

    Jana, Abhra; Amritavalli, Vijaya; Amritavalli, R.

    2003-01-01

    Investigates the effects of definitional information in the form of dictionary entries, on second language learners' vocabulary learning in an instructed setting. Indian students (Native Hindi speakers) of English received monolingual English dictionary entries of five previously unknown words from four different learner's dictionaries. Results…

  17. Seismic classification through sparse filter dictionaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hickmann, Kyle Scott; Srinivasan, Gowri

    We tackle a multi-label classification problem involving the relation between acoustic-profile features and the measured seismogram. To isolate components of the seismograms unique to each class of acoustic profile, we build dictionaries of convolutional filters. The convolutional-filter dictionaries for the individual classes are then combined into a large dictionary for the entire seismogram set. A given seismogram is classified by computing its representation in the large dictionary and then comparing reconstruction accuracy with this representation using each of the sub-dictionaries. The sub-dictionary with the minimal reconstruction error identifies the seismogram class.
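
    A minimal sketch of the classification rule described above, with synthetic sub-dictionaries and scikit-learn's OMP encoder standing in for the convolutional filters of the report: sparse-code the signal over the stacked dictionary, then assign the class whose block of atoms yields the smallest reconstruction error.

    ```python
    import numpy as np
    from sklearn.decomposition import sparse_encode

    rng = np.random.default_rng(0)
    n_classes, atoms_per_class, dim = 4, 50, 128

    # Hypothetical per-class sub-dictionaries stacked into one large dictionary.
    D = np.vstack([rng.standard_normal((atoms_per_class, dim))
                   for _ in range(n_classes)])
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    def classify(signal):
        code = sparse_encode(signal[None, :], D, algorithm="omp",
                             n_nonzero_coefs=10)[0]
        errors = []
        for c in range(n_classes):
            masked = np.zeros_like(code)
            block = slice(c * atoms_per_class, (c + 1) * atoms_per_class)
            masked[block] = code[block]               # keep only class-c atoms
            errors.append(np.linalg.norm(signal - masked @ D))
        return int(np.argmin(errors))                 # minimal reconstruction error

    label = classify(rng.standard_normal(dim))
    ```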

  18. Adaptive structured dictionary learning for image fusion based on group-sparse-representation

    NASA Astrophysics Data System (ADS)

    Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei

    2018-04-01

    Dictionary learning is the key process of sparse representation which is one of the most widely used image representation theories in image fusion. The existing dictionary learning method does not use the group structure information and the sparse coefficients well. In this paper, we propose a new adaptive structured dictionary learning algorithm and a l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. In the dictionary learning algorithm, we do not need prior knowledge about any group structure of the dictionary. By using the characteristics of the dictionary in expressing the signal, our algorithm can automatically find the desired potential structure information that hidden in the dictionary. The fusion rule takes the physical meaning of the group structure dictionary, and makes activity-level judgement on the structure information when the images are being merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that, the dictionary learning algorithm and the fusion rule both outperform others in terms of several objective evaluation metrics.
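
    A hedged sketch of an l1-norm maximum fusion rule for two sparse-coded source images: for each patch, the coefficient vector with the larger l1 norm (the higher activity level) is kept. Patch handling, the dictionary, and the group structure used in the paper are simplified away; the codes below are synthetic.

    ```python
    import numpy as np

    def l1_max_fusion(codes_a, codes_b):
        """Fuse two sets of sparse codes patch by patch (shape: n_patches x n_atoms)."""
        act_a = np.abs(codes_a).sum(axis=1)          # l1 activity per patch, image A
        act_b = np.abs(codes_b).sum(axis=1)          # l1 activity per patch, image B
        return np.where((act_a >= act_b)[:, None], codes_a, codes_b)

    # Hypothetical sparse codes for corresponding patches of two source images.
    rng = np.random.default_rng(0)
    codes_a = rng.standard_normal((500, 256)) * (rng.random((500, 256)) < 0.05)
    codes_b = rng.standard_normal((500, 256)) * (rng.random((500, 256)) < 0.05)
    fused = l1_max_fusion(codes_a, codes_b)
    # Multiplying `fused` by the dictionary and reassembling the patches would
    # then give the fused image.
    ```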

  19. Evaluation of the sparse coding super-resolution method for improving image quality of up-sampled images in computed tomography

    NASA Astrophysics Data System (ADS)

    Ota, Junko; Umehara, Kensuke; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki

    2017-02-01

    As the capability of high-resolution displays grows, high-resolution images are often required in Computed Tomography (CT). However, acquiring high-resolution images takes a higher radiation dose and a longer scanning time. In this study, we applied the Sparse-coding-based Super-Resolution (ScSR) method to generate high-resolution images without increasing the radiation dose. We prepared an over-complete dictionary that learned the mapping between low- and high-resolution patches and sought a sparse representation of each patch of the low-resolution input. These coefficients were used to generate the high-resolution output. For evaluation, 44 CT cases were used as the test dataset. We up-sampled images by a factor of 2 or 4 and compared the image quality of the ScSR scheme with that of bilinear and bicubic interpolation, the traditional interpolation schemes. We also compared the image quality obtained with three learning datasets: a total of 45 CT images, 91 non-medical images, and 93 chest radiographs were used for dictionary preparation, respectively. The image quality was evaluated by measuring the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The differences in PSNR and SSIM between the ScSR method and the interpolation methods were statistically significant. Visual assessment confirmed that the ScSR method generated sharp high-resolution images, whereas the conventional interpolation methods generated over-smoothed images. Comparing the three training datasets, there was no significant difference between the CT, chest radiograph, and non-medical datasets. These results suggest that ScSR provides a robust approach for up-sampling CT images and yields substantially higher image quality for the extended images.
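
    A minimal sketch of the evaluation step only (not of ScSR itself): compare an up-sampled image against its full-resolution reference with PSNR and SSIM from scikit-image. The image here is a random placeholder and the 2x pipeline is simulated with plain interpolation.

    ```python
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity
    from skimage.transform import resize

    reference = np.random.default_rng(0).random((256, 256))   # placeholder HR image

    # Simulate a 2x up-sampling pipeline: downsample, then interpolate back up.
    low_res = resize(reference, (128, 128), anti_aliasing=True)
    upsampled = resize(low_res, (256, 256), order=3)           # cubic interpolation

    psnr = peak_signal_noise_ratio(reference, upsampled, data_range=1.0)
    ssim = structural_similarity(reference, upsampled, data_range=1.0)
    print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
    ```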

  20. Psoriasis image representation using patch-based dictionary learning for erythema severity scoring.

    PubMed

    George, Yasmeen; Aldeen, Mohammad; Garnavi, Rahil

    2018-06-01

    Psoriasis is a chronic skin disease which can be life-threatening. Accurate severity scoring helps dermatologists to decide on the treatment. In this paper, we present a semi-supervised computer-aided system for automatic erythema severity scoring in psoriasis images. Firstly, the unsupervised stage includes a novel image representation method. We construct a dictionary, which is then used in the sparse representation for local feature extraction. To acquire the final image representation vector, an aggregation method is exploited over the local features. Secondly, the supervised phase is where various multi-class machine learning (ML) classifiers are trained for erythema severity scoring. Finally, we compare the proposed system with two popular unsupervised feature extractor methods, namely: bag of visual words model (BoVWs) and AlexNet pretrained model. Root mean square error (RMSE) and F1 score are used as performance measures for the learned dictionaries and the trained ML models, respectively. A psoriasis image set consisting of 676 images, is used in this study. Experimental results demonstrate that the use of the proposed procedure can provide a setup where erythema scoring is accurate and consistent. Also, it is revealed that dictionaries with large number of atoms and small patch sizes yield the best representative erythema severity features. Further, random forest (RF) outperforms other classifiers with F1 score 0.71, followed by support vector machine (SVM) and boosting with 0.66 and 0.64 scores, respectively. Furthermore, the conducted comparative studies confirm the effectiveness of the proposed approach with improvement of 9% and 12% over BoVWs and AlexNet based features, respectively. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
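
    A hedged sketch of the overall pipeline shape described above (unsupervised patch-dictionary features followed by a supervised classifier), with synthetic images and severity labels, and mean-pooled sparse codes standing in for the paper's aggregation method. All sizes and parameters are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.image import extract_patches_2d
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    images = rng.random((60, 64, 64))            # placeholder grey-level images
    scores = rng.integers(0, 4, size=60)         # placeholder severity scores 0-3

    def patches_of(im):
        return extract_patches_2d(im, (8, 8), max_patches=100,
                                  random_state=0).reshape(-1, 64)

    # Unsupervised stage: learn a patch dictionary from all images.
    dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
    dico.fit(np.vstack([patches_of(im) for im in images]))

    # Each image is represented by the mean of its patch-level sparse codes.
    X = np.array([dico.transform(patches_of(im)).mean(axis=0) for im in images])

    # Supervised stage: train a classifier on the aggregated representations.
    X_tr, X_te, y_tr, y_te = train_test_split(X, scores, test_size=0.3,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("macro F1 =", f1_score(y_te, clf.predict(X_te), average="macro"))
    ```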

  1. Revealing topics and their evolution in biomedical literature using Bio-DTM: a case study of ginseng.

    PubMed

    Chen, Qian; Ai, Ni; Liao, Jie; Shao, Xin; Liu, Yufeng; Fan, Xiaohui

    2017-01-01

    Valuable scientific results in biomedicine are abundant, but they are widely scattered across the literature. Topic modeling enables researchers to discover themes from an unstructured collection of documents without any prior annotations or labels. In this paper, taking ginseng as an example, a biological dynamic topic model (Bio-DTM) is proposed to conduct a retrospective study and interpret the temporal evolution of ginseng research. The Bio-DTM system mainly includes four components: document pre-processing, bio-dictionary construction, dynamic topic modeling, and topic analysis and visualization. Scientific articles pertaining to ginseng were retrieved from PubMed through text mining. The bio-dictionary integrates the MedTerms medical dictionary, the second edition of the side effect resource, a dictionary of biology, and the HGNC database of human gene names. A dynamic topic model, a text mining technique, was used to capture the development trends of topics in sequentially collected documents. Besides the content of the topics, the evolution of topics over time was visualized using ThemeRiver. Topic 9 showed that ginseng was used in dietary supplements and complementary and integrative health practices and has become very popular since the early twentieth century. Topic 6 indicated that the cultivation of ginseng is a major area of research, and that the symbiosis and allelopathy of ginseng became a research hotspot in 2007. In addition, the Bio-DTM model gave insight into the main pharmacologic effects of ginseng, such as anti-metabolic-disorder, cardioprotective, anti-cancer, hepatoprotective, anti-thrombotic, and neuroprotective effects. The Bio-DTM model not only discovers what ginseng research involves but also displays how these topics evolve over time. This approach can be applied in the biomedical field to conduct retrospective studies and guide future research.

  2. Dictionary of Microscopy

    NASA Astrophysics Data System (ADS)

    Heath, Julian

    2005-10-01

    The past decade has seen huge advances in the application of microscopy in all areas of science. This welcome development in microscopy has been paralleled by an expansion of the vocabulary of technical terms used in microscopy: terms have been coined for new instruments and techniques and, as microscopes reach even higher resolution, the use of terms that relate to the optical and physical principles underpinning microscopy is now commonplace. The Dictionary of Microscopy was compiled to meet this challenge and provides concise definitions of over 2,500 terms used in the fields of light microscopy, electron microscopy, scanning probe microscopy, x-ray microscopy and related techniques. Written by Dr Julian P. Heath, Editor of Microscopy and Analysis, the dictionary is intended to provide easy navigation through the microscopy terminology and to be a first point of reference for definitions of new and established terms. The Dictionary of Microscopy is an essential, accessible resource for: students who are new to the field and are learning about microscopes; equipment purchasers who want an explanation of the terms used in manufacturers' literature; scientists who are considering using a new microscopical technique; experienced microscopists, as an aide mémoire or quick source of reference; and librarians, the press and marketing personnel who require definitions for technical reports.

  3. Coupled dictionary learning for joint MR image restoration and segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Xuesong; Fan, Yong

    2018-03-01

    To achieve better segmentation of MR images, image restoration is typically used as a preprocessing step, especially for low-quality MR images. Recent studies have demonstrated that dictionary learning methods could achieve promising performance for both image restoration and image segmentation. These methods typically learn paired dictionaries of image patches from different sources and use a common sparse representation to characterize paired image patches, such as low-quality image patches and their corresponding high quality counterparts for the image restoration, and image patches and their corresponding segmentation labels for the image segmentation. Since learning these dictionaries jointly in a unified framework may improve the image restoration and segmentation simultaneously, we propose a coupled dictionary learning method to concurrently learn dictionaries for joint image restoration and image segmentation based on sparse representations in a multi-atlas image segmentation framework. Particularly, three dictionaries, including a dictionary of low quality image patches, a dictionary of high quality image patches, and a dictionary of segmentation label patches, are learned in a unified framework so that the learned dictionaries of image restoration and segmentation can benefit each other. Our method has been evaluated for segmenting the hippocampus in MR T1 images collected with scanners of different magnetic field strengths. The experimental results have demonstrated that our method achieved better image restoration and segmentation performance than state of the art dictionary learning and sparse representation based image restoration and image segmentation methods.

  4. Occupational Opportunities for the Physically Handicapped. Part B. Manual.

    ERIC Educational Resources Information Center

    Uthe, Elaine F.

    This manual presents the master lists of 206 job titles of 167 different Dictionary of Occupational Titles (DOT) code numbers which were held by physically handicapped graduates/completers of vocational programs as determined by a business and industry survey and graduate followup. (The project itself is reported in CE 026 163; survey and followup…

  5. Defining datasets and creating data dictionaries for quality improvement and research in chronic disease using routinely collected data: an ontology-driven approach.

    PubMed

    de Lusignan, Simon; Liaw, Siaw-Teng; Michalakidis, Georgios; Jones, Simon

    2011-01-01

    The burden of chronic disease is increasing, and research and quality improvement will be less effective if case finding strategies are suboptimal. To describe an ontology-driven approach to case finding in chronic disease and how this approach can be used to create a data dictionary and make the codes used in case finding transparent. A five-step process: (1) identifying a reference coding system or terminology; (2) using an ontology-driven approach to identify cases; (3) developing metadata that can be used to identify the extracted data; (4) mapping the extracted data to the reference terminology; and (5) creating the data dictionary. Hypertension is presented as an exemplar. A patient with hypertension can be represented by a range of codes including diagnostic, history and administrative. Metadata can link the coding system and data extraction queries to the correct data mapping and translation tool, which then maps it to the equivalent code in the reference terminology. The code extracted, the term, its domain and subdomain, and the name of the data extraction query can then be automatically grouped and published online as a readily searchable data dictionary. An exemplar online is: www.clininf.eu/qickd-data-dictionary.html Adopting an ontology-driven approach to case finding could improve the quality of disease registers and of research based on routine data. It would offer considerable advantages over using limited datasets to define cases. This approach should be considered by those involved in research and quality improvement projects which utilise routine data.
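
    A hedged illustration of steps (3) to (5) of the approach: attach metadata to a named extraction query, map locally extracted codes onto a reference terminology, and publish the result as a searchable data dictionary. Every code, term, and file name below is an invented placeholder, not real terminology content.

    ```python
    import csv

    # Hypothetical mapping of locally extracted codes to a reference terminology.
    reference_map = {
        "H001": {"ref_code": "REF-0001", "term": "Hypertensive disorder",
                 "domain": "diagnosis"},
        "H002": {"ref_code": "REF-0002", "term": "History of hypertension",
                 "domain": "history"},
    }

    # Steps 3-4: metadata naming the extraction query, plus the mapped codes.
    query_name = "hypertension_case_finding_v1"
    rows = [{"query": query_name, "local_code": local, **mapped}
            for local, mapped in reference_map.items()]

    # Step 5: publish the data dictionary (here as a CSV that could be put online).
    with open("data_dictionary.csv", "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["query", "local_code", "ref_code",
                                                "term", "domain"])
        writer.writeheader()
        writer.writerows(rows)
    ```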

  6. DOCU-TEXT: A tool before the data dictionary

    NASA Technical Reports Server (NTRS)

    Carter, B.

    1983-01-01

    DOCU-TEXT, a proprietary software package that aids in the production of documentation for a data processing organization and can be installed and operated only on IBM computers, is discussed. In organizing information that ultimately will reside in a data dictionary, DOCU-TEXT proved to be a useful documentation tool in extracting information from existing production jobs, procedure libraries, system catalogs, control data sets and related files. DOCU-TEXT reads these files to derive data that is useful at the system level. The output of DOCU-TEXT is a series of user selectable reports. These reports can reflect the interactions within a single job stream, a complete system, or all the systems in an installation. Any single report, or group of reports, can be generated in an independent documentation pass.

  7. Medical and dermatology dictionaries: an examination of unstructured definitions and a proposal for the future.

    PubMed

    DeVries, David Todd; Papier, Art; Byrnes, Jennifer; Goldsmith, Lowell A

    2004-01-01

    Medical dictionaries serve to describe and clarify the term set used by medical professionals. In this commentary, we analyze a representative set of skin disease definitions from 2 prominent medical dictionaries, Stedman's Medical Dictionary and Dorland's Illustrated Medical Dictionary. We find that there is an apparent lack of stylistic standards with regard to content and form. We advocate a new standard form for the definition of medical terminology, a standard to complement the easy-to-read yet unstructured style of the traditional dictionary entry. This new form offers a reproducible structure, paving the way for the development of a computer readable "dictionary" of medical terminology. Such a dictionary offers immediate update capability and a fundamental improvement in the ability to search for relationships between terms.

  8. Data-Dictionary-Editing Program

    NASA Technical Reports Server (NTRS)

    Cumming, A. P.

    1989-01-01

    Access to data-dictionary relations and attributes made more convenient. Data Dictionary Editor (DDE) application program provides more convenient read/write access to data-dictionary table ("descriptions table") via data screen using SMARTQUERY function keys. Provides three main advantages: (1) User works with table names and field names rather than with table numbers and field numbers, (2) Provides online access to definitions of data-dictionary keys, and (3) Provides displayed summary list that shows, for each datum, which data-dictionary entries currently exist for any specific relation or attribute. Computer program developed to give developers of data bases more convenient access to the OMNIBASE VAX/IDM data-dictionary relations and attributes.

  9. [Physician Emile Littré, French translator and publisher of Hippocrates].

    PubMed

    Frøland, Anders

    2006-01-01

    Today, the French author and scholar Emile Littré (1801-1881) is best known as the founder of a widely used dictionary of the French language. He was one of the most diligent French authors of the nineteenth century and had a huge knowledge of modern and ancient languages, medicine, science, history, and philosophy. Apart from the dictionary, his most impressive work was the edition and translation of the complete collection of the Hippocratic writings (1839-61). The translation was meant to serve as a textbook for French doctors, but the rapid development of medicine made it obsolete in that respect before it was completed. Instead it is now a philological and historical monument. Littré also published a large number of books and articles on positivism, history, politics, philology, and medicine. He was politically active as a supporter of the French republic during the periods of monarchy and was elected a lifelong senator of the French National Assembly after the 1870-71 war. He was elected a member of the French Academy in spite of intense opposition from the Roman Catholic Church. He was an atheist, but was baptised on his deathbed by his wife. His edition of the Hippocratic writings still remains the only complete collection in both Greek and a modern language.

  10. A Robust Shape Reconstruction Method for Facial Feature Point Detection.

    PubMed

    Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi

    2017-01-01

    Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for the face alignment problem. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.

  11. Blind image quality assessment via probabilistic latent semantic analysis.

    PubMed

    Yang, Xichen; Sun, Quansen; Wang, Tianshu

    2016-01-01

    We propose a blind image quality assessment that is highly unsupervised and training free. The new method is based on the hypothesis that the effect caused by distortion can be expressed by certain latent characteristics. Combined with probabilistic latent semantic analysis, the latent characteristics can be discovered by applying a topic model over a visual word dictionary. Four distortion-affected features are extracted to form the visual words in the dictionary: (1) the block-based local histogram; (2) the block-based local mean value; (3) the mean value of contrast within a block; (4) the variance of contrast within a block. Based on the dictionary, the latent topics in the images can be discovered. The discrepancy between the frequency of the topics in an unfamiliar image and a large number of pristine images is applied to measure the image quality. Experimental results for four open databases show that the newly proposed method correlates well with human subjective judgments of diversely distorted images.
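
    A minimal sketch of the four block-level features listed above (local histogram, local mean, mean of contrast, variance of contrast), computed over non-overlapping blocks of a synthetic image; how the features are quantized into visual words and fed to the topic model is omitted.

    ```python
    import numpy as np

    def block_features(image, block=16, n_bins=8):
        """Per-block features: local histogram, mean, contrast mean, contrast variance."""
        feats = []
        h, w = image.shape
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                b = image[i:i + block, j:j + block]
                hist, _ = np.histogram(b, bins=n_bins, range=(0.0, 1.0), density=True)
                gy, gx = np.gradient(b)
                contrast = np.hypot(gx, gy)           # simple local-contrast proxy
                feats.append(np.concatenate(
                    [hist, [b.mean(), contrast.mean(), contrast.var()]]))
        return np.array(feats)

    features = block_features(np.random.default_rng(0).random((256, 256)))
    ```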

  12. A sparse representation of the pathologist's interaction with whole slide images to improve the assigned relevance of regions of interest

    NASA Astrophysics Data System (ADS)

    Santiago, Daniel; Corredor, Germán.; Romero, Eduardo

    2017-11-01

    During a diagnosis task, a pathologist looks over a Whole Slide Image (WSI), aiming to find relevant pathological patterns. However, a virtual microscope captures not only these structures but also other cellular patterns with different or no diagnostic meaning. Annotation of these images depends on manual delineation, which in practice becomes a hard task. This article contributes a new method for detecting relevant regions in WSIs using the routine navigations in a virtual microscope. This method constructs a sparse representation or dictionary of each navigation path and determines the hidden relevance by maximizing the incoherence between several paths. The resulting dictionaries are then projected onto each other and relevant information is set to the dictionary atoms whose similarity is higher than a custom threshold. Evaluation was performed with 6 pathological images segmented from a skin biopsy already diagnosed with basal cell carcinoma (BCC). Results show that our proposal outperforms the baseline by more than 20%.

  13. Robust multi-atlas label propagation by deep sparse representation

    PubMed Central

    Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong

    2016-01-01

    Recently, multi-atlas patch-based label fusion has achieved many successes in medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing labels of atlas image patches with similar anatomical structures. However, such assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced such that the majority patterns can dominate label fusion result over other minority patterns. The violation of the above basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first consider forming label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information in different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. However, the representation of target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns and also using all residual patterns across groups collaboratively to overcome the issue that some groups might be absent of certain variation patterns presented in the target image patch. Promising segmentation results have been achieved in labeling hippocampus on ADNI dataset, as well as basal ganglia and brainstem structures, compared to other counterpart label fusion methods. PMID:27942077

  14. Robust multi-atlas label propagation by deep sparse representation.

    PubMed

    Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang; Liang, Peipeng; Shi, Yonghong; Shen, Dinggang; Wu, Guorong

    2017-03-01

    Recently, multi-atlas patch-based label fusion has achieved many successes in medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing labels of atlas image patches with similar anatomical structures. However, such assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced such that the majority patterns can dominate label fusion result over other minority patterns. The violation of the above basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first consider forming label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer ( label-specific dictionaries ) consists of groups of representative atlas patches and the subsequent layers ( residual dictionaries ) hierarchically encode the patchwise residual information in different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. However, the representation of target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns and also using all residual patterns across groups collaboratively to overcome the issue that some groups might be absent of certain variation patterns presented in the target image patch. Promising segmentation results have been achieved in labeling hippocampus on ADNI dataset, as well as basal ganglia and brainstem structures, compared to other counterpart label fusion methods.

  15. Emo, love and god: making sense of Urban Dictionary, a crowd-sourced online dictionary.

    PubMed

    Nguyen, Dong; McGillivray, Barbara; Yasseri, Taha

    2018-05-01

    The Internet facilitates large-scale collaborative projects and the emergence of Web 2.0 platforms, where producers and consumers of content unify, has drastically changed the information market. On the one hand, the promise of the 'wisdom of the crowd' has inspired successful projects such as Wikipedia, which has become the primary source of crowd-based information in many languages. On the other hand, the decentralized and often unmonitored environment of such projects may make them susceptible to low-quality content. In this work, we focus on Urban Dictionary, a crowd-sourced online dictionary. We combine computational methods with qualitative annotation and shed light on the overall features of Urban Dictionary in terms of growth, coverage and types of content. We measure a high presence of opinion-focused entries, as opposed to the meaning-focused entries that we expect from traditional dictionaries. Furthermore, Urban Dictionary covers many informal, unfamiliar words as well as proper nouns. Urban Dictionary also contains offensive content, but highly offensive content tends to receive lower scores through the dictionary's voting system. The low threshold to include new material in Urban Dictionary enables quick recording of new words and new meanings, but the resulting heterogeneous content can pose challenges in using Urban Dictionary as a source to study language innovation.

  16. A Study of Comparatively Low Achievement Students' Bilingualized Dictionary Use and Their English Learning

    ERIC Educational Resources Information Center

    Chen, Szu-An

    2016-01-01

    This study investigates the bilingualized dictionary use of Taiwanese university students. It aims to examine EFL learners' overall dictionary use behavior, their perspectives on book dictionaries, and the necessity of advance guidance in using dictionaries. Data were collected through questionnaires and analyzed with SPSS 15.0. Findings indicate…

  17. Using Different Types of Dictionaries for Improving EFL Reading Comprehension and Vocabulary Learning

    ERIC Educational Resources Information Center

    Alharbi, Majed A.

    2016-01-01

    This study investigated the effects of monolingual book dictionaries, pop-up dictionaries, and type-in dictionaries on improving reading comprehension and vocabulary learning in an EFL program. An experimental design involving four groups and a post-test was chosen for the experiment: (1) pop-up dictionary (experimental group 1); (2) type-in…

  18. Students Working with an English Learners' Dictionary on CD-ROM.

    ERIC Educational Resources Information Center

    Winkler, Birgit

    This paper examines the growing literature on pedagogical lexicography and the increasing focus on how well learners use the dictionary in second language learning. Dictionaries are becoming more user-friendly. This study used a writing task to reveal new insights into how students use a CD-ROM dictionary. It found a lack of dictionary-using…

  19. The Effects of Dictionary Use on the Vocabulary Learning Strategies Used by Language Learners of Spanish.

    ERIC Educational Resources Information Center

    Hsien-jen, Chin

    This study investigated the effects of dictionary use on the vocabulary learning strategies used by intermediate college-level Spanish learners to understand new vocabulary items in a reading test. Participants were randomly assigned to one of three groups: control (without a dictionary), bilingual dictionary (using a Spanish-English dictionary),…

  20. Resilience - A Concept

    DTIC Science & Technology

    2016-04-05

    dictionary ]. Retrieved from http://www.investopedia.com/terms/b/blackbox.asp Bodeau, D., Brtis, J., Graubart, R., & Salwen, J. (2013). Resiliency...techniques for systems-of-systems (Report No. 13-3513). Bedford, MA: The MITRE Corporation. Confidence, (n.d.). In Oxford dictionaries [Online dictionary ...Acquisition, Technology and Logistics. Holistic Strategy Approach. (n.d.). In BusinessDictionary.com [Online business dictionary ]. Retrieved from http

  1. Improving the Incoherence of a Learned Dictionary via Rank Shrinkage.

    PubMed

    Ubaru, Shashanka; Seghouane, Abd-Krim; Saad, Yousef

    2017-01-01

    This letter considers the problem of dictionary learning for sparse signal representation whose atoms have low mutual coherence. To learn such dictionaries, at each step, we first update the dictionary using the method of optimal directions (MOD) and then apply a dictionary rank shrinkage step to decrease its mutual coherence. In the rank shrinkage step, we first compute a rank-1 decomposition of the column-normalized least squares estimate of the dictionary obtained from the MOD step. We then shrink the rank of this learned dictionary by transforming the problem of reducing the rank to a nonnegative garrotte estimation problem and solving it using a path-wise coordinate descent approach. We establish theoretical results showing that the proposed rank shrinkage step reduces the coherence of the dictionary, which is further validated by experimental results. Numerical experiments illustrating the performance of the proposed algorithm in comparison to various other well-known dictionary learning algorithms are also presented.
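
    As a rough illustration of the quantities involved, the sketch below implements the MOD dictionary update and the mutual-coherence measure the letter aims to reduce; the rank-shrinkage/garrotte step itself is only indicated in a comment. All data shapes and values are toy assumptions.

```python
import numpy as np

def mod_update(Y, X):
    """Method-of-optimal-directions dictionary update:
    D = Y X^T (X X^T)^{-1}, followed by column normalization."""
    D = Y @ X.T @ np.linalg.pinv(X @ X.T)
    return D / np.linalg.norm(D, axis=0, keepdims=True)

def mutual_coherence(D):
    """Largest absolute inner product between distinct normalized atoms."""
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

# toy data: signals Y with a current sparse-code estimate X
rng = np.random.default_rng(1)
Y = rng.normal(size=(30, 200))      # 30-dim signals, 200 examples
X = rng.normal(size=(50, 200))      # codes for a 50-atom dictionary
X[np.abs(X) < 1.0] = 0.0            # make the codes sparse
D = mod_update(Y, X)
print("coherence after MOD step:", round(float(mutual_coherence(D)), 3))
# The letter follows this with a rank-shrinkage step (a nonnegative garrotte
# problem solved by coordinate descent) to push this coherence value down.
```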

  2. The Making of the "Oxford English Dictionary."

    ERIC Educational Resources Information Center

    Winchester, Simon

    2003-01-01

    Summarizes remarks made to open the Gallaudet University conference on Dictionaries and the Standardization of languages. It concerns the making of what is arguably the world's greatest dictionary, "The Oxford English Dictionary." (VWL)

  3. A novel structured dictionary for fast processing of 3D medical images, with application to computed tomography restoration and denoising

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-03-01

    Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage is very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.

  4. Antibiotics: When They Can and Can't Help

    MedlinePlus

  5. Polarimetric SAR image classification based on discriminative dictionary learning model

    NASA Astrophysics Data System (ADS)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learned overcomplete dictionaries have shown great potential for solving it. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and thus degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model to enhance the discrimination of the dictionary. The dictionary learned by the proposed model is more discriminative and well suited to PolSAR classification.

  6. The ABCs of Data Dictionaries

    ERIC Educational Resources Information Center

    Gould, Tate; Nicholas, Amy; Blandford, William; Ruggiero, Tony; Peters, Mary; Thayer, Sara

    2014-01-01

    This overview of the basic components of a data dictionary is designed to educate and inform IDEA Part C and Part B 619 state staff about the purpose and benefits of having up-to-date data dictionaries for their data systems. This report discusses the following topics: (1) What Is a Data Dictionary?; (2) Why Is a Data Dictionary Needed and How Can…

  7. Evaluating Online Dictionaries From Faculty Prospective: A Case Study Performed On English Faculty Members At King Saud University--Wadi Aldawaser Branch

    ERIC Educational Resources Information Center

    Abouserie, Hossam Eldin Mohamed Refaat

    2010-01-01

    The purpose of this study was to evaluate online dictionaries from a faculty perspective. The study tried to obtain in-depth information about the various forms of dictionaries the faculty used; their degree of awareness of and access to online dictionaries; the types of online dictionaries accessed; the basic features of the information provided; and the major benefits gained…

  8. Psychological Type and Sex Differences among Church Leaders in the United Kingdom

    ERIC Educational Resources Information Center

    Craig, Charlotte L.; Francis, Leslie J.; Robbins, Mandy

    2004-01-01

    A sample of 135 female and 164 male church leaders of mixed denominations completed the Francis Psychological Type Scales. The female church leaders demonstrated clear preferences for extraversion over introversion, for sensing over intuition, for feeling over thinking, and for judging over perceiving. The male church leaders demonstrated clear…

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Q; Han, H; Xing, L

    Purpose: Dictionary learning based methods have attracted more and more attention in low-dose CT due to their superior performance in suppressing noise and preserving structural details. Considering that the structures and noise vary from region to region within one imaging object, we propose a region-specific dictionary learning method to improve low-dose CT reconstruction. Methods: A set of normal-dose images was used for dictionary learning. Segmentations were performed on these images so that training patch sets corresponding to different regions could be extracted. After that, region-specific dictionaries were learned from these training sets. For the low-dose CT reconstruction, a conventional reconstruction, such as filtered back-projection (FBP), was performed first, and then segmentation was applied to divide the image into different regions. Sparsity constraints for each region, based on its dictionary, were used as regularization terms. The regularization parameters were selected adaptively according to the different regions. A low-dose human thorax dataset was used to evaluate the proposed method. The single-dictionary-based method was performed for comparison. Results: Since the lung region is very different from the rest of the thorax, two dictionaries, corresponding to the lung region and the rest of the thorax respectively, were learned to better express the structural details and avoid artifacts. With only one dictionary, some artifacts appeared in the body region, caused by the spot atoms corresponding to structures in the lung region, and some structures in the lung region could not be recovered well. The quantitative indices of the result obtained by the proposed method were also slightly improved compared with the single-dictionary-based method. Conclusion: A region-specific dictionary makes the dictionary more adaptive to the characteristics of different regions, which is much desired for enhancing the performance of dictionary learning based methods.
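
    A hedged sketch of the region-specific idea, using scikit-learn's MiniBatchDictionaryLearning as a generic stand-in for the training step; the paper's exact learning and reconstruction pipeline is not reproduced, and the patch extraction, segmentation labels and parameter values are all assumed.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_region_dictionaries(patches, region_ids, n_atoms=64):
    """Learn one dictionary per segmented region from normal-dose patches.
    `patches` is (n_patches, patch_dim); `region_ids` assigns each patch
    to a region (e.g. lung vs. rest of thorax)."""
    dictionaries = {}
    for r in np.unique(region_ids):
        learner = MiniBatchDictionaryLearning(n_components=n_atoms,
                                              transform_algorithm='omp',
                                              transform_n_nonzero_coefs=4)
        learner.fit(patches[region_ids == r])
        dictionaries[r] = learner
    return dictionaries

def denoise_patches(noisy_patches, region_ids, dictionaries):
    """Sparse-code each noisy patch over its own region's dictionary and
    reconstruct it from the code; the per-region sparsity constraint plays
    the role of the region-specific regularizer mentioned above."""
    out = np.empty_like(noisy_patches, dtype=float)
    for r, learner in dictionaries.items():
        mask = region_ids == r
        codes = learner.transform(noisy_patches[mask])
        out[mask] = codes @ learner.components_
    return out
```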

  10. A dictionary to identify small molecules and drugs in free text.

    PubMed

    Hettne, Kristina M; Stierum, Rob H; Schuemie, Martijn J; Hendriksen, Peter J M; Schijvenaars, Bob J A; Mulligen, Erik M van; Kleinjans, Jos; Kors, Jan A

    2009-11-15

    Within the scientific community, much effort has been spent on the correct identification of gene and protein names in text, while less effort has been spent on the correct identification of chemical names. Dictionary-based term identification has the power to recognize the diverse representations of chemical information in the literature and map the chemicals to their database identifiers. We developed a dictionary for the identification of small molecules and drugs in text, combining information from UMLS, MeSH, ChEBI, DrugBank, KEGG, HMDB and ChemIDplus. Rule-based term filtering, manual checking of highly frequent terms and disambiguation rules were applied. We tested the combined dictionary and the dictionaries derived from the individual resources on an annotated corpus, and conclude the following: (i) each of the different processing steps increases precision with a minor loss of recall; (ii) the overall performance of the combined dictionary is acceptable (precision 0.67; recall 0.40, or 0.80 for trivial names); (iii) the combined dictionary performed better than the dictionary in the chemical recognizer OSCAR3; (iv) the performance of a dictionary based on ChemIDplus alone is comparable to the performance of the combined dictionary. The combined dictionary is freely available as an XML file in Simple Knowledge Organization System format on the web site http://www.biosemantics.org/chemlist.
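
    The sketch below illustrates the general dictionary-based identification strategy described here: a merged term-to-identifier dictionary, a rule-based filtering step, and greedy longest-match lookup in text. The entries, stop terms and identifiers are invented placeholders, not taken from the released ChemList resource.

```python
import re

# A tiny, hypothetical chemical dictionary mapping surface terms to database
# identifiers, standing in for the combined UMLS/MeSH/ChEBI/... resource.
CHEM_DICT = {
    "aspirin": "CHEBI:15365",
    "acetylsalicylic acid": "CHEBI:15365",
    "caffeine": "CHEBI:27732",
}

# Rule-based filtering step: drop very short or highly ambiguous terms.
STOP_TERMS = {"lead", "gold"}   # hypothetical examples of ambiguous names
DICTIONARY = {t: i for t, i in CHEM_DICT.items()
              if len(t) > 3 and t not in STOP_TERMS}

def tag_chemicals(text):
    """Greedy longest-match dictionary lookup, returning
    (term, identifier, start, end) tuples without overlaps."""
    hits, occupied = [], set()
    lowered = text.lower()
    for term in sorted(DICTIONARY, key=len, reverse=True):
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", lowered):
            if not occupied & set(range(m.start(), m.end())):
                hits.append((term, DICTIONARY[term], m.start(), m.end()))
                occupied |= set(range(m.start(), m.end()))
    return sorted(hits, key=lambda h: h[2])

print(tag_chemicals("Patients received acetylsalicylic acid and caffeine."))
```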

  11. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks

    PubMed Central

    Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang

    2016-01-01

    To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternately changing between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with sparse structure. It’s theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound of necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods. PMID:27669250
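
    To make the self-coherence penalty concrete, the following sketch adds mu * ||D^T D - I||_F^2 to a plain gradient-based dictionary-update step. It illustrates the penalty term only, not the actual ODL-CDG update; the matrices, step size and the value of mu are assumptions.

```python
import numpy as np

def dictionary_update_step(D, Y, X, mu=0.1, lr=1e-3):
    """One gradient step on the dictionary with a self-coherence penalty
    mu * ||D^T D - I||_F^2 added to the reconstruction error ||Y - D X||_F^2.
    A sketch of the penalty idea only, not the exact ODL-CDG update."""
    residual = Y - D @ X
    grad = -2.0 * residual @ X.T + 4.0 * mu * D @ (D.T @ D - np.eye(D.shape[1]))
    D = D - lr * grad
    return D / np.linalg.norm(D, axis=0, keepdims=True)   # keep atoms unit-norm

rng = np.random.default_rng(2)
Y = rng.normal(size=(20, 500))                                   # sensed signals
D = rng.normal(size=(20, 40)); D /= np.linalg.norm(D, axis=0)    # 40-atom dictionary
X = rng.normal(size=(40, 500)) * (rng.random((40, 500)) < 0.1)   # sparse codes
D = dictionary_update_step(D, Y, X)
print("self-coherence ||D^T D - I||_F:",
      round(float(np.linalg.norm(D.T @ D - np.eye(40))), 3))
```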

  12. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks.

    PubMed

    Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang

    2016-09-22

    To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternately changing between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with sparse structure. It's theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound of necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods.

  13. Core Standards of the EUBIROD Project. Defining a European Diabetes Data Dictionary for Clinical Audit and Healthcare Delivery.

    PubMed

    Cunningham, S G; Carinci, F; Brillante, M; Leese, G P; McAlpine, R R; Azzopardi, J; Beck, P; Bratina, N; Bocquet, V; Doggen, K; Jarosz-Chobot, P K; Jecht, M; Lindblad, U; Moulton, T; Metelko, Ž; Nagy, A; Olympios, G; Pruna, S; Skeie, S; Storms, F; Di Iorio, C T; Massi Benedetti, M

    2016-01-01

    A set of core diabetes indicators were identified in a clinical review of current evidence for the EUBIROD project. In order to allow accurate comparisons of diabetes indicators, a standardised currency for data storage and aggregation was required. We aimed to define a robust European data dictionary with appropriate clinical definitions that can be used to analyse diabetes outcomes and provide the foundation for data collection from existing electronic health records for diabetes. Existing clinical datasets used by 15 partner institutions across Europe were collated and common data items analysed for consistency in terms of recording, data definition and units of measurement. Where necessary, data mappings and algorithms were specified in order to allow partners to meet the standard definitions. A series of descriptive elements were created to document metadata for each data item, including recording, consistency, completeness and quality. While datasets varied in terms of consistency, it was possible to create a common standard that could be used by all. The minimum dataset defined 53 data items that were classified according to their feasibility and validity. Mappings and standardised definitions were used to create an electronic directory for diabetes care, providing the foundation for the EUBIROD data analysis repository, also used to implement the diabetes registry and model of care for Cyprus. The development of data dictionaries and standards can be used to improve the quality and comparability of health information. A data dictionary has been developed to be compatible with other existing data sources for diabetes, within and beyond Europe.

  14. Developing a National-Level Concept Dictionary for EHR Implementations in Kenya.

    PubMed

    Keny, Aggrey; Wanyee, Steven; Kwaro, Daniel; Mulwa, Edwin; Were, Martin C

    2015-01-01

    The increasing adoption of Electronic Health Records (EHR) by developing countries comes with the need to develop common terminology standards to assure semantic interoperability. In Kenya, where the Ministry of Health has rolled out an EHR at 646 sites, several challenges have emerged including variable dictionaries across implementations, inability to easily share data across systems, lack of expertise in dictionary management, lack of central coordination and custody of a terminology service, inadequately defined policies and processes, insufficient infrastructure, among others. A Concept Working Group was constituted to address these challenges. The country settled on a common Kenya data dictionary, initially derived as a subset of the Columbia International eHealth Laboratory (CIEL)/Millennium Villages Project (MVP) dictionary. The initial dictionary scope largely focuses on clinical needs. Processes and policies around dictionary management are being guided by the framework developed by Bakhshi-Raiez et al. Technical and infrastructure-based approaches are also underway to streamline workflow for dictionary management and distribution across implementations. Kenya's approach on comprehensive common dictionary can serve as a model for other countries in similar settings.

  15. Specifications for a Federal Information Processing Standard Data Dictionary System

    NASA Technical Reports Server (NTRS)

    Goldfine, A.

    1984-01-01

    The development of a software specification that Federal agencies may use in evaluating and selecting data dictionary systems (DDS) is discussed. To supply the flexibility needed by widely different applications and environments in the Federal Government, the Federal Information Processing Standard (FIPS) specifies a core DDS together with an optional set of modules. The focus and status of the development project are described. Functional specifications for the FIPS DDS are examined for the dictionary, the dictionary schema, and the dictionary processing system. The DDS user interfaces and DDS software interfaces are discussed as well as dictionary administration.

  16. Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer.

    PubMed

    Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui

    2018-02-01

    Recently there has been increasing attention towards analysis dictionary learning. In analysis dictionary learning, it is an open problem to obtain the strong sparsity-promoting solutions efficiently while simultaneously avoiding the trivial solutions of the dictionary. In this paper, to obtain the strong sparsity-promoting solutions, we employ the ℓ1∕2 norm as a regularizer. The very recent study on ℓ1∕2 norm regularization theory in compressive sensing shows that its solutions can give sparser results than using the ℓ1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems. Then the closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary can avoid the trivial solutions well while simultaneously capturing the intrinsic properties of the dictionary. The experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning can not only obtain strong sparsity-promoting solutions efficiently, but also learn more accurate dictionary in terms of dictionary recovery and image processing than the state-of-the-art algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Emo, love and god: making sense of Urban Dictionary, a crowd-sourced online dictionary

    PubMed Central

    Nguyen, Dong; McGillivray, Barbara; Yasseri, Taha

    2018-01-01

    The Internet facilitates large-scale collaborative projects and the emergence of Web 2.0 platforms, where producers and consumers of content unify, has drastically changed the information market. On the one hand, the promise of the ‘wisdom of the crowd’ has inspired successful projects such as Wikipedia, which has become the primary source of crowd-based information in many languages. On the other hand, the decentralized and often unmonitored environment of such projects may make them susceptible to low-quality content. In this work, we focus on Urban Dictionary, a crowd-sourced online dictionary. We combine computational methods with qualitative annotation and shed light on the overall features of Urban Dictionary in terms of growth, coverage and types of content. We measure a high presence of opinion-focused entries, as opposed to the meaning-focused entries that we expect from traditional dictionaries. Furthermore, Urban Dictionary covers many informal, unfamiliar words as well as proper nouns. Urban Dictionary also contains offensive content, but highly offensive content tends to receive lower scores through the dictionary’s voting system. The low threshold to include new material in Urban Dictionary enables quick recording of new words and new meanings, but the resulting heterogeneous content can pose challenges in using Urban Dictionary as a source to study language innovation. PMID:29892417

  18. Dictionary Based Machine Translation from Kannada to Telugu

    NASA Astrophysics Data System (ADS)

    Sindhu, D. V.; Sagar, B. M.

    2017-08-01

    Machine translation is the task of translating from one language to another. For languages with limited linguistic resources, such as Kannada and Telugu, a dictionary-based approach is the best choice. This paper focuses on dictionary-based machine translation from Kannada to Telugu. The proposed methodology uses a dictionary to translate word by word, without much modeling of the semantic correlation between them. The dictionary-based machine translation process has the following sub-processes: morphological analyzer, dictionary, transliteration, transfer grammar and morphological generator. As part of this work, a bilingual dictionary with 8000 entries was developed and a suffix mapping table at the tag level was built. The system was tested on children's stories. In the near future, the system can be further improved by defining transfer grammar rules.
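
    A toy sketch of the word-by-word pipeline may clarify how the sub-processes fit together; the morph analyzer and transliteration stages are reduced to stubs, and every dictionary and suffix entry below is a hypothetical romanized placeholder rather than an entry from the paper's 8000-word dictionary.

```python
# Kannada -> Telugu word-by-word translation, reduced to its skeleton.
BILINGUAL_DICT = {"mane": "illu", "huli": "puli"}   # hypothetical romanized entries
SUFFIX_MAP = {"alli": "lo"}                         # hypothetical suffix mapping (locative)

def translate_word(word):
    # morph analyzer stub: split a known suffix off a stem found in the dictionary
    for suffix, target_suffix in SUFFIX_MAP.items():
        if word.endswith(suffix) and word[:-len(suffix)] in BILINGUAL_DICT:
            return BILINGUAL_DICT[word[:-len(suffix)]] + target_suffix
    # plain dictionary lookup; unknown words are passed through (transliteration stub)
    return BILINGUAL_DICT.get(word, word)

def translate_sentence(sentence):
    return " ".join(translate_word(w) for w in sentence.split())

print(translate_sentence("huli manealli"))   # -> "puli illulo"
```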

  19. The efficacy of dictionary use while reading for learning new words.

    PubMed

    Hamilton, Harley

    2012-01-01

    The researcher investigated the use of three types of dictionaries while reading by high school students with severe to profound hearing loss. The objective of the study was to determine the effectiveness of each type of dictionary for acquiring the meanings of unknown vocabulary in text. The three types of dictionaries were (a) an online bilingual multimedia English-American Sign Language (ASL) dictionary (OBMEAD), (b) a paper English-ASL dictionary (PBEAD), and (c) an online monolingual English dictionary (OMED). It was found that for immediate recall of target words, the OBMEAD was superior to both the PBEAD and the OMED. For later recall, no significant difference appeared between the OBMEAD and the PBEAD. For both of these, recall was statistically superior to recall for words learned via the OMED.

  20. What Online Traditional Medicine Dictionaries Bring To English Speakers Now? Concepts or Equivalents?

    PubMed Central

    Fang, Lu

    2018-01-01

    Nowadays, more and more Chinese medicine practices are applied around the world, and popularizing them has become an urgent task. To meet this requirement, an increasing number of Chinese-English traditional medicine dictionaries have been produced at home and abroad in recent decades. Nevertheless, users still struggle to locate information in these dictionaries. What traditional medicine dictionaries do English speakers need now? To identify an entry model for online TCM dictionaries, I compared the entries in five printed traditional medicine dictionaries and two online ones. Based upon this, I tentatively put forward two sample entries, “阳经 (yáng jīng)” and “阴经 (yīn jīng)”, focusing on transmitting concepts, for online Chinese-English TCM dictionaries. PMID:29875861

  1. Label consistent K-SVD: learning a discriminative dictionary for recognition.

    PubMed

    Jiang, Zhuolin; Lin, Zhe; Davis, Larry S

    2013-11-01

    A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called "discriminative sparse-code error" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly. The incremental dictionary learning algorithm is presented for the situation of limited memory resources. It yields dictionaries so that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.
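
    The reformulation that lets a standard K-SVD solver handle the unified objective can be sketched by stacking the signal, sparse-code-target and label matrices (and the corresponding dictionary, transform and classifier matrices). The shapes and weighting values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def stack_lc_ksvd_inputs(Y, Q, H, D0, A0, W0, alpha=4.0, beta=2.0):
    """Build the stacked signal matrix and stacked dictionary so that the
    unified objective
        ||Y - D X||^2 + alpha ||Q - A X||^2 + beta ||H - W X||^2
    can be minimized with an ordinary K-SVD solver (a sketch of the
    reformulation; alpha and beta here are arbitrary)."""
    Y_ext = np.vstack([Y, np.sqrt(alpha) * Q, np.sqrt(beta) * H])
    D_ext = np.vstack([D0, np.sqrt(alpha) * A0, np.sqrt(beta) * W0])
    # K-SVD works with unit-norm atoms, so normalize the stacked columns
    D_ext /= np.linalg.norm(D_ext, axis=0, keepdims=True)
    return Y_ext, D_ext

# toy shapes: 64-dim signals, 200 examples, 80 atoms, 5 classes
n, N, K, C = 64, 200, 80, 5
rng = np.random.default_rng(3)
Y  = rng.normal(size=(n, N))                              # training signals
Q  = rng.integers(0, 2, size=(K, N)).astype(float)        # "discriminative sparse-code" targets
H  = np.eye(C)[:, rng.integers(0, C, size=N)]             # one-hot class labels
D0 = rng.normal(size=(n, K)); A0 = rng.normal(size=(K, K)); W0 = rng.normal(size=(C, K))
Y_ext, D_ext = stack_lc_ksvd_inputs(Y, Q, H, D0, A0, W0)
print(Y_ext.shape, D_ext.shape)   # (64+80+5, 200) and (64+80+5, 80)
```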

  2. Chinese-English Nuclear and Physics Dictionary.

    ERIC Educational Resources Information Center

    Air Force Systems Command, Wright-Patterson AFB, OH. Foreign Technology Div.

    The Nuclear and Physics Dictionary is one of a series of Chinese-English technical dictionaries prepared by the Foreign Technology Division, United States Air Force Systems Command. The purpose of this dictionary is to provide rapid reference tools for translators, abstractors, and research analysts concerned with scientific and technical…

  3. Mandarin Chinese Dictionary: English-Chinese.

    ERIC Educational Resources Information Center

    Wang, Fred Fangyu

    This dictionary is a companion volume to the "Mandarin Chinese Dictionary (Chinese-English)" published in 1967 by Seton Hall University. The purpose of the dictionary is to help English-speaking students produce Chinese sentences in certain cultural situations by looking up the English expressions. Natural, spoken Chinese expressions within the…

  4. Intertwining thesauri and dictionaries

    NASA Technical Reports Server (NTRS)

    Buchan, R. L.

    1989-01-01

    The use of dictionaries and thesauri in information retrieval is discussed. The structure and functions of thesauri and dictionaries are described. Particular attention is given to the format of the NASA Thesaurus. The relationship between thesauri and dictionaries, the need to regularize terminology, and the capitalization of words are examined.

  5. Meaning Discrimination in Bilingual Dictionaries.

    ERIC Educational Resources Information Center

    IANNUCCI, JAMES E.

    Semantic discrimination of polysemous entry words in bilingual dictionaries was discussed in the paper. Handicaps of present bilingual dictionaries and barriers to their full utilization were enumerated. The author concluded that (1) a bilingual dictionary should have a discrimination for every translation of an entry word which has several…

  6. The Use of Hyper-Reference and Conventional Dictionaries.

    ERIC Educational Resources Information Center

    Aust, Ronald; And Others

    1993-01-01

    Describes a study of 80 undergraduate foreign language learners that compared the use of a hyper-reference source incorporating an electronic dictionary and a conventional paper dictionary. Measures of consultation frequency, study time, efficiency, and comprehension are examined; bilingual and monolingual dictionary use is compared; and further…

  7. Medicine in Dr Samuel Johnson's Dictionary of the English Language.

    PubMed

    Sharma, Om P

    2011-11-01

    When compiling the Dictionary of the English Language, Johnson read and annotated over two hundred thousand passages from innumerable English authors of various disciplines across four centuries. Most of the literary anecdotes came from Shakespeare, Milton, Dryden and Pope. The medical and scientific anecdotes came from 31 scientists, physicians, pharmacologists and surgeons. This reflects Johnson's admiration for science and its benefit to the public. He told Boswell, 'Why Sir, if you have but one book with you upon a journey let it be a book of science. When you read through a book of entertainment, you know it, and it can do no more for you, but a book of science is inexhaustible'.

  8. Toward better public health reporting using existing off the shelf approaches: The value of medical dictionaries in automated cancer detection using plaintext medical data.

    PubMed

    Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J

    2017-05-01

    Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70 and 90%. The source of features and feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve comparable results to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection and modeling approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
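
    A minimal sketch of the kind of "off the shelf", non-dictionary pipeline the study refers to: bag-of-words features taken directly from the report text, automated feature selection, and a standard classifier. The reports, labels and parameter choices below are fabricated for illustration.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

# fabricated pathology snippets and labels (1 = cancer case)
reports = ["invasive ductal carcinoma identified in the specimen",
           "benign fibrous tissue, no evidence of malignancy",
           "metastatic adenocarcinoma present",
           "normal mucosa without atypia"]
labels = [1, 0, 1, 0]

model = Pipeline([
    ("features", CountVectorizer(ngram_range=(1, 2))),   # plaintext-derived features
    ("select", SelectKBest(chi2, k=10)),                  # automated feature selection
    ("clf", LogisticRegression(max_iter=1000)),           # standard classifier
])
model.fit(reports, labels)
print(model.predict(["adenocarcinoma identified"]))
```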

  9. The Oxford English Dictionary: A Brief History.

    ERIC Educational Resources Information Center

    Fritze, Ronald H.

    1989-01-01

    Reviews the development of English dictionaries in general and the Oxford English Dictionary (OED) in particular. The discussion covers the decision by the Philological Society to create the dictionary, the principles that guided its development, the involvement of James Augustus Henry Murray, the magnitude and progress of the project, and the…

  10. Dictionary Making: A Case of Kiswahili Dictionaries.

    ERIC Educational Resources Information Center

    Mohamed, Mohamed A.

    Two Swahili dictionaries and two bilingual dictionaries by the same author (one English-Swahili and one Swahili-English) are evaluated for their form and content, with illustrations offered from each. Aspects examined include: the compilation of headwords, including their meanings with relation to basic and extended meanings; treatment of…

  11. Buying and Selling Words: What Every Good Librarian Should Know about the Dictionary Business.

    ERIC Educational Resources Information Center

    Kister, Ken

    1993-01-01

    Discusses features to consider when selecting dictionaries. Topics addressed include the publishing industry; the dictionary market; profits from dictionaries; pricing; competitive marketing tactics, including similar titles, claims to numbers of entries and numbers of definitions, and similar physical appearance; a trademark infringement case;…

  12. The New Unabridged English-Persian Dictionary.

    ERIC Educational Resources Information Center

    Aryanpur, Abbas; Saleh, Jahan Shah

    This five-volume English-Persian dictionary is based on Webster's International Dictionary (1960 and 1961) and The Shorter Oxford English Dictionary (1959); it attempts to provide Persian equivalents of all the words of Oxford and all the key-words of Webster. Pronunciation keys for the English phonetic transcription and for the difficult Persian…

  13. Evaluating L2 Readers' Vocabulary Strategies and Dictionary Use

    ERIC Educational Resources Information Center

    Prichard, Caleb

    2008-01-01

    A review of the relevant literature concerning second language dictionary use while reading suggests that selective dictionary use may lead to improved comprehension and efficient vocabulary development. This study aims to examine the dictionary use of Japanese university students to determine just how selective they are when reading nonfiction…

  14. Online English-English Learner Dictionaries Boost Word Learning

    ERIC Educational Resources Information Center

    Nurmukhamedov, Ulugbek

    2012-01-01

    Learners of English might be familiar with several online monolingual dictionaries that are not necessarily the best choices for the English as Second/Foreign Language (ESL/EFL) context. Although these monolingual online dictionaries contain definitions, pronunciation guides, and other elements normally found in general-use dictionaries, they are…

  15. Research Timeline: Dictionary Use by English Language Learners

    ERIC Educational Resources Information Center

    Nesi, Hilary

    2014-01-01

    The history of research into dictionary use tends to be characterised by small-scale studies undertaken in a variety of different contexts, rather than larger-scale, longer-term funded projects. The research conducted by dictionary publishers is not generally made public, because of its commercial sensitivity, yet because dictionary production is…

  16. The Dictionary and Vocabulary Behavior: A Single Word or a Handful?

    ERIC Educational Resources Information Center

    Baxter, James

    1980-01-01

    To provide a context for dictionary selection, the vocabulary behavior of students is examined. Distinguishing between written and spoken English, the relation between dictionary use, classroom vocabulary behavior, and students' success in meeting their communicative needs is discussed. The choice of a monolingual English learners' dictionary is…

  17. Size-Dictionary Interpolation for Robot's Adjustment.

    PubMed

    Daneshmand, Morteza; Aabloo, Alvo; Anbarjafari, Gholamreza

    2015-01-01

    This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner for use in a realistic virtual fitting room. Automatic activation of the chosen mannequin robot, while several mannequin robots of different genders and sizes are simultaneously connected to the same computer, is also considered so that the robot can mimic body shapes and sizes instantly. The classification process consists of two layers, dealing, respectively, with gender and size. The interpolation procedure seeks the set of positions of the biologically inspired actuators that makes the activated mannequin robot resemble the scanned person's body shape as closely as possible. It linearly maps the distances between successive size templates to the corresponding actuator position sets and then calculates control measures that maintain the same distance proportions; minimizing the Euclidean distance between the size-dictionary template vectors and the vector of the desired body sizes determines the mathematical description. The experimental results of implementing the proposed method on Fits.me's mannequin robots are visually illustrated, and the remaining steps toward completing the whole realistic online fitting package are explained.
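
    A small numerical sketch of the interpolation step, assuming a size dictionary of measurement templates with corresponding actuator position sets; the two nearest templates are blended so that distance proportions carry over to actuator space. All template values are invented.

```python
import numpy as np

def interpolate_actuators(target_sizes, size_templates, actuator_templates):
    """Map a scanned person's body measurements onto actuator positions.
    The two nearest size-dictionary templates (by Euclidean distance) are
    blended linearly, preserving the distance proportions between the
    templates in actuator space."""
    d = np.linalg.norm(size_templates - target_sizes, axis=1)
    i, j = np.argsort(d)[:2]                  # two closest size templates
    w = d[j] / (d[i] + d[j] + 1e-12)          # weight the nearer template more
    return w * actuator_templates[i] + (1.0 - w) * actuator_templates[j]

# toy size dictionary: (chest, waist, hip) templates and 4 actuator positions each
size_templates     = np.array([[88, 72, 94], [96, 80, 102], [104, 88, 110]], float)
actuator_templates = np.array([[0.2, 0.3, 0.25, 0.1],
                               [0.5, 0.6, 0.55, 0.4],
                               [0.8, 0.9, 0.85, 0.7]])
print(interpolate_actuators(np.array([92, 76, 98.0]), size_templates, actuator_templates))
```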

  18. Multivariate temporal dictionary learning for EEG.

    PubMed

    Barthélemy, Q; Gouy-Pailler, C; Isaac, Y; Souloumiac, A; Larue, A; Mars, J I

    2013-04-30

    This article addresses the issue of representing electroencephalographic (EEG) signals in an efficient way. While classical approaches use a fixed Gabor dictionary to analyze EEG signals, this article proposes a data-driven method to obtain an adapted dictionary. To reach an efficient dictionary learning, appropriate spatial and temporal modeling is required. Inter-channel links are taken into account in the spatial multivariate model, and shift-invariance is used for the temporal model. Multivariate learned kernels are informative (a few atoms code plentiful energy) and interpretable (the atoms can have a physiological meaning). Using real EEG data, the proposed method is shown to outperform the classical multichannel matching pursuit used with a Gabor dictionary, as measured by the representative power of the learned dictionary and its spatial flexibility. Moreover, dictionary learning can capture interpretable patterns: this ability is illustrated on real data, learning a P300 evoked potential. Copyright © 2013 Elsevier B.V. All rights reserved.
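
    As a simplified stand-in for coding with a shift-invariant multivariate dictionary, the sketch below runs a greedy matching pursuit over multichannel kernels and all time shifts; it illustrates the temporal model only, not the learning algorithm or the EEG-specific spatial model. Kernel sizes and the toy signal are assumptions.

```python
import numpy as np

def shift_invariant_mp(signal, kernels, n_iter=3):
    """Greedy shift-invariant matching pursuit for a multichannel signal
    (channels x time). Each kernel is also multichannel; at every iteration
    the kernel and time shift with the largest correlation against the
    residual (summed over channels) is subtracted."""
    residual = signal.astype(float).copy()
    atoms = []
    for _ in range(n_iter):
        best = None
        for k, ker in enumerate(kernels):
            L = ker.shape[1]
            for t in range(signal.shape[1] - L + 1):
                corr = float(np.sum(residual[:, t:t + L] * ker))
                if best is None or abs(corr) > abs(best[0]):
                    best = (corr, k, t)
        corr, k, t = best
        L = kernels[k].shape[1]
        residual[:, t:t + L] -= corr * kernels[k]
        atoms.append((k, t, corr))
    return atoms, residual

rng = np.random.default_rng(4)
kernels = [k / np.linalg.norm(k)
           for k in (rng.normal(size=(3, 8)), rng.normal(size=(3, 12)))]  # unit-norm kernels
signal = np.zeros((3, 64)); signal[:, 20:28] += 2.5 * kernels[0]          # plant one occurrence
atoms, _ = shift_invariant_mp(signal + 0.05 * rng.normal(size=(3, 64)), kernels)
print(atoms[0])   # should recover kernel 0 near shift 20 with amplitude ~2.5
```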

  19. Concept dictionary creation and maintenance under resource constraints: lessons from the AMPATH Medical Record System.

    PubMed

    Were, Martin C; Mamlin, Burke W; Tierney, William M; Wolfe, Ben; Biondich, Paul G

    2007-10-11

    The challenges of creating and maintaining concept dictionaries are compounded in resource-limited settings. Approaches to alleviate this burden need to be based on information derived in these settings. We created a concept dictionary and evaluated new concept proposals for an open source EMR in a resource-limited setting. Overall, 87% of the concepts in the initial dictionary were used. There were 5137 new concepts proposed, with 77% of these proposed only once. Further characterization of new concept proposals revealed that 41% were due to deficiency in the existing dictionary, and 19% were synonyms to existing concepts. 25% of the requests contained misspellings, 41% were complex terms, and 17% were ambiguous. Given the resource-intensive nature of dictionary creation and maintenance, there should be considerations for centralizing the concept dictionary service, using standards, prioritizing concept proposals, and redesigning the user-interface to reduce this burden in settings with limited resources.

  20. Concept Dictionary Creation and Maintenance Under Resource Constraints: Lessons from the AMPATH Medical Record System

    PubMed Central

    Were, Martin C.; Mamlin, Burke W.; Tierney, William M.; Wolfe, Ben; Biondich, Paul G.

    2007-01-01

    The challenges of creating and maintaining concept dictionaries are compounded in resource-limited settings. Approaches to alleviate this burden need to be based on information derived in these settings. We created a concept dictionary and evaluated new concept proposals for an open source EMR in a resource-limited setting. Overall, 87% of the concepts in the initial dictionary were used. There were 5137 new concepts proposed, with 77% of these proposed only once. Further characterization of new concept proposals revealed that 41% were due to deficiency in the existing dictionary, and 19% were synonyms to existing concepts. 25% of the requests contained misspellings, 41% were complex terms, and 17% were ambiguous. Given the resource-intensive nature of dictionary creation and maintenance, there should be considerations for centralizing the concept dictionary service, using standards, prioritizing concept proposals, and redesigning the user-interface to reduce this burden in settings with limited resources. PMID:18693945

  1. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR THE GENERATION AND OPERATION OF DATA DICTIONARIES (UA-D-4.0)

    EPA Science Inventory

    The purpose of this SOP is to provide a standard method for the writing of data dictionaries. This procedure applies to the dictionaries used during the Arizona NHEXAS project and the "Border" study. Keywords: guidelines; data dictionaries.

    The National Human Exposure Assessme...

  2. The Influence of Electronic Dictionaries on Vocabulary Knowledge Extension

    ERIC Educational Resources Information Center

    Rezaei, Mojtaba; Davoudi, Mohammad

    2016-01-01

    Vocabulary learning needs special strategies in language learning process. The use of dictionaries is a great help in vocabulary learning and nowadays the emergence of electronic dictionaries has added a new and valuable resource for vocabulary learning. The present study aims to explore the influence of Electronic Dictionaries (ED) Vs. Paper…

  3. Should Dictionaries Be Used in Translation Tests and Examinations?

    ERIC Educational Resources Information Center

    Mahmoud, Abdulmoneim

    2017-01-01

    Motivated by the conflicting views regarding the use of the dictionary in translation tests and examinations this study was intended to verify the dictionary-free vs dictionary-based translation hypotheses. The subjects were 135 Arabic-speaking male and female EFL third-year university students. A group consisting of 62 students translated a text…

  4. The Creation of Learner-Centred Dictionaries for Endangered Languages: A Rotuman Example

    ERIC Educational Resources Information Center

    Vamarasi, M.

    2014-01-01

    This article examines the creation of dictionaries for endangered languages (ELs). Though each dictionary is uniquely prepared for its users, all dictionaries should be based on sound principles of vocabulary learning, including the importance of lexical chunks, as emphasised by Michael Lewis in his "Lexical Approach." Many of the…

  5. Marks, Spaces and Boundaries: Punctuation (and Other Effects) in the Typography of Dictionaries

    ERIC Educational Resources Information Center

    Luna, Paul

    2011-01-01

    Dictionary compilers and designers use punctuation to structure and clarify entries and to encode information. Dictionaries with a relatively simple structure can have simple typography and simple punctuation; as dictionaries grew more complex, and encountered the space constraints of the printed page, complex encoding systems were developed,…

  6. Evaluating Bilingual and Monolingual Dictionaries for L2 Learners.

    ERIC Educational Resources Information Center

    Hunt, Alan

    1997-01-01

    A discussion of dictionaries and their use for second language (L2) learning suggests that lack of computerized modern language corpora can adversely affect bilingual dictionaries, commonly used by L2 learners, and shows how use of such corpora has benefitted two contemporary monolingual L2 learner dictionaries (1995 editions of the Longman…

  7. Developing a hybrid dictionary-based bio-entity recognition technique.

    PubMed

    Song, Min; Yu, Hwanjo; Han, Wook-Shin

    2015-01-01

    Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities such as Part of Speech (POS) expansion, stemming, and the exploitation of the contextual cues to further improve the performance. The experimental results show that the proposed technique achieves the best or at least equivalent performance among compared techniques, GENIA, MESH, UMLS, and combinations of these three resources in F-measure. The results imply that the performance of dictionary-based extraction techniques is largely influenced by information resources used to build the dictionary. In addition, the edit distance algorithm shows steady performance with three different dictionaries in precision whereas the context-only technique achieves a high-end performance with three different dictionaries in recall.
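
    The two matching stages, exact dictionary lookup followed by an edit-distance fallback for near-miss spellings, can be sketched as follows. The mini-dictionary and the distance budget are illustrative assumptions; the paper's shortest-path edit distance and contextual steps are not reproduced.

```python
# Illustrative gene/protein entries; not taken from GENIA, MeSH or UMLS.
BIO_DICT = {"interleukin-2": "IL2", "tumor necrosis factor": "TNF", "p53": "TP53"}

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def match_entity(mention, max_dist=2):
    mention = mention.lower()
    if mention in BIO_DICT:                          # exact dictionary hit
        return BIO_DICT[mention]
    # fallback: closest dictionary term within the edit-distance budget
    best = min(BIO_DICT, key=lambda term: edit_distance(mention, term))
    return BIO_DICT[best] if edit_distance(mention, best) <= max_dist else None

print(match_entity("interleukin 2"))   # near-miss spelling -> IL2
```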

  8. Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.

    PubMed

    Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang

    2017-07-01

    It is possible to recover a signal below the Nyquist sampling limit using a compressive sensing technique in ultrasound imaging. However, the reconstruction enabled by common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary with the emission pulse signal is proposed. Sparse coefficients in the proposed dictionary have high sparsity. Images reconstructed with this dictionary were compared with those obtained with the three other common transforms, namely, discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed via a simulation and experimental data. The mean absolute error (MAE) was used to quantify the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were also the closest to the original images. The proposed sparse dictionary performed better than the other three sparse transforms. With the same sampling rate, the proposed dictionary achieved excellent reconstruction quality.
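
    The sketch below builds the kind of dictionary described here, atoms that are time-shifted replicas of the emission pulse, and recovers a sparse echo from compressive measurements with a plain orthogonal matching pursuit. The pulse shape, measurement matrix and sparsity level are all toy assumptions.

```python
import numpy as np

def build_pulse_dictionary(pulse, signal_len):
    """Atoms are time-shifted copies of the emission pulse; attenuation and
    superposition of echoes are then captured by the sparse coefficients.
    A sketch of the idea, not the exact construction of the paper."""
    D = np.zeros((signal_len, signal_len - len(pulse) + 1))
    for t in range(D.shape[1]):
        D[t:t + len(pulse), t] = pulse
    return D / np.linalg.norm(D, axis=0, keepdims=True)

def omp(A, y, k):
    """Plain orthogonal matching pursuit with k nonzeros."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1]); x[support] = coef
    return x

rng = np.random.default_rng(5)
pulse = np.sin(2 * np.pi * 5e6 * np.arange(16) / 40e6) * np.hanning(16)   # toy emission pulse
D = build_pulse_dictionary(pulse, 256)
x_true = np.zeros(D.shape[1]); x_true[[40, 120]] = [1.0, 0.6]             # two scatterers
echo = D @ x_true
Phi = rng.normal(size=(64, 256)) / np.sqrt(64)        # compressive measurement matrix
x_hat = omp(Phi @ D, Phi @ echo, k=2)
print(np.nonzero(x_hat)[0])                           # should recover indices near {40, 120}
```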

  9. Developing a hybrid dictionary-based bio-entity recognition technique

    PubMed Central

    2015-01-01

    Background Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. Methods This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities such as Part of Speech (POS) expansion, stemming, and the exploitation of the contextual cues to further improve the performance. Results The experimental results show that the proposed technique achieves the best or at least equivalent performance among compared techniques, GENIA, MESH, UMLS, and combinations of these three resources in F-measure. Conclusions The results imply that the performance of dictionary-based extraction techniques is largely influenced by information resources used to build the dictionary. In addition, the edit distance algorithm shows steady performance with three different dictionaries in precision whereas the context-only technique achieves a high-end performance with three different dictionaries in recall. PMID:26043907

  10. Robust Visual Tracking via Online Discriminative and Low-Rank Dictionary Learning.

    PubMed

    Zhou, Tao; Liu, Fanghui; Bhaskar, Harish; Yang, Jie

    2017-09-12

    In this paper, we propose a novel and robust tracking framework based on online discriminative and low-rank dictionary learning. The primary aim of this paper is to obtain compact and low-rank dictionaries that can provide good discriminative representations of both target and background. We accomplish this by exploiting the recovery ability of low-rank matrices. That is if we assume that the data from the same class are linearly correlated, then the corresponding basis vectors learned from the training set of each class shall render the dictionary to become approximately low-rank. The proposed dictionary learning technique incorporates a reconstruction error that improves the reliability of classification. Also, a multiconstraint objective function is designed to enable active learning of a discriminative and robust dictionary. Further, an optimal solution is obtained by iteratively computing the dictionary, coefficients, and by simultaneously learning the classifier parameters. Finally, a simple yet effective likelihood function is implemented to estimate the optimal state of the target during tracking. Moreover, to make the dictionary adaptive to the variations of the target and background during tracking, an online update criterion is employed while learning the new dictionary. Experimental results on a publicly available benchmark dataset have demonstrated that the proposed tracking algorithm performs better than other state-of-the-art trackers.

  11. Cross-View Action Recognition via Transferable Dictionary Learning.

    PubMed

    Zheng, Jingjing; Jiang, Zhuolin; Chellappa, Rama

    2016-05-01

    Discriminative appearance features are effective for recognizing actions in a fixed view, but may not generalize well to a new view. In this paper, we present two effective approaches to learn dictionaries for robust action recognition across views. In the first approach, we learn a set of view-specific dictionaries where each dictionary corresponds to one camera view. These dictionaries are learned simultaneously from the sets of correspondence videos taken at different views with the aim of encouraging each video in the set to have the same sparse representation. In the second approach, we additionally learn a common dictionary shared by different views to model view-shared features. This approach represents the videos in each view using a view-specific dictionary and the common dictionary. More importantly, it encourages the set of videos taken from the different views of the same action to have the similar sparse representations. The learned common dictionary not only has the capability to represent actions from unseen views, but also makes our approach effective in a semi-supervised setting where no correspondence videos exist and only a few labeled videos exist in the target view. The extensive experiments using three public datasets demonstrate that the proposed approach outperforms recently developed approaches for cross-view action recognition.

  12. Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation.

    PubMed

    Grossi, Giuliano; Lanzarotti, Raffaella; Lin, Jianyi

    2017-01-01

    In the sparse representation model, the design of overcomplete dictionaries plays a key role for the effectiveness and applicability in different domains. Recent research has produced several dictionary learning approaches, being proven that dictionaries learnt by data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists in adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic method for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists in repeating two stages; the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts the Orthogonal Procrustes analysis to update the dictionary atoms suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability.
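
    The Procrustes ingredient can be sketched as rotating a group of atoms by the orthogonal matrix that best aligns the group's current contribution with the training data; choosing the optimal rotation can never increase the reconstruction error, since the identity matrix is always a candidate. This is a simplified reading of the group update, not the full R-SVD algorithm, and all shapes are illustrative.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def procrustes_group_update(D_group, X_group, Y):
    """Find the orthogonal matrix R minimizing ||Y - R (D_group X_group)||_F
    and apply it to the group of atoms (a simplified sketch of the
    Procrustes-based group update)."""
    M = D_group @ X_group
    # scipy solves min ||A Q - B||_F over orthogonal Q, so transpose to match
    Q, _ = orthogonal_procrustes(M.T, Y.T)
    return Q.T @ D_group

rng = np.random.default_rng(6)
Y = rng.normal(size=(32, 300))                          # training signals
D_group = np.linalg.qr(rng.normal(size=(32, 6)))[0]     # a group of 6 unit-norm atoms
X_group = rng.normal(size=(6, 300)) * (rng.random((6, 300)) < 0.3)   # sparse codes
before = np.linalg.norm(Y - D_group @ X_group)
after = np.linalg.norm(Y - procrustes_group_update(D_group, X_group, Y) @ X_group)
print(round(before, 2), ">=", round(after, 2))          # the rotation never increases the error
```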

  13. Reconstruction of magnetic resonance imaging by three-dimensional dual-dictionary learning.

    PubMed

    Song, Ying; Zhu, Zhen; Lu, Yang; Liu, Qiegen; Zhao, Jun

    2014-03-01

    To improve the magnetic resonance imaging (MRI) data acquisition speed while maintaining the reconstruction quality, a novel method is proposed for multislice MRI reconstruction from undersampled k-space data based on compressed-sensing theory using dictionary learning. There are two aspects to improve the reconstruction quality. One is that spatial correlation among slices is used by extending the atoms in dictionary learning from patches to blocks. The other is that the dictionary-learning scheme is used at two resolution levels; i.e., a low-resolution dictionary is used for sparse coding and a high-resolution dictionary is used for image updating. Numerical experiments are carried out on in vivo 3D MR images of brains and abdomens with a variety of undersampling schemes and ratios. The proposed method (dual-DLMRI) achieves better reconstruction quality than conventional reconstruction methods, with the peak signal-to-noise ratio being 7 dB higher. The advantages of the dual dictionaries are obvious compared with the single dictionary. Parameter variations ranging from 50% to 200% only bias the image quality within 15% in terms of the peak signal-to-noise ratio. Dual-DLMRI effectively uses the a priori information in the dual-dictionary scheme and provides dramatically improved reconstruction quality. Copyright © 2013 Wiley Periodicals, Inc.

  14. Alternatively Constrained Dictionary Learning For Image Superresolution.

    PubMed

    Lu, Xiaoqiang; Yuan, Yuan; Yan, Pingkun

    2014-03-01

    Dictionaries are crucial in sparse coding-based algorithm for image superresolution. Sparse coding is a typical unsupervised learning method to study the relationship between the patches of high-and low-resolution images. However, most of the sparse coding methods for image superresolution fail to simultaneously consider the geometrical structure of the dictionary and the corresponding coefficients, which may result in noticeable superresolution reconstruction artifacts. In other words, when a low-resolution image and its corresponding high-resolution image are represented in their feature spaces, the two sets of dictionaries and the obtained coefficients have intrinsic links, which has not yet been well studied. Motivated by the development on nonlocal self-similarity and manifold learning, a novel sparse coding method is reported to preserve the geometrical structure of the dictionary and the sparse coefficients of the data. Moreover, the proposed method can preserve the incoherence of dictionary entries and provide the sparse coefficients and learned dictionary from a new perspective, which have both reconstruction and discrimination properties to enhance the learning performance. Furthermore, to utilize the model of the proposed method more effectively for single-image superresolution, this paper also proposes a novel dictionary-pair learning method, which is named as two-stage dictionary training. Extensive experiments are carried out on a large set of images comparing with other popular algorithms for the same purpose, and the results clearly demonstrate the effectiveness of the proposed sparse representation model and the corresponding dictionary learning algorithm.

  15. Deconvolving molecular signatures of interactions between microbial colonies

    PubMed Central

    Harn, Y.-C.; Powers, M. J.; Shank, E. A.; Jojic, V.

    2015-01-01

    Motivation: The interactions between microbial colonies through chemical signaling are not well understood. A microbial colony can use different molecules to inhibit or accelerate the growth of other colonies. A better understanding of the molecules involved in these interactions could lead to advancements in health and medicine. Imaging mass spectrometry (IMS) applied to co-cultured microbial communities aims to capture the spatial characteristics of the colonies’ molecular fingerprints. These data are high-dimensional and require computational analysis methods to interpret. Results: Here, we present a dictionary learning method that deconvolves spectra of different molecules from IMS data. We call this method MOLecular Dictionary Learning (MOLDL). Unlike standard dictionary learning methods which assume Gaussian-distributed data, our method uses the Poisson distribution to capture the count nature of the mass spectrometry data. Also, our method incorporates universally applicable information on common ion types of molecules in MALDI mass spectrometry. This greatly reduces model parameterization and increases deconvolution accuracy by eliminating spurious solutions. Moreover, our method leverages the spatial nature of IMS data by assuming that nearby locations share similar abundances, thus avoiding overfitting to noise. Tests on simulated datasets show that this method has good performance in recovering molecule dictionaries. We also tested our method on real data measured on a microbial community composed of two species. We confirmed through follow-up validation experiments that our method recovered true and complete signatures of molecules. These results indicate that our method can discover molecules in IMS data reliably, and hence can help advance the study of interaction of microbial colonies. Availability and implementation: The code used in this paper is available at: https://github.com/frizfealer/IMS_project. Contact: vjojic@cs.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072476
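
    MOLDL itself is not reproduced here, but the count-data modelling it builds on can be illustrated with the standard multiplicative updates for Kullback-Leibler (Poisson-likelihood) non-negative matrix factorization, in which a non-negative dictionary of spectra W and non-negative abundances H are learned from a count matrix V. This generic sketch omits the ion-type prior, the sparsity penalty, and the spatial-smoothness assumption described above; names and iteration counts are illustrative.

      import numpy as np

      def poisson_nmf(V, n_components, n_iter=200, eps=1e-10, seed=0):
          """Multiplicative updates minimizing the KL divergence D(V || W H),
          equivalent to maximizing a Poisson likelihood for the count matrix V."""
          rng = np.random.default_rng(seed)
          n, m = V.shape
          W = rng.random((n, n_components)) + eps   # dictionary of spectra (non-negative)
          H = rng.random((n_components, m)) + eps   # per-location abundances (non-negative)
          ones = np.ones((n, m))
          for _ in range(n_iter):
              WH = W @ H + eps
              W *= ((V / WH) @ H.T) / (ones @ H.T + eps)
              WH = W @ H + eps
              H *= (W.T @ (V / WH)) / (W.T @ ones + eps)
          return W, H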

  16. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective. PMID:25207870

  17. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective.
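
    One plausible reading of single-atom matching pursuit with an attenuation-based stopping rule is sketched below: classic matching pursuit over unit-norm atoms that terminates once the relative drop in residual energy between iterations falls below a threshold. The threshold name and default value are illustrative and are not the CD-SaMP parameters.

      import numpy as np

      def matching_pursuit(x, D, atten_thresh=0.01, max_iter=100):
          """Greedy single-atom matching pursuit; D must have unit-norm columns.
          Stops when the relative drop in residual energy falls below atten_thresh."""
          residual = x.astype(float).copy()
          coeffs = np.zeros(D.shape[1])
          prev_energy = float(residual @ residual)
          for _ in range(max_iter):
              corr = D.T @ residual                # correlation with every atom
              k = int(np.argmax(np.abs(corr)))     # best single atom
              coeffs[k] += corr[k]
              residual -= corr[k] * D[:, k]
              energy = float(residual @ residual)
              if prev_energy <= 0 or (prev_energy - energy) / prev_energy < atten_thresh:
                  break                            # residual energy no longer attenuates
              prev_energy = energy
          return coeffs, residual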

  18. Using Bilingual Dictionaries.

    ERIC Educational Resources Information Center

    Thompson, Geoff

    1987-01-01

    Monolingual dictionaries have serious disadvantages in many language teaching situations; bilingual dictionaries are potentially more efficient and more motivating sources of information for language learners. (Author/CB)

  19. Patient-Specific Seizure Detection in Long-Term EEG Using Signal-Derived Empirical Mode Decomposition (EMD)-based Dictionary Approach.

    PubMed

    Kaleem, Muhammad; Gurve, Dharmendra; Guergachi, Aziz; Krishnan, Sridhar

    2018-06-25

    The objective of the work described in this paper is development of a computationally efficient methodology for patient-specific automatic seizure detection in long-term multi-channel EEG recordings. Approach: A novel patient-specific seizure detection approach based on signal-derived Empirical Mode Decomposition (EMD)-based dictionary approach is proposed. For this purpose, we use an empirical framework for EMD-based dictionary creation and learning, inspired by traditional dictionary learning methods, in which the EMD-based dictionary is learned from the multi-channel EEG data being analyzed for automatic seizure detection. We present the algorithm for dictionary creation and learning, whose purpose is to learn dictionaries with a small number of atoms. Using training signals belonging to seizure and non-seizure classes, an initial dictionary, termed as the raw dictionary, is formed. The atoms of the raw dictionary are composed of intrinsic mode functions obtained after decomposition of the training signals using the empirical mode decomposition algorithm. The raw dictionary is then trained using a learning algorithm, resulting in a substantial decrease in the number of atoms in the trained dictionary. The trained dictionary is then used for automatic seizure detection, such that coefficients of orthogonal projections of test signals against the trained dictionary form the features used for classification of test signals into seizure and non-seizure classes. Thus no hand-engineered features have to be extracted from the data as in traditional seizure detection approaches. Main results: The performance of the proposed approach is validated using the CHB-MIT benchmark database, and averaged accuracy, sensitivity and specificity values of 92.9%, 94.3% and 91.5%, respectively, are obtained using support vector machine classifier and five-fold cross-validation method. These results are compared with other approaches using the same database, and the suitability of the approach for seizure detection in long-term multi-channel EEG recordings is discussed. Significance: The proposed approach describes a computationally efficient method for automatic seizure detection in long-term multi-channel EEG recordings. The method does not rely on hand-engineered features, as are required in traditional approaches. Furthermore, the approach is suitable for scenarios where the dictionary once formed and trained can be used for automatic seizure detection of newly recorded data, making the approach suitable for long-term multi-channel EEG recordings. © 2018 IOP Publishing Ltd.
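
    The classification stage described above (coefficients of orthogonal projections of test signals against the trained dictionary, fed to a support vector machine with five-fold cross-validation) can be sketched as follows. The arrays epochs, emd_dictionary and labels are hypothetical placeholders, and this is not the authors' code.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def projection_features(signals, dictionary):
          """Features = coefficients of the orthogonal (least-squares) projection of
          each signal onto the span of the trained dictionary atoms.
          signals: (n_epochs, n_samples); dictionary: (n_samples, n_atoms)."""
          coeffs, *_ = np.linalg.lstsq(dictionary, signals.T, rcond=None)
          return coeffs.T                          # one row of coefficients per epoch

      # Hypothetical usage with placeholder arrays:
      # X = projection_features(epochs, emd_dictionary)
      # scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)  # five-fold CV
      # print(scores.mean())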

  20. Stemming Malay Text and Its Application in Automatic Text Categorization

    NASA Astrophysics Data System (ADS)

    Yasukawa, Michiko; Lim, Hui Tian; Yokoo, Hidetoshi

    In the Malay language there are no conjugations or declensions, and affixes have important grammatical functions. In Malay, the same word may function as a noun, an adjective, an adverb, or a verb, depending on its position in the sentence. Although simple root words are used extensively in informal conversation, it is essential to use the precise words in formal speech or written texts. In Malay, derivative words are used to make sentences clear. Derivation is achieved mainly by the use of affixes. There are approximately a hundred possible derivative forms of a root word in the written language of educated Malay speakers. Therefore, the composition of Malay words may be complicated. Although there are several types of stemming algorithms available for text processing in English and some other languages, they cannot be used to overcome the difficulties of Malay word stemming. Stemming is the process of reducing various words to their root forms in order to improve the effectiveness of text processing in information systems. It is essential to avoid both over-stemming and under-stemming errors. We have developed a new Malay stemmer (stemming algorithm) for removing inflectional and derivational affixes. Our stemmer uses a set of affix rules and two types of dictionaries: a root-word dictionary and a derivative-word dictionary. The use of the set of rules is aimed at reducing the occurrence of under-stemming errors, while that of the dictionaries is believed to reduce the occurrence of over-stemming errors. We performed an experiment to evaluate the application of our stemmer in text mining software. For the experiment, the text data used were actual web pages collected from the World Wide Web to demonstrate the effectiveness of our Malay stemming algorithm. The experimental results showed that our stemmer can effectively increase the precision of the extracted Boolean expressions for text categorization.
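
    A toy sketch of the two-resource design described above: strip candidate affixes by rule, but accept a stripped form only if it appears in the root-word dictionary, and bypass stripping entirely when the surface form is already attested. The affix lists and dictionary entries here are tiny illustrative placeholders, not the authors' rule set.

      # Illustrative affix lists; the real stemmer uses a much larger rule set.
      PREFIXES = ("mem", "men", "ber", "ter", "di", "ke", "pe", "me")
      SUFFIXES = ("kan", "an", "i", "lah", "nya")

      def stem(word, root_dict, derivative_dict):
          """Rule-based affix stripping guarded by two dictionaries.
          derivative_dict maps known derived forms to their roots (illustrative)."""
          if word in derivative_dict or word in root_dict:
              return derivative_dict.get(word, word)    # attested form: avoid over-stemming
          for pre in ("",) + PREFIXES:
              for suf in ("",) + SUFFIXES:
                  candidate = word
                  if pre and candidate.startswith(pre):
                      candidate = candidate[len(pre):]
                  if suf and candidate.endswith(suf):
                      candidate = candidate[:-len(suf)]
                  if candidate in root_dict:            # accept only attested roots
                      return candidate
          return word                                   # no rule matched: leave unchanged

      # Example with hypothetical entries:
      # stem("berjalan", root_dict={"jalan"}, derivative_dict={})  ->  "jalan"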

  1. A Participatory Research Approach to develop an Arabic Symbol Dictionary.

    PubMed

    Draffan, E A; Kadous, Amatullah; Idris, Amal; Banes, David; Zeinoun, Nadine; Wald, Mike; Halabi, Nawar

    2015-01-01

    The purpose of the Arabic Symbol Dictionary research discussed in this paper is to provide a resource of culturally, environmentally and linguistically suitable symbols to aid communication and literacy skills. A participatory approach, using online social media and a bespoke symbol management system, has been established to enhance the process of matching a user-based Arabic and English core vocabulary with appropriate imagery. Participants, including AAC users, their families, carers, teachers and therapists, have been involved in the research from the outset, collating the vocabularies, debating cultural nuances for symbols and critiquing the design of technologies for selection procedures. The positive reaction of those who have voted on the symbols, along with requests for early use, has justified the iterative nature of the methodologies used for this part of the project. However, constant re-evaluation will be necessary, and in-depth analysis of all the data received has yet to be completed.

  2. IAA Space Terminological Multilingual Data Bank Towards an On- Line Dictionary with Definitions in French and in English

    NASA Astrophysics Data System (ADS)

    Bensaid, R.

    2002-01-01

    It has been emphasized in previous papers that the bilingual "basic list" of the IAA multilingual terminological data bank (MTDB) needed improvement before beginning work on definitions. In the first part of this communication, we report on the work (corrections and additions) done to improve the scope of the "basic list". This work has yet to be done by the coordinators for the other twelve languages covered by the IAA MTDB. In the second part, following the decision of the IAA MTDB committee to complete the MTDB with definitions in French and in English, we describe the methodology adopted and the problems encountered in elaborating a mock-up of a space dictionary, including, as a first step, definitions in English and in French of the English terms and expressions beginning with the letter "A" in the basic list.

  3. Grammar Coding in the "Oxford Advanced Learner's Dictionary of Current English."

    ERIC Educational Resources Information Center

    Wekker, Herman

    1992-01-01

    Focuses on the revised system of grammar coding for verbs in the fourth edition of the "Oxford Advanced Learner's Dictionary of Current English" (OALD4), comparing it with two other similar dictionaries. The OALD4 is found to be more favorable on many criteria than the other comparable dictionaries. (16 references) (VWL)

  4. A Study on the Use of Mobile Dictionaries in Vocabulary Teaching

    ERIC Educational Resources Information Center

    Aslan, Erdinç

    2016-01-01

    In recent years, rapid developments in technology have placed books and notebooks into the mobile phones and tablets and also the dictionaries into these small boxes. Giant dictionaries, which we once barely managed to carry, have been replaced by mobile dictionaries through which we can reach any words we want with only few touches. Mobile…

  5. Letters to a Dictionary: Competing Views of Language in the Reception of "Webster's Third New International Dictionary"

    ERIC Educational Resources Information Center

    Bello, Anne Pence

    2013-01-01

    The publication of "Webster's Third New International Dictionary" in September 1961 set off a national controversy about dictionaries and language that ultimately included issues related to linguistics and English education. The negative reviews published in the press about the "Third" have shaped beliefs about the nature of…

  6. The Efficacy of Dictionary Use while Reading for Learning New Words

    ERIC Educational Resources Information Center

    Hamilton, Harley

    2012-01-01

    The researcher investigated the use of three types of dictionaries while reading by high school students with severe to profound hearing loss. The objective of the study was to determine the effectiveness of each type of dictionary for acquiring the meanings of unknown vocabulary in text. The three types of dictionaries were (a) an online…

  7. A Selected Bibliography of Dictionaries. General Information Series, No. 9. Indochinese Refugee Education Guides. Revised.

    ERIC Educational Resources Information Center

    Center for Applied Linguistics, Arlington, VA.

    The purpose of this bulletin is to provide the American teacher or sponsor with information on the use, limitations and availability of dictionaries that can be used by Indochinese refugees. The introductory material contains descriptions of both monolingual and bilingual dictionaries, a discussion of the inadequacies of bilingual dictionaries in…

  8. Dictionaries Can Help Writing--If Students Know How To Use Them.

    ERIC Educational Resources Information Center

    Jacobs, George M.

    A study investigated whether instruction in how to use a dictionary led to improved second language performance and greater dictionary use among English majors (N=54) in a reading and writing course at a Thai university. One of three participating classes was instructed in the use of a monolingual learner's dictionary. A passage correction test…

  9. Dictionary-Based Tensor Canonical Polyadic Decomposition

    NASA Astrophysics Data System (ADS)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  10. Translation lexicon acquisition from bilingual dictionaries

    NASA Astrophysics Data System (ADS)

    Doermann, David S.; Ma, Huanfeng; Karagol-Ayan, Burcu; Oard, Douglas W.

    2001-12-01

    Bilingual dictionaries hold great potential as a source of lexical resources for training automated systems for optical character recognition, machine translation and cross-language information retrieval. In this work we describe a system for extracting term lexicons from printed copies of bilingual dictionaries. We describe our approach to page and definition segmentation and entry parsing. We have used the approach to parse a number of dictionaries, and we demonstrate the results by using a French-English dictionary to generate a translation lexicon and a corpus of English queries applied to French documents to evaluate cross-language IR.

  11. Data dictionaries in information systems - Standards, usage, and application

    NASA Technical Reports Server (NTRS)

    Johnson, Margaret

    1990-01-01

    An overview of data dictionary systems and the role of standardization in the interchange of data dictionaries is presented. The development of the data dictionary for the Planetary Data System is cited as an example. The data element dictionary (DED), which is the repository of the definitions of the vocabulary utilized in an information system, is an important part of this service. A DED provides the definitions of the fields of the data set as well as the data elements of the catalog system. Finally, international efforts such as the Consultative Committee on Space Data Systems and other committees set up to provide standard recommendations on the usage and structure of data dictionaries in the international space science community are discussed.

  12. Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation

    PubMed Central

    Grossi, Giuliano; Lin, Jianyi

    2017-01-01

    In the sparse representation model, the design of overcomplete dictionaries plays a key role in the effectiveness and applicability of the model in different domains. Recent research has produced several dictionary learning approaches, and it has been shown that dictionaries learned from data examples significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists of adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best-fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists of repeating two stages: the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts Orthogonal Procrustes analysis to update the dictionary atoms, suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well-known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD’s robustness and wide applicability. PMID:28103283
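
    The Orthogonal Procrustes step at the heart of R-SVD has a well-known closed-form solution: the orthogonal matrix Q minimizing ||A - BQ||_F is Q = UV^T, where USV^T is the singular value decomposition of B^T A. A minimal sketch of this generic solution (without the grouped-atom bookkeeping of R-SVD):

      import numpy as np

      def orthogonal_procrustes(A, B):
          """Return the orthogonal Q minimizing ||A - B @ Q||_F
          (Q = U V^T, where U S V^T is the SVD of B^T A)."""
          U, _, Vt = np.linalg.svd(B.T @ A)
          return U @ Vt

      # Sanity check: a random orthogonal rotation of B is recovered exactly.
      rng = np.random.default_rng(0)
      B = rng.standard_normal((50, 5))
      Q_true, _ = np.linalg.qr(rng.standard_normal((5, 5)))
      assert np.allclose(orthogonal_procrustes(B @ Q_true, B), Q_true)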

  13. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR THE GENERATION AND OPERATION OF DATA DICTIONARIES (UA-D-4.0)

    EPA Science Inventory

    The purpose of this SOP is to provide a standard method for the writing of data dictionaries. This procedure applies to the dictionaries used during the Arizona NHEXAS project and the Border study. Keywords: guidelines; data dictionaries.

    The U.S.-Mexico Border Program is spon...

  14. Multimodal Task-Driven Dictionary Learning for Image Classification

    DTIC Science & Technology

    2015-12-18

    Bahrampour, Soheil; Nasrabadi, Nasser M.; ...; Ray, Asok; Jenkins, W. Kenneth. Abstract (excerpt): Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms. While these methods are ...

  15. A Study of the Relationship between Type of Dictionary Used and Lexical Proficiency in Writings of Iranian EFL Students

    ERIC Educational Resources Information Center

    Vahdany, Fereidoon; Abdollahzadeh, Milad; Gholami, Shokoufeh; Ghanipoor, Mahmood

    2014-01-01

    This study aimed at investigating the relationship between types of dictionaries used and lexical proficiency in writing. Eighty TOEFL students took part in responding to two Questionnaires collecting information about their dictionary type preferences and habits of dictionary use, along with an interview for further in-depth responses. They were…

  16. English-Chinese Cross-Language IR Using Bilingual Dictionaries

    DTIC Science & Technology

    2006-01-01

    Chen, Aitao; Jiang, Hailing; Gey, Fredric (School of Information Management). Text excerpts: ... specialized dictionaries together contain about two million entries [6]. (Monolingual experiment) The Chinese documents and the Chinese translations of ... monolingual performance. The main performance-limiting factor is the limited coverage of the dictionary used in query translation. Some of the key con...

  17. Accelerating the reconstruction of magnetic resonance imaging by three-dimensional dual-dictionary learning using CUDA.

    PubMed

    Li, Jiansen; Sun, Jianqi; Song, Ying; Xu, Yanran; Zhao, Jun

    2014-01-01

    An effective way to improve the data acquisition speed of magnetic resonance imaging (MRI) is to use under-sampled k-space data, and dictionary learning can be used to maintain the reconstruction quality. A three-dimensional dictionary trains its atoms in the form of blocks, which can exploit the spatial correlation among slices. The dual-dictionary learning method includes a low-resolution dictionary and a high-resolution dictionary, used for sparse coding and image updating respectively. However, the amount of data is huge for three-dimensional reconstruction, especially when the number of slices is large, so the procedure is time-consuming. In this paper, we first utilize the NVIDIA Corporation's compute unified device architecture (CUDA) programming model to design parallel algorithms on the graphics processing unit (GPU) to accelerate the reconstruction procedure. The main optimizations operate on the dictionary learning algorithm and the image updating part, namely the orthogonal matching pursuit (OMP) algorithm and the k-singular value decomposition (K-SVD) algorithm. We then develop another version of the CUDA code with algorithmic optimization. Experimental results show that a speedup of more than 324 times is achieved compared with the CPU-only code when the number of MRI slices is 24.

  18. Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis.

    PubMed

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-09-01

    Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis by accounting for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications to synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.

  19. Speech and Language and Language Translation (SALT)

    DTIC Science & Technology

    2012-12-01

    Text excerpts: Resources are classified as: Parallel Text, Dictionaries, Monolingual Text, Other. Dictionaries are further classified as: Text: can download entire ... not clear how many are translated ... http://www.redsea-online.com/modules.php?name=dictionary (Monolingual Text) ... An Crubadan web ... attached to a following word. A program could be written to detach the character د from unknown words, when the remaining word matches a dictionary ...

  20. Automatic vs. manual curation of a multi-source chemical dictionary: the impact on text mining.

    PubMed

    Hettne, Kristina M; Williams, Antony J; van Mulligen, Erik M; Kleinjans, Jos; Tkachenko, Valery; Kors, Jan A

    2010-03-23

    Previously, we developed a combined dictionary dubbed Chemlist for the identification of small molecules and drugs in text based on a number of publicly available databases and tested it on an annotated corpus. To achieve an acceptable recall and precision we used a number of automatic and semi-automatic processing steps together with disambiguation rules. However, it remained to be investigated what impact an extensive manual curation of a multi-source chemical dictionary would have on chemical term identification in text. ChemSpider is a chemical database that has undergone extensive manual curation aimed at establishing valid chemical name-to-structure relationships. We acquired the component of ChemSpider containing only manually curated names and synonyms. Rule-based term filtering, semi-automatic manual curation, and disambiguation rules were applied. We tested the dictionary from ChemSpider on an annotated corpus and compared the results with those for the Chemlist dictionary. The ChemSpider dictionary of ca. 80 k names was only one-third to one-quarter the size of Chemlist, which contains around 300 k names. The ChemSpider dictionary had a precision of 0.43 and a recall of 0.19 before the application of filtering and disambiguation and a precision of 0.87 and a recall of 0.19 after filtering and disambiguation. The Chemlist dictionary had a precision of 0.20 and a recall of 0.47 before the application of filtering and disambiguation and a precision of 0.67 and a recall of 0.40 after filtering and disambiguation. We conclude the following: (1) The ChemSpider dictionary achieved the best precision but the Chemlist dictionary had a higher recall and the best F-score; (2) Rule-based filtering and disambiguation is necessary to achieve a high precision for both the automatically generated and the manually curated dictionary. ChemSpider is available as a web service at http://www.chemspider.com/ and the Chemlist dictionary is freely available as an XML file in Simple Knowledge Organization System format on the web at http://www.biosemantics.org/chemlist.
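
    The F-score comparison stated in the conclusion can be checked directly from the reported precision and recall figures using the standard definition F1 = 2PR/(P + R):

      def f1(precision, recall):
          return 2 * precision * recall / (precision + recall)

      # After filtering and disambiguation, as reported above:
      print(f"{f1(0.87, 0.19):.2f}")   # ChemSpider dictionary -> 0.31
      print(f"{f1(0.67, 0.40):.2f}")   # Chemlist dictionary   -> 0.50 (best F-score)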

  1. Automatic vs. manual curation of a multi-source chemical dictionary: the impact on text mining

    PubMed Central

    2010-01-01

    Background Previously, we developed a combined dictionary dubbed Chemlist for the identification of small molecules and drugs in text based on a number of publicly available databases and tested it on an annotated corpus. To achieve an acceptable recall and precision we used a number of automatic and semi-automatic processing steps together with disambiguation rules. However, it remained to be investigated what impact an extensive manual curation of a multi-source chemical dictionary would have on chemical term identification in text. ChemSpider is a chemical database that has undergone extensive manual curation aimed at establishing valid chemical name-to-structure relationships. Results We acquired the component of ChemSpider containing only manually curated names and synonyms. Rule-based term filtering, semi-automatic manual curation, and disambiguation rules were applied. We tested the dictionary from ChemSpider on an annotated corpus and compared the results with those for the Chemlist dictionary. The ChemSpider dictionary of ca. 80 k names was only one-third to one-quarter the size of Chemlist, which contains around 300 k names. The ChemSpider dictionary had a precision of 0.43 and a recall of 0.19 before the application of filtering and disambiguation and a precision of 0.87 and a recall of 0.19 after filtering and disambiguation. The Chemlist dictionary had a precision of 0.20 and a recall of 0.47 before the application of filtering and disambiguation and a precision of 0.67 and a recall of 0.40 after filtering and disambiguation. Conclusions We conclude the following: (1) The ChemSpider dictionary achieved the best precision but the Chemlist dictionary had a higher recall and the best F-score; (2) Rule-based filtering and disambiguation is necessary to achieve a high precision for both the automatically generated and the manually curated dictionary. ChemSpider is available as a web service at http://www.chemspider.com/ and the Chemlist dictionary is freely available as an XML file in Simple Knowledge Organization System format on the web at http://www.biosemantics.org/chemlist. PMID:20331846

  2. Relaxations to Sparse Optimization Problems and Applications

    NASA Astrophysics Data System (ADS)

    Skau, Erik West

    Parsimony is a fundamental property that is applied to many characteristics in a variety of fields. Of particular interest are optimization problems that apply rank, dimensionality, or support in a parsimonious manner. In this thesis we study some optimization problems and their relaxations, and focus on properties and qualities of the solutions of these problems. The Gramian tensor decomposition problem attempts to decompose a symmetric tensor as a sum of rank one tensors. We approach the Gramian tensor decomposition problem with a relaxation to a semidefinite program. We study conditions which ensure that the solution of the relaxed semidefinite problem gives the minimal Gramian rank decomposition. Sparse representations with learned dictionaries are one of the leading image modeling techniques for image restoration. When learning these dictionaries from a set of training images, the sparsity parameter of the dictionary learning algorithm strongly influences the content of the dictionary atoms. We describe geometrically the content of trained dictionaries and how it changes with the sparsity parameter. We use statistical analysis to characterize how the different content is used in sparse representations. Finally, a method to control the structure of the dictionaries is demonstrated, allowing us to learn a dictionary which can later be tailored for specific applications. Variations of dictionary learning can be broadly applied to a variety of applications. We explore a pansharpening problem with a triple factorization variant of coupled dictionary learning. Another application of dictionary learning is computer vision. Computer vision relies heavily on object detection, which we explore with a hierarchical convolutional dictionary learning model. Data fusion of disparate modalities is a growing topic of interest. We do a case study to demonstrate the benefit of using social media data with satellite imagery to estimate hazard extents. In this case study analysis we apply a maximum entropy model, guided by the social media data, to estimate the flooded regions during a 2013 flood in Boulder, CO and show that the results are comparable to those obtained using expert information.

  3. Developing a distributed data dictionary service

    NASA Technical Reports Server (NTRS)

    U'Ren, J.

    2000-01-01

    This paper will explore the use of the Lightweight Directory Access Protocol (LDAP) using the ISO 11179 Data Dictionary Schema as a mechanism for standardizing the structure and communication links between data dictionaries.

  4. Multiple Sparse Representations Classification

    PubMed Central

    Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik

    2015-01-01

    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods. In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and sparsity level. PMID:26177106
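
    The baseline SRC decision rule that mSRC generalizes (sparse-code a patch over each class-specific dictionary and assign the class whose dictionary gives the minimum residual energy) can be sketched as follows; the per-class dictionaries and the sparsity level are placeholders.

      import numpy as np
      from sklearn.linear_model import orthogonal_mp

      def src_classify(patch, class_dictionaries, sparsity=5):
          """Assign the patch to the class whose dictionary yields the smallest
          sparse-reconstruction residual energy (conventional SRC rule)."""
          residuals = []
          for D in class_dictionaries:                       # one trained dictionary per class
              coef = orthogonal_mp(D, patch, n_nonzero_coefs=sparsity)
              residuals.append(float(np.sum((patch - D @ coef) ** 2)))
          return int(np.argmin(residuals))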

  5. A dictionary server for supplying context sensitive medical knowledge.

    PubMed

    Ruan, W; Bürkle, T; Dudeck, J

    2000-01-01

    The Giessen Data Dictionary Server (GDDS), developed at Giessen University Hospital, integrates clinical systems with on-line, context sensitive medical knowledge to help with making medical decisions. By "context" we mean the clinical information that is being presented at the moment the information need is occurring. The dictionary server makes use of a semantic network supported by a medical data dictionary to link terms from clinical applications to their proper information sources. It has been designed to analyze the network structure itself instead of knowing the layout of the semantic net in advance. This enables us to map appropriate information sources to various clinical applications, such as nursing documentation, drug prescription and cancer follow up systems. This paper describes the function of the dictionary server and shows how the knowledge stored in the semantic network is used in the dictionary service.

  6. MO-G-17A-05: PET Image Deblurring Using Adaptive Dictionary Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valiollahzadeh, S; Clark, J; Mawlawi, O

    2014-06-15

    Purpose: The aim of this work is to deblur PET images while suppressing Poisson noise effects using adaptive dictionary learning (DL) techniques. Methods: The model that relates a blurred and noisy PET image to the desired image is described as a linear transform y=Hm+n where m is the desired image, H is a blur kernel, n is Poisson noise and y is the blurred image. The approach we follow to recover m involves the sparse representation of y over a learned dictionary, since the image has lots of repeated patterns, edges, textures and smooth regions. The recovery is based on an optimization of a cost function having four major terms: adaptive dictionary learning term, sparsity term, regularization term, and MLEM Poisson noise estimation term. The optimization is solved by a variable splitting method that introduces additional variables. We simulated a 128×128 Hoffman brain PET image (baseline) with varying kernel types and sizes (Gaussian 9×9, σ=5.4mm; Uniform 5×5, σ=2.9mm) with additive Poisson noise (Blurred). Image recovery was performed once when the kernel type was included in the model optimization and once with the model blinded to kernel type. The recovered image was compared to the baseline as well as to another recovery algorithm, PIDSPLIT+ (Setzer et al.), by calculating PSNR (Peak SNR) and normalized average differences in pixel intensities (NADPI) of line profiles across the images. Results: For known kernel types, the PSNR of the Gaussian (Uniform) was 28.73 (25.1) and 25.18 (23.4) for DL and PIDSPLIT+ respectively. For blinded deblurring the PSNRs were 25.32 and 22.86 for DL and PIDSPLIT+ respectively. NADPI between baseline and DL, and baseline and blurred, for the Gaussian kernel were 2.5 and 10.8, respectively. Conclusion: PET image deblurring using dictionary learning seems to be a good approach to restore image resolution in the presence of Poisson noise. GE Health Care.

  7. Sparsity and Nullity: Paradigm for Analysis Dictionary Learning

    DTIC Science & Technology

    2016-08-09

    Abstract (excerpt): Sparse models in dictionary learning have been successfully applied in a wide variety of machine learning and ... we investigate the relation between the SNS problem and the analysis dictionary learning problem, and show that the SNS problem plays a central role ... and may be utilized to solve dictionary learning problems.

  8. Parsing and Tagging of Bilingual Dictionary

    DTIC Science & Technology

    2003-09-01

    Report numbers: LAMP-TR-106, CAR-TR-991, CS-TR-4529, UMIACS-TR-2003-97. Ma, Huanfeng; Karagol-Ayan, Burcu; David ... Abstract (excerpt): ... dictionaries hold great potential as a source of lexical resources for training and testing automated systems for optical character recognition, machine translation, and cross-language information retrieval. In this paper, we describe a system for extracting term lexicons from printed bilingual dictionaries.

  9. Readers' opinions of romantic poetry are consistent with emotional measures based on the Dictionary of Affect in Language.

    PubMed

    Whissell, Cynthia

    2003-06-01

    A principal components analysis of 68 volunteers' subjective ratings of 20 excerpts of Romantic poetry and of Dictionary of Affect scores for the same excerpts produced four components representing Pleasantness, Activation, Romanticism, and Nature. Dictionary measures and subjective ratings of the same constructs loaded on the same factor. Results are interpreted as providing construct validity for the Dictionary of Affect.

  10. Fast dictionary-based reconstruction for diffusion spectrum imaging.

    PubMed

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar

    2013-11-01

    Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm.
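
    The second reconstruction technique described above (pseudoinverse with Tikhonov regularization with respect to a dictionary) reduces to a ridge-regularized least-squares fit per imaging voxel; a minimal sketch is given below, where D stands for the dictionary of training pdfs as seen through the undersampled acquisition and the regularization weight is an illustrative placeholder.

      import numpy as np

      def tikhonov_reconstruct(y, D, lam=0.1):
          """Reconstruct a signal as D @ c, where c solves the ridge problem
          min_c ||y - D c||^2 + lam ||c||^2 (closed-form Tikhonov solution)."""
          n_atoms = D.shape[1]
          c = np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T @ y)
          return D @ c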

  11. Weakly supervised visual dictionary learning by harnessing image attributes.

    PubMed

    Gao, Yue; Ji, Rongrong; Liu, Wei; Dai, Qionghai; Hua, Gang

    2014-12-01

    Bag-of-features (BoFs) representation has been extensively applied to deal with various computer vision applications. To extract a discriminative and descriptive BoF, one important step is to learn a good dictionary to minimize the quantization loss between local features and codewords. While most existing visual dictionary learning approaches are engaged with unsupervised feature quantization, the latest trend has turned to supervised learning by harnessing the semantic labels of images or regions. However, such labels are typically too expensive to acquire, which restricts the scalability of supervised dictionary learning approaches. In this paper, we propose to leverage image attributes to weakly supervise the dictionary learning procedure without requiring any actual labels. As a key contribution, our approach establishes a generative hidden Markov random field (HMRF), which models the quantized codewords as the observed states and the image attributes as the hidden states, respectively. Dictionary learning is then performed by supervised grouping of the observed states, where the supervised information stems from the hidden states of the HMRF. In such a way, the proposed dictionary learning approach incorporates the image attributes to learn a semantic-preserving BoF representation without any genuine supervision. Experiments in large-scale image retrieval and classification tasks corroborate that our approach significantly outperforms the state-of-the-art unsupervised dictionary learning approaches.

  12. Password-only authenticated three-party key exchange proven secure against insider dictionary attacks.

    PubMed

    Nam, Junghyun; Choo, Kim-Kwang Raymond; Paik, Juryon; Won, Dongho

    2014-01-01

    While a number of protocols for password-only authenticated key exchange (PAKE) in the 3-party setting have been proposed, it still remains a challenging task to prove the security of a 3-party PAKE protocol against insider dictionary attacks. To the best of our knowledge, there is no 3-party PAKE protocol that carries a formal proof, or even definition, of security against insider dictionary attacks. In this paper, we present the first 3-party PAKE protocol proven secure against both online and offline dictionary attacks as well as insider and outsider dictionary attacks. Our construct can be viewed as a protocol compiler that transforms any 2-party PAKE protocol into a 3-party PAKE protocol with 2 additional rounds of communication. We also present a simple and intuitive approach of formally modelling dictionary attacks in the password-only 3-party setting, which significantly reduces the complexity of proving the security of 3-party PAKE protocols against dictionary attacks. In addition, we investigate the security of the well-known 3-party PAKE protocol, called GPAKE, due to Abdalla et al. (2005, 2006), and demonstrate that the security of GPAKE against online dictionary attacks depends heavily on the composition of its two building blocks, namely a 2-party PAKE protocol and a 3-party key distribution protocol.

  13. Adaptive Greedy Dictionary Selection for Web Media Summarization.

    PubMed

    Cong, Yang; Liu, Ji; Sun, Gan; You, Quanzeng; Li, Yuncheng; Luo, Jiebo

    2017-01-01

    Initializing an effective dictionary is an indispensable step for sparse representation. In this paper, we focus on the dictionary selection problem with the objective to select a compact subset of basis from original training data instead of learning a new dictionary matrix as dictionary learning models do. We first design a new dictionary selection model via the l2,0 norm. For model optimization, we propose two methods: one is the standard forward-backward greedy algorithm, which is not suitable for large-scale problems; the other is based on the gradient cues at each forward iteration and speeds up the process dramatically. In comparison with the state-of-the-art dictionary selection models, our model is not only more effective and efficient, but can also control the sparsity. To evaluate the performance of our new model, we select two practical web media summarization problems: 1) we build a new data set consisting of around 500 users, 3000 albums, and 1 million images, and achieve effective assisted albuming based on our model and 2) by formulating the video summarization problem as a dictionary selection issue, we employ our model to extract keyframes from a video sequence in a more flexible way. Generally, our model outperforms the state-of-the-art methods in both these two tasks.
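
    The forward greedy flavour of dictionary selection described above can be illustrated as follows: at each step, add the training column that most reduces the least-squares reconstruction error of the whole data set over the selected subset. This is a generic forward-selection sketch, not the l2,0 model or the gradient-accelerated version; the exhaustive inner loop is exactly what the paper's speedup avoids.

      import numpy as np

      def greedy_dictionary_selection(Y, n_select):
          """Forward greedy selection: repeatedly add the training column that most
          reduces the least-squares reconstruction error of the whole data set."""
          selected, remaining = [], list(range(Y.shape[1]))
          for _ in range(n_select):
              best_j, best_err = None, np.inf
              for j in remaining:
                  D = Y[:, selected + [j]]
                  coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
                  err = np.linalg.norm(Y - D @ coef)
                  if err < best_err:
                      best_j, best_err = j, err
              selected.append(best_j)
              remaining.remove(best_j)
          return selected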

  14. Multi-level discriminative dictionary learning with application to large scale image classification.

    PubMed

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of task (such as discrimination for classification task) into dictionary learning is effective for improving the accuracy. However, the traditional supervised dictionary learning methods suffer from high computation complexity when dealing with large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.

  15. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466

  16. Extent and Use of Indigenous Vocabulary in Guatemalan Spanish.

    ERIC Educational Resources Information Center

    Scavnicky, Gary Eugene A.

    This paper examines the actual content and use of Indian vocabulary in standard Guatemalan Spanish, as opposed to the numerous entries found in antiquated dictionaries. Over 600 Indian words were extracted from contemporary Guatemalan literature and Lisandro Sandoval's "Semantica guatemalense." Interviews were arranged with middle and…

  17. Stewardship

    ERIC Educational Resources Information Center

    Canada, Benjamin O.

    2005-01-01

    In this article, the author found himself particularly drawn to a book he received in the mail--"Stewardship: Choosing Service Over Self-Interest" by Peter Block. Although the dictionary definition of steward is "one who manages another's property, finances or other affairs," from his vantage point as the first African-American superintendent in…

  18. Natural-Annotation-based Unsupervised Construction of Korean-Chinese Domain Dictionary

    NASA Astrophysics Data System (ADS)

    Liu, Wuying; Wang, Lin

    2018-03-01

    Large-scale bilingual parallel resources are significant for statistical learning and deep learning in natural language processing. This paper addresses the automatic construction of a Korean-Chinese domain dictionary and presents a novel unsupervised construction method based on natural annotations in the raw corpus. We first extract all Korean-Chinese word pairs from Korean texts according to natural annotations, then transform the traditional Chinese characters into simplified ones, and finally distill out a bilingual domain dictionary after retrieving the simplified Chinese words in an additional Chinese domain dictionary. The experimental results show that our method can automatically build multiple Korean-Chinese domain dictionaries efficiently.
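
    A minimal sketch of the natural-annotation idea as described above: harvest Korean word / parenthesized Chinese pairs from raw text, map traditional characters to simplified ones, and keep only pairs whose Chinese side appears in an existing Chinese domain dictionary. The regular expression, the tiny character map, and the example dictionary are illustrative assumptions, not the authors' pipeline.

      import re

      # Toy traditional-to-simplified mapping; a real system would use a full table.
      TRAD_TO_SIMP = str.maketrans({"辭": "辞", "學": "学", "語": "语"})

      # A Korean word immediately followed by a parenthesized Chinese annotation.
      PAIR_RE = re.compile(r"([\uAC00-\uD7A3]+)\s*\(([\u4E00-\u9FFF]+)\)")

      def extract_pairs(text, chinese_domain_dict):
          """Return Korean-Chinese pairs whose simplified Chinese side is attested
          in an existing Chinese domain dictionary."""
          pairs = []
          for korean, chinese in PAIR_RE.findall(text):
              simplified = chinese.translate(TRAD_TO_SIMP)
              if simplified in chinese_domain_dict:
                  pairs.append((korean, simplified))
          return pairs

      # Example with a hypothetical sentence and dictionary:
      # extract_pairs("사전(辭典)을 참조하라.", {"辞典"})  ->  [("사전", "辞典")]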

  19. Building a dictionary for genomes: Identification of presumptive regulatory sites by statistical analysis

    PubMed Central

    Bussemaker, Harmen J.; Li, Hao; Siggia, Eric D.

    2000-01-01

    The availability of complete genome sequences and mRNA expression data for all genes creates new opportunities and challenges for identifying DNA sequence motifs that control gene expression. An algorithm, “MobyDick,” is presented that decomposes a set of DNA sequences into the most probable dictionary of motifs or words. This method is applicable to any set of DNA sequences: for example, all upstream regions in a genome or all genes expressed under certain conditions. Identification of words is based on a probabilistic segmentation model in which the significance of longer words is deduced from the frequency of shorter ones of various lengths, eliminating the need for a separate set of reference data to define probabilities. We have built a dictionary with 1,200 words for the 6,000 upstream regulatory regions in the yeast genome; the 500 most significant words (some with as few as 10 copies in all of the upstream regions) match 114 of 443 experimentally determined sites (a significance level of 18 standard deviations). When analyzing all of the genes up-regulated during sporulation as a group, we find many motifs in addition to the few previously identified by analyzing the expression subclusters individually. Applying MobyDick to the genes derepressed when the general repressor Tup1 is deleted, we find known as well as putative binding sites for its regulatory partners. PMID:10944202
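
    MobyDick builds a full probabilistic segmentation model, but the core intuition (judging the significance of a longer word from the frequencies of shorter ones) can be illustrated with the standard Markov-style expectation, in which the expected count of a word is estimated from the counts of its two overlapping shorter words and their common core. This is a generic motif-statistics sketch, not the MobyDick algorithm; the example word and sequences are hypothetical.

      from collections import Counter

      def kmer_counts(seqs, k):
          counts = Counter()
          for s in seqs:
              for i in range(len(s) - k + 1):
                  counts[s[i:i + k]] += 1
          return counts

      def over_representation(word, seqs):
          """Observed/expected count ratio for a word of length >= 3, with the
          expectation N(prefix) * N(suffix) / N(core) built from shorter words."""
          k = len(word)
          observed = kmer_counts(seqs, k)[word]
          shorter = kmer_counts(seqs, k - 1)
          core = kmer_counts(seqs, k - 2)
          expected = shorter[word[:-1]] * shorter[word[1:]] / max(core[word[1:-1]], 1)
          return observed / expected if expected else float("inf")

      # Example with hypothetical upstream sequences:
      # over_representation("GATAAG", upstream_seqs)  # >> 1 suggests a candidate motif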

  20. The Pocket Dictionary: A Textbook for Spelling.

    ERIC Educational Resources Information Center

    Doggett, Maran

    1982-01-01

    Reports on a productive approach to secondary-school spelling instruction--one that emphasizes how and when to use the dictionary. Describes two of the many class activities that cultivate student use of the dictionary. (RL)

  1. Cheap Words: A Paperback Dictionary Roundup.

    ERIC Educational Resources Information Center

    Kister, Ken

    1979-01-01

    Surveys currently available paperback editions in three classes of dictionaries: collegiate, abridged, and pocket. A general discussion distinguishes among the classes and offers seven consumer tips, followed by an annotated listing of dictionaries now available. (SW)

  2. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.

    PubMed

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝd, and the dictionary is learned from the training data using the vector space structure of ℝd and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold that is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.

  3. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning

    PubMed Central

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝd, and the dictionary is learned from the training data using the vector space structure of ℝd and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold that is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis. PMID:24129583

  4. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    PubMed

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  5. Land cover classification in multispectral imagery using clustering of sparse approximations over learned feature dictionaries

    DOE PAGES

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...

    2014-12-09

    We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We explore a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. In this study, our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.
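
    The CoSA pipeline lends itself to a compact sketch: learn an overcomplete spectral-textural dictionary from multispectral patches, sparse-code every patch, and cluster the codes into land-cover labels. The sketch below uses scikit-learn's MiniBatchDictionaryLearning in place of the Hebbian learning rule and a random array in place of WorldView-2 imagery; all sizes, strides, and cluster counts are illustrative assumptions.

```python
# Minimal sketch of clustering of sparse approximations (CoSA): learn a
# dictionary from multispectral patches, sparse-code them, cluster the codes.
# MiniBatchDictionaryLearning stands in for the Hebbian rule of the paper and
# the random "image" stands in for WorldView-2 data; sizes are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
n_bands, ps, stride = 8, 3, 4                 # bands, patch size, sampling stride
image = rng.random((120, 120, n_bands))       # placeholder multispectral image

# Overlapping spectral-spatial patches, flattened to vectors and mean-removed.
patches = np.stack([
    image[i:i + ps, j:j + ps, :].ravel()
    for i in range(0, image.shape[0] - ps, stride)
    for j in range(0, image.shape[1] - ps, stride)
])
patches -= patches.mean(axis=1, keepdims=True)

# Learn a two-times overcomplete dictionary from the patches.
dico = MiniBatchDictionaryLearning(n_components=2 * patches.shape[1],
                                   alpha=1.0, random_state=0).fit(patches)
D = dico.components_

# Pursuit step: sparse approximation of every patch, then unsupervised
# clustering of the sparse codes to produce land-cover labels.
codes = sparse_encode(patches, D, algorithm='omp', n_nonzero_coefs=5)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(codes)
print(np.bincount(labels))                    # patches per candidate land-cover class
```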

  6. Speckle noise reduction for optical coherence tomography based on adaptive 2D dictionary

    NASA Astrophysics Data System (ADS)

    Lv, Hongli; Fu, Shujun; Zhang, Caiming; Zhai, Lin

    2018-05-01

    As a high-resolution biomedical imaging modality, optical coherence tomography (OCT) is widely used in medical sciences. However, OCT images often suffer from speckle noise, which can mask some important image information, and thus reduce the accuracy of clinical diagnosis. Taking full advantage of nonlocal self-similarity and adaptive 2D-dictionary-based sparse representation, in this work, a speckle noise reduction algorithm is proposed for despeckling OCT images. To reduce speckle noise while preserving local image features, similar nonlocal patches are first extracted from the noisy image and put into groups using a gamma-distribution-based block matching method. An adaptive 2D dictionary is then learned for each patch group. Unlike traditional vector-based sparse coding, we express each image patch by the linear combination of a few matrices. This image-to-matrix method can exploit the local correlation between pixels. Since each image patch might belong to several groups, the despeckled OCT image is finally obtained by aggregating all filtered image patches. The experimental results demonstrate the superior performance of the proposed method over other state-of-the-art despeckling methods, in terms of objective metrics and visual inspection.

  7. Rating prediction using textual reviews

    NASA Astrophysics Data System (ADS)

    NithyaKalyani, A.; Ushasukhanya, S.; Nagamalleswari, TYJ; Girija, S.

    2018-04-01

    Much of the information available today takes the form of opinions. Roughly two and a half quintillion bytes are exchanged over the Internet every day, and a large share of them consists of people's speculation and reflection on issues. Mining this raw information has become essential, and sentiment analysis refers to exactly that task. The discipline of opinion mining has received a great deal of attention in the past few years, driven by social media such as Instagram, Facebook, and Twitter. The hidden message in this web of information is useful in several fields, such as marketing, political polls, product reviews, forecasting market movements, and identifying detractors and promoters. In this work, we introduce a sentiment rating system that determines the opinion polarity of a given text or paragraph. First, we address searching, tokenization, classification, and reliable content identification. Second, we estimate the positive and negative sentiment probabilities of the given text or paragraph using a naive Bayes classifier. Finally, we use a sentiment dictionary (SD), a sentiment degree dictionary (SDD), and a negation dictionary (ND) for more accuracy, and blend all of the above factors through a rating formula to score the review.
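
    The pipeline described above can be illustrated with a toy sketch: a naive Bayes classifier supplies the positive/negative probability, and small SD, SDD, and ND lookups adjust the score before blending into a rating. The training texts, word lists, weights, and blending formula below are illustrative assumptions rather than the authors' actual resources.

```python
# Toy sketch of the rating pipeline described above: a naive Bayes polarity
# probability blended with sentiment (SD), degree (SDD) and negation (ND)
# dictionaries. All word lists, weights and the blending rule are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["great product, works well", "terrible quality, broke fast",
               "love it", "waste of money"]
train_labels = [1, 0, 1, 0]                       # 1 = positive, 0 = negative

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(train_texts), train_labels)

SD  = {"great": 1.0, "love": 1.0, "terrible": -1.0, "waste": -1.0, "broke": -0.5}
SDD = {"very": 1.5, "slightly": 0.5}              # degree modifiers
ND  = {"not", "never", "no"}                      # negation words

def rating(text):
    """Blend the naive Bayes probability with dictionary evidence into a 1-5 rating."""
    p_pos = clf.predict_proba(vec.transform([text]))[0, 1]
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    lex = 0.0
    for i, tok in enumerate(tokens):
        if tok in SD:
            score = SD[tok]
            if i > 0 and tokens[i - 1] in SDD:    # amplify or attenuate
                score *= SDD[tokens[i - 1]]
            if i > 0 and tokens[i - 1] in ND:     # flip polarity after a negation
                score *= -1.0
            lex += score
    lex = max(-1.0, min(1.0, lex / 3.0))          # squash lexicon evidence to [-1, 1]
    blended = 0.5 * p_pos + 0.5 * (lex + 1.0) / 2.0   # assumed blending formula
    return round(1 + 4 * blended, 1)              # map [0, 1] onto a 1-5 rating

print(rating("not great at all, it broke very fast"))
print(rating("love this great product"))
```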

  8. Iterative dictionary construction for compression of large DNA data sets.

    PubMed

    Kuruppu, Shanika; Beresford-Smith, Bryan; Conway, Thomas; Zobel, Justin

    2012-01-01

    Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. Our adaptation, COMRAD, of an existing disk-based method identifies exact repeated content in collections of sequences with similarities within and across the set of input sequences. COMRAD compresses the data over multiple passes, which is an expensive process, but allows COMRAD to compress large data sets within reasonable time and space. COMRAD allows for random access to individual sequences and subsequences without decompressing the whole data set. COMRAD has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes) and, even for smaller data sets, the results are competitive compared to alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.
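
    The core idea, building a dictionary of repeated content shared across a collection and re-encoding each sequence against it, can be illustrated with a toy in-memory sketch. It is not COMRAD: the multi-pass, disk-based construction, random access support, and real entropy coding are all omitted, and the k-mer length and dictionary size below are arbitrary.

```python
# Toy illustration of dictionary-based compression of a DNA collection:
# k-mers that repeat across all sequences become dictionary entries and each
# sequence is re-encoded as dictionary references plus literal bases.
# This is a didactic sketch, not COMRAD (no multiple passes, no disk-based
# construction, no random access, no entropy coding).
from collections import Counter

def build_dictionary(seqs, k=8, top=16):
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    # keep only k-mers that actually repeat, most frequent first
    return [kmer for kmer, c in counts.most_common(top) if c > 1]

def encode(seq, dictionary, k=8):
    """Greedy left-to-right encoding: ('D', index) for a hit, ('L', base) otherwise."""
    lookup = {kmer: idx for idx, kmer in enumerate(dictionary)}
    out, i = [], 0
    while i < len(seq):
        if seq[i:i + k] in lookup:
            out.append(('D', lookup[seq[i:i + k]]))   # dictionary reference
            i += k
        else:
            out.append(('L', seq[i]))                 # literal base
            i += 1
    return out

seqs = ["ACGTACGTGGGGACGTACGT", "TTACGTACGTCCACGTACGTAA"]
dictionary = build_dictionary(seqs)
print(dictionary)                     # shared repeated content across the collection
print(encode(seqs[0], dictionary))    # mixed dictionary references and literals
```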

  9. Pulmonary emphysema classification based on an improved texton learning model by sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-03-01

    In this paper, we present a texture classification method based on textons learned via sparse representation (SR), with new feature histogram maps, for the classification of emphysema. First, an overcomplete dictionary of textons is learned via K-SVD learning on the image patches of every class in the training dataset. In this stage, a high-pass filter is introduced to exclude patches in smooth areas to speed up the dictionary learning process. Second, 3D joint-SR coefficients and intensity histograms of the test images are used for characterizing regions of interest (ROIs) instead of conventional feature histograms constructed from SR coefficients of the test images over the dictionary. Classification is then performed using a classifier with distance as a histogram dissimilarity measure. Four hundred and seventy annotated ROIs extracted from 14 test subjects, including 6 paraseptal emphysema (PSE) subjects, 5 centrilobular emphysema (CLE) subjects and 3 panlobular emphysema (PLE) subjects, are used to evaluate the effectiveness and robustness of the proposed method. The proposed method is tested on 167 PSE, 240 CLE and 63 PLE ROIs consisting of mild, moderate and severe pulmonary emphysema. The accuracy of the proposed system is around 74%, 88% and 89% for PSE, CLE and PLE, respectively.

  10. Parametric dictionary learning for modeling EAP and ODF in diffusion MRI.

    PubMed

    Merlet, Sylvain; Caruyer, Emmanuel; Deriche, Rachid

    2012-01-01

    In this work, we propose an original and efficient approach to exploit the ability of Compressed Sensing (CS) to recover diffusion MRI (dMRI) signals from a limited number of samples while efficiently recovering important diffusion features such as the ensemble average propagator (EAP) and the orientation distribution function (ODF). Some attempts to sparsely represent the diffusion signal have already been performed. However, and contrary to what has been presented in CS dMRI, in this work we propose and advocate the use of a well-adapted learned dictionary and show that it leads to a sparser signal estimation as well as to an efficient reconstruction of very important diffusion features. We first propose to learn and design a sparse and parametric dictionary from a set of training diffusion data. Then, we propose a framework to analytically estimate in closed form two important diffusion features: the EAP and the ODF. Various experiments on synthetic, phantom and human brain data have been carried out and promising results with a reduced number of atoms have been obtained on diffusion signal reconstruction, thus illustrating the added value of our method over state-of-the-art SHORE and SPF based approaches.

  11. Sparse representations via learned dictionaries for x-ray angiogram image denoising

    NASA Astrophysics Data System (ADS)

    Shang, Jingfan; Huang, Zhenghua; Li, Qian; Zhang, Tianxu

    2018-03-01

    X-ray angiogram image denoising remains an active research topic in the field of computer vision. In particular, the denoising performance of many existing methods has been greatly improved by the widespread use of nonlocal similar patches. However, methods based only on nonlocal self-similar (NSS) patches can still be improved and extended. In this paper, we propose an image denoising model based on the sparsity of NSS patches to obtain high denoising performance and high-quality images. In order to represent the NSS patches at every location of the image sparsely, and to solve the image denoising model more efficiently, we obtain dictionaries as a global image prior via the K-SVD algorithm applied to the image being processed; the alternating direction method of multipliers (ADMM) is then used to solve the image denoising model in a single, effective procedure. Extensive synthetic experiments demonstrate that, owing to the dictionaries learned by the K-SVD algorithm, the resulting sparsely augmented Lagrangian image denoising (SALID) model performs effectively, achieving state-of-the-art denoising performance and higher-quality images. Moreover, we also give some denoising results on clinical X-ray angiogram images.
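
    A compact sketch of the overall recipe, learning a dictionary from patches of the noisy image itself and reconstructing every patch from a few atoms, is given below. MiniBatchDictionaryLearning and plain OMP coding stand in for the paper's K-SVD dictionary and ADMM solver, and the synthetic image, patch size, and noise level are illustrative assumptions.

```python
# Sketch of patch-based dictionary denoising in the spirit described above.
# MiniBatchDictionaryLearning stands in for K-SVD and plain OMP coding stands
# in for the paper's ADMM solver; the image, noise level and sizes are toy values.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 96)
clean = 0.5 + 0.25 * (np.sin(x)[:, None] + np.cos(x)[None, :])   # synthetic image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# Learn the dictionary from patches of the noisy image itself (global prior).
train = extract_patches_2d(noisy, (7, 7), max_patches=4000, random_state=0)
X = train.reshape(len(train), -1)
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                   transform_algorithm='omp',
                                   transform_n_nonzero_coefs=4,
                                   random_state=0).fit(X - X.mean(axis=1, keepdims=True))

# Denoise: sparse-code all overlapping patches over the learned dictionary and
# average the reconstructions back into an image.
patches = extract_patches_2d(noisy, (7, 7))
Xa = patches.reshape(len(patches), -1)
means = Xa.mean(axis=1, keepdims=True)
recon = dico.transform(Xa - means) @ dico.components_ + means
denoised = reconstruct_from_patches_2d(recon.reshape(patches.shape), noisy.shape)

print("RMSE noisy    :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE denoised :", np.sqrt(np.mean((denoised - clean) ** 2)))
```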

  12. Study of CP asymmetry in B^{0}-B̄^{0} mixing with inclusive dilepton events.

    PubMed

    Lees, J P; Poireau, V; Tisserand, V; Grauges, E; Palano, A; Eigen, G; Stugu, B; Brown, D N; Kerth, L T; Kolomensky, Yu G; Lee, M J; Lynch, G; Koch, H; Schroeder, T; Hearty, C; Mattison, T S; McKenna, J A; So, R Y; Khan, A; Blinov, V E; Buzykaev, A R; Druzhinin, V P; Golubev, V B; Kravchenko, E A; Onuchin, A P; Serednyakov, S I; Skovpen, Yu I; Solodov, E P; Todyshev, K Yu; Lankford, A J; Mandelkern, M; Dey, B; Gary, J W; Long, O; Campagnari, C; Franco Sevilla, M; Hong, T M; Kovalskyi, D; Richman, J D; West, C A; Eisner, A M; Lockman, W S; Panduro Vazquez, W; Schumm, B A; Seiden, A; Chao, D S; Cheng, C H; Echenard, B; Flood, K T; Hitlin, D G; Miyashita, T S; Ongmongkolkul, P; Porter, F C; Röhrken, M; Andreassen, R; Huard, Z; Meadows, B T; Pushpawela, B G; Sokoloff, M D; Sun, L; Bloom, P C; Ford, W T; Gaz, A; Smith, J G; Wagner, S R; Ayad, R; Toki, W H; Spaan, B; Bernard, D; Verderi, M; Playfer, S; Bettoni, D; Bozzi, C; Calabrese, R; Cibinetto, G; Fioravanti, E; Garzia, I; Luppi, E; Piemontese, L; Santoro, V; Calcaterra, A; de Sangro, R; Finocchiaro, G; Martellotti, S; Patteri, P; Peruzzi, I M; Piccolo, M; Rama, M; Zallo, A; Contri, R; Lo Vetere, M; Monge, M R; Passaggio, S; Patrignani, C; Robutti, E; Bhuyan, B; Prasad, V; Adametz, A; Uwer, U; Lacker, H M; Dauncey, P D; Mallik, U; Chen, C; Cochran, J; Prell, S; Ahmed, H; Gritsan, A V; Arnaud, N; Davier, M; Derkach, D; Grosdidier, G; Le Diberder, F; Lutz, A M; Malaescu, B; Roudeau, P; Stocchi, A; Wormser, G; Lange, D J; Wright, D M; Coleman, J P; Fry, J R; Gabathuler, E; Hutchcroft, D E; Payne, D J; Touramanis, C; Bevan, A J; Di Lodovico, F; Sacco, R; Cowan, G; Bougher, J; Brown, D N; Davis, C L; Denig, A G; Fritsch, M; Gradl, W; Griessinger, K; Hafner, A; Schubert, K R; Barlow, R J; Lafferty, G D; Cenci, R; Hamilton, B; Jawahery, A; Roberts, D A; Cowan, R; Sciolla, G; Cheaib, R; Patel, P M; Robertson, S H; Neri, N; Palombo, F; Cremaldi, L; Godang, R; Sonnek, P; Summers, D J; Simard, M; Taras, P; De Nardo, G; Onorato, G; Sciacca, C; Martinelli, M; Raven, G; Jessop, C P; LoSecco, J M; Honscheid, K; Kass, R; Feltresi, E; Margoni, M; Morandin, M; Posocco, M; Rotondo, M; Simi, G; Simonetto, F; Stroili, R; Akar, S; Ben-Haim, E; Bomben, M; Bonneaud, G R; Briand, H; Calderini, G; Chauveau, J; Leruste, Ph; Marchiori, G; Ocariz, J; Biasini, M; Manoni, E; Pacetti, S; Rossi, A; Angelini, C; Batignani, G; Bettarini, S; Carpinelli, M; Casarosa, G; Cervelli, A; Chrzaszcz, M; Forti, F; Giorgi, M A; Lusiani, A; Oberhof, B; Paoloni, E; Perez, A; Rizzo, G; Walsh, J J; Lopes Pegna, D; Olsen, J; Smith, A J S; Faccini, R; Ferrarotto, F; Ferroni, F; Gaspero, M; Li Gioi, L; Pilloni, A; Piredda, G; Bünger, C; Dittrich, S; Grünberg, O; Hess, M; Leddig, T; Voß, C; Waldi, R; Adye, T; Olaiya, E O; Wilson, F F; Emery, S; Vasseur, G; Anulli, F; Aston, D; Bard, D J; Cartaro, C; Convery, M R; Dorfan, J; Dubois-Felsmann, G P; Dunwoodie, W; Ebert, M; Field, R C; Fulsom, B G; Graham, M T; Hast, C; Innes, W R; Kim, P; Leith, D W G S; Lewis, P; Lindemann, D; Luitz, S; Luth, V; Lynch, H L; MacFarlane, D B; Muller, D R; Neal, H; Perl, M; Pulliam, T; Ratcliff, B N; Roodman, A; Salnikov, A A; Schindler, R H; Snyder, A; Su, D; Sullivan, M K; Va'vra, J; Wisniewski, W J; Wulsin, H W; Purohit, M V; White, R M; Wilson, J R; Randle-Conde, A; Sekula, S J; Bellis, M; Burchat, P R; Puccio, E M T; Alam, M S; Ernst, J A; Gorodeisky, R; Guttman, N; Peimer, D R; Soffer, A; Spanier, S M; Ritchie, J L; Ruland, A M; Schwitters, R F; Wray, B C; Izen, J M; Lou, X C; Bianchi, F; De Mori, F; 
Filippi, A; Gamba, D; Lanceri, L; Vitale, L; Martinez-Vidal, F; Oyanguren, A; Villanueva-Perez, P; Albert, J; Banerjee, Sw; Beaulieu, A; Bernlochner, F U; Choi, H H F; King, G J; Kowalewski, R; Lewczuk, M J; Lueck, T; Nugent, I M; Roney, J M; Sobie, R J; Tasneem, N; Gershon, T J; Harrison, P F; Latham, T E; Band, H R; Dasu, S; Pan, Y; Prepost, R; Wu, S L

    2015-02-27

    We present a measurement of the asymmetry A_{CP} between same-sign inclusive dilepton samples ℓ^{+}ℓ^{+} and ℓ^{-}ℓ^{-} (ℓ=e, μ) from semileptonic B decays in ϒ(4S)→BB̄ events, using the complete data set recorded by the BABAR experiment near the ϒ(4S) resonance, corresponding to 471×10^{6} BB̄ pairs. The asymmetry A_{CP} allows comparison between the mixing probabilities P(B̄^{0}→B^{0}) and P(B^{0}→B̄^{0}), and therefore probes CP and T violation. The result, A_{CP}=[-3.9±3.5(stat)±1.9(syst)]×10^{-3}, is consistent with the standard model expectation.

  13. SaRAD: a Simple and Robust Abbreviation Dictionary.

    PubMed

    Adar, Eytan

    2004-03-01

    Due to recent interest in the use of textual material to augment traditional experiments, it has become necessary to automatically cluster, classify and filter natural language information. The Simple and Robust Abbreviation Dictionary (SaRAD) provides an easy-to-implement, high-performance tool for the construction of a biomedical symbol dictionary. The algorithms, applied to the MEDLINE document set, result in a high-quality dictionary and toolset to disambiguate abbreviation symbols automatically.

  14. University of Glasgow at TREC 2008: Experiments in Blog, Enterprise, and Relevance Feedback Tracks with Terrier

    DTIC Science & Technology

    2008-11-01

    ... detecting opinionated documents. The first approach improves our TREC 2007 dictionary-based approach by automatically building an internal opinion dictionary from the collection itself. We measure the opin... The second approach is based on the OpinionFinder tool, which identifies subjective sentences in text. In particular ...

  15. The Effect of Bilingual Term List Size on Dictionary-Based Cross-Language Information Retrieval

    DTIC Science & Technology

    2006-01-01

    The Effect of Bilingual Term List Size on Dictionary-Based Cross-Language Information Retrieval. Dina Demner-Fushman, Department of Computer Science ... dictionary-based Cross-Language Information Retrieval (CLIR), in which the goal is to find documents written in one natural language based on queries that ... in which the documents are written. In dictionary-based CLIR techniques, the principal source of translation knowledge is a translation lexicon

  16. Robust Multimodal Dictionary Learning

    PubMed Central

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by lack of correspondence between image modalities in training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674

  17. Evaluation of techniques for increasing recall in a dictionary approach to gene and protein name identification.

    PubMed

    Schuemie, Martijn J; Mons, Barend; Weeber, Marc; Kors, Jan A

    2007-06-01

    Gene and protein name identification in text requires a dictionary approach to relate synonyms to the same gene or protein, and to link names to external databases. However, existing dictionaries are incomplete. We investigate two complementary methods for automatic generation of a comprehensive dictionary: combination of information from existing gene and protein databases and rule-based generation of spelling variations. Both methods have been reported in literature before, but have hitherto not been combined and evaluated systematically. We combined gene and protein names from several existing databases of four different organisms. The combined dictionaries showed a substantial increase in recall on three different test sets, as compared to any single database. Application of 23 spelling variation rules to the combined dictionaries further increased recall. However, many rules appeared to have no effect and some appear to have a detrimental effect on precision.
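
    The second of the two methods, rule-based generation of spelling variations, can be sketched in a few lines: each seed name is expanded by simple rewrite rules and every variant is mapped back to the same database identifier. The three example rules and names below are illustrative; they are not the 23 rules evaluated in the paper.

```python
# Tiny illustration of the rule-based spelling-variation step described above:
# a few rewrite rules expand seed gene/protein names into likely variants.
# These example rules are illustrative, not the 23 rules evaluated in the paper.
import re

GREEK = {"alpha": "a", "beta": "b", "gamma": "g"}
ROMAN = {"1": "I", "2": "II", "3": "III"}

def spelling_variants(name):
    variants = {name}
    variants.add(name.replace("-", " "))                  # hyphen -> space
    variants.add(name.replace("-", ""))                   # hyphen removed
    variants.add(re.sub(r"(\d)$",                         # trailing arabic -> roman numeral
                        lambda m: ROMAN.get(m.group(1), m.group(1)), name))
    for word, letter in GREEK.items():                    # spelled-out greek -> single letter
        if word in name.lower():
            variants.add(re.sub(word, letter, name, flags=re.IGNORECASE))
    return sorted(v for v in variants if v)

dictionary = {}
for gene_id, seed in [("GENE:001", "TNF-alpha"), ("GENE:002", "cyclin-D1")]:
    for variant in spelling_variants(seed):
        dictionary[variant.lower()] = gene_id             # synonyms map to one identifier

for term, gid in sorted(dictionary.items()):
    print(f"{term:15s} -> {gid}")
```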

  18. Dictionary Learning on the Manifold of Square Root Densities and Application to Reconstruction of Diffusion Propagator Fields*

    PubMed Central

    Sun, Jiaqi; Xie, Yuchen; Ye, Wenxing; Ho, Jeffrey; Entezari, Alireza; Blackband, Stephen J.

    2013-01-01

    In this paper, we present a novel dictionary learning framework for data lying on the manifold of square root densities and apply it to the reconstruction of diffusion propagator (DP) fields given a multi-shell diffusion MRI data set. Unlike most of the existing dictionary learning algorithms which rely on the assumption that the data points are vectors in some Euclidean space, our dictionary learning algorithm is designed to incorporate the intrinsic geometric structure of manifolds and performs better than traditional dictionary learning approaches when applied to data lying on the manifold of square root densities. Non-negativity as well as smoothness across the whole field of the reconstructed DPs is guaranteed in our approach. We demonstrate the advantage of our approach by comparing it with an existing dictionary based reconstruction method on synthetic and real multi-shell MRI data. PMID:24684004

  19. A dictionary server for supplying context sensitive medical knowledge.

    PubMed Central

    Ruan, W.; Bürkle, T.; Dudeck, J.

    2000-01-01

    The Giessen Data Dictionary Server (GDDS), developed at Giessen University Hospital, integrates clinical systems with on-line, context sensitive medical knowledge to help with making medical decisions. By "context" we mean the clinical information that is being presented at the moment the information need is occurring. The dictionary server makes use of a semantic network supported by a medical data dictionary to link terms from clinical applications to their proper information sources. It has been designed to analyze the network structure itself instead of knowing the layout of the semantic net in advance. This enables us to map appropriate information sources to various clinical applications, such as nursing documentation, drug prescription and cancer follow up systems. This paper describes the function of the dictionary server and shows how the knowledge stored in the semantic network is used in the dictionary service. PMID:11079978

  20. Plasma Dictionary Website

    NASA Astrophysics Data System (ADS)

    Correll, Don; Heeter, Robert; Alvarez, Mitch

    2000-10-01

    In response to many inquiries for a list of plasma terms, a database driven Plasma Dictionary website (plasmadictionary.llnl.gov) was created that allows users to submit new terms, search for specific terms or browse alphabetic listings. The Plasma Dictionary website contents began with the Fusion & Plasma Glossary terms available at the Fusion Energy Educational website (fusedweb.llnl.gov). Plasma researchers are encouraged to add terms and definitions. By clarifying the meanings of specific plasma terms, it is envisioned that the primary use of the Plasma Dictionary website will be by students, teachers, researchers, and writers for (1) Enhancing literacy in plasma science, (2) Serving as an educational aid, (3) Providing practical information, and (4) Helping clarify plasma writings. The Plasma Dictionary website has already proved useful in responding to a request from the CRC Press (www.crcpress.com) to add plasma terms to its CRC physics dictionary project (members.aol.com/physdict/).

  1. A Standard-Driven Data Dictionary for Data Harmonization of Heterogeneous Datasets in Urban Geological Information Systems

    NASA Astrophysics Data System (ADS)

    Liu, G.; Wu, C.; Li, X.; Song, P.

    2013-12-01

    The 3D urban geological information system has been a major part of the national urban geological survey project of China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in the urban geological databases. Various models and vocabularies have been drafted and applied to urban geological data by industrial companies. Issues such as duplicate and ambiguous definitions of terms and differing coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured and standardized data storage. The overall purpose of this work is to set up a common data platform that provides an information sharing service. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. Underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with national standards to build a mapping table. The attributes of various urban geological data entity models are reduced to several categories according to their application phases and domains. Then a logical data model is set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints. Three levels of data dictionary are designed: the model data dictionary is used to manage system database files and enhance maintenance of the whole database system; the attribute dictionary organizes fields used in database tables; the term and code dictionary is applied to provide a standard for the urban information system by adopting appropriate classification and coding methods; the comprehensive data dictionary manages system operation and security. (3) An extension of the system's data management functions based on the data dictionary. The data item constraint input function makes use of the standard term and code dictionary to obtain standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure the consistency of term use for fields. The model dictionary is used to generate a database operation interface automatically, with standard semantic content supplied via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System in South-East China, with satisfactory results.

  2. Learning overcomplete representations from distributed data: a brief review

    NASA Astrophysics Data System (ADS)

    Raja, Haroon; Bajwa, Waheed U.

    2016-05-01

    Most of the research on dictionary learning has focused on developing algorithms under the assumption that data is available at a centralized location. But often the data is not available at a centralized location due to practical constraints like data aggregation costs, privacy concerns, etc. Using centralized dictionary learning algorithms may not be the optimal choice in such settings. This motivates the design of dictionary learning algorithms that consider the distributed nature of data as one of the problem variables. Just as in centralized settings, the distributed dictionary learning problem can be posed in more than one way depending on the problem setup. The most notable distinguishing features are the online versus batch nature of data and the representative versus discriminative nature of the dictionaries. In this paper, several distributed dictionary learning algorithms that are designed to tackle different problem setups are reviewed. One of these algorithms is cloud K-SVD, which solves the dictionary learning problem for batch data in distributed settings. One distinguishing feature of cloud K-SVD is that it has been shown to converge to its centralized counterpart, namely, the K-SVD solution. On the other hand, no such guarantees are provided for other distributed dictionary learning algorithms. Convergence of cloud K-SVD to the centralized K-SVD solution means problems that are solvable by K-SVD in centralized settings can now be solved in distributed settings with similar performance. Finally, cloud K-SVD is used as an example to show the advantages that are attainable by deploying distributed dictionary algorithms for real-world distributed datasets.
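
    The numerical heart of cloud K-SVD is that each atom update reduces to finding the dominant eigenvector of a matrix M = sum_i M_i whose summands live on different nodes, which cloud K-SVD obtains by interleaving local power iterations with consensus averaging over the network. The sketch below illustrates only that ingredient on synthetic data, with an assumed ring topology and doubly stochastic mixing matrix; the surrounding sparse-coding and dictionary-update steps are omitted.

```python
# Core ingredient of cloud K-SVD: the atom update needs the dominant
# eigenvector of M = sum_i M_i, where node i only holds M_i. Local power
# iterations interleaved with consensus averaging recover it without sharing
# raw data. The ring topology and iteration counts below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d = 5, 8
local_M = []
for _ in range(n_nodes):
    A = rng.standard_normal((d, 20))       # private data held by one node
    local_M.append(A @ A.T)                # that node's contribution M_i

# Doubly stochastic mixing matrix for a ring (each node talks to two neighbours).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

q = np.tile(rng.standard_normal(d), (n_nodes, 1))      # identical initial guess
for _ in range(50):                                    # distributed power iterations
    v = np.stack([local_M[i] @ q[i] for i in range(n_nodes)])   # local multiply
    for _ in range(30):                                # consensus rounds
        v = W @ v                                      # average with neighbours
    q = v / np.linalg.norm(v, axis=1, keepdims=True)   # normalize (scale is irrelevant)

# Compare node 0's estimate with the centralized dominant eigenvector of sum_i M_i.
_, V = np.linalg.eigh(sum(local_M))
centralized = V[:, -1]
print(min(np.linalg.norm(q[0] - centralized), np.linalg.norm(q[0] + centralized)))
```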

  3. Change detection of medical images using dictionary learning techniques and PCA

    NASA Astrophysics Data System (ADS)

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-03-01

    Automatic change detection methods for identifying the changes of serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those of brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of MRI scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques are used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. In this paper we present the Eigen-Block Change Detection algorithm (EigenBlockCD). It performs local registration and identifies the changes between consecutive MR images of the brain. Blocks of pixels from baseline scan are used to train local dictionaries that are then used to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between L1 and L2 norms as two possible similarity measures in the EigenBlockCD. We show the advantages of L2 norm over L1 norm theoretically and numerically. We also demonstrate the performance of the EigenBlockCD algorithm for detecting changes of MR images and compare our results with those provided in recent literature. Experimental results with both simulated and real MRI scans show that the EigenBlockCD outperforms the previous methods. It detects clinical changes while ignoring the changes due to patient's position and other acquisition artifacts.
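
    A simplified sketch of the block-wise idea follows: for every block of the follow-up scan, a local PCA basis is trained on spatially neighbouring blocks of the baseline scan, and the L2 reconstruction residual serves as the change score. The synthetic images, block size, search window, and the omission of EigenBlockCD's local registration step are all simplifying assumptions.

```python
# Simplified sketch of eigen-block change detection: each follow-up block is
# reconstructed from a PCA basis learned on neighbouring baseline blocks, and
# the L2 residual is its change score. The synthetic images, block size,
# search window and the missing local-registration step are simplifications.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 64)
baseline = 0.5 + 0.25 * (np.sin(x)[:, None] + np.cos(x)[None, :])
followup = baseline + 0.01 * rng.standard_normal(baseline.shape)
followup[30:38, 30:38] += 0.8                       # synthetic focal change

B, W = 8, 16                                        # block size, search half-window
score = np.zeros((64 // B, 64 // B))
for bi in range(0, 64, B):
    for bj in range(0, 64, B):
        # baseline blocks inside the search window around (bi, bj)
        blocks = np.asarray([
            baseline[i:i + B, j:j + B].ravel()
            for i in range(max(0, bi - W), min(64 - B, bi + W) + 1, 2)
            for j in range(max(0, bj - W), min(64 - B, bj + W) + 1, 2)
        ])
        pca = PCA(n_components=min(10, len(blocks) - 1)).fit(blocks)
        target = followup[bi:bi + B, bj:bj + B].ravel()[None, :]
        recon = pca.inverse_transform(pca.transform(target))
        score[bi // B, bj // B] = np.linalg.norm(target - recon)   # L2 residual

print(np.round(score, 2))      # the block covering the focal change stands out
```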

  4. Recovery of sparse translation-invariant signals with continuous basis pursuit

    PubMed Central

    Ekanadham, Chaitanya; Tranchina, Daniel; Simoncelli, Eero

    2013-01-01

    We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality. PMID:24352562
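
    A small sketch of the first-order Taylor variant on a one-dimensional toy signal is given below: the dictionary holds coarsely shifted copies of a Gaussian bump together with their derivatives, and the constrained convex program keeps the derivative coefficients small enough that each pair still represents a single continuously shifted feature. The waveform, grid spacing, penalty weight, and the use of cvxpy as the solver are illustrative assumptions.

```python
# Sketch of first-order Taylor continuous basis pursuit (CBP) on a 1-D signal
# made of continuously shifted Gaussian bumps. The bump shape, grid spacing,
# penalty weight and cvxpy solver are illustrative assumptions.
import numpy as np
import cvxpy as cp

t = np.linspace(0, 10, 500)
width = 0.15
bump = lambda c: np.exp(-0.5 * ((t - c) / width) ** 2)           # elementary feature
d_bump = lambda c: ((t - c) / width ** 2) * bump(c)              # derivative w.r.t. its centre

true_pos, true_amp = [2.32, 6.80], [1.0, 0.7]
y = sum(a * bump(c) for a, c in zip(true_amp, true_pos))
y = y + 0.01 * np.random.default_rng(0).standard_normal(len(t))

delta = 0.25                                                     # coarse grid spacing
grid = np.arange(0.5, 9.5, delta)
F = np.column_stack([bump(c) for c in grid])                     # shifted features
dF = np.column_stack([d_bump(c) for c in grid])                  # their derivatives

a = cp.Variable(len(grid), nonneg=True)                          # amplitudes
b = cp.Variable(len(grid))                                       # Taylor shift coefficients
objective = cp.Minimize(cp.sum_squares(y - F @ a - dF @ b) + 0.05 * cp.sum(a))
constraints = [cp.abs(b) <= (delta / 2) * a]                     # shifts stay inside a grid cell
cp.Problem(objective, constraints).solve()

active = np.where(a.value > 0.05)[0]
est_pos = grid[active] + b.value[active] / a.value[active]       # refine positions off-grid
print(np.round(est_pos, 3), np.round(a.value[active], 3))        # compare with [2.32, 6.80]
```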

  5. Clutter Mitigation in Echocardiography Using Sparse Signal Separation

    PubMed Central

    Yavneh, Irad

    2015-01-01

    In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix have each a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used for learning the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as experimental cardiac ultrasound data. MCA is demonstrated in both cases to outperform the FIR filter and obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In experimental heart data, MCA obtains in our experiments clutter mitigation with an average CNR improvement of 1.33 dB. PMID:26199622
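
    A toy one-dimensional illustration of the MCA separation itself is sketched below: a mixture of a smooth (DCT-sparse) component and spiky (identity-sparse) clutter is split by alternating soft thresholding with a slowly decreasing threshold. The fixed DCT/identity dictionary pair is a simplification; the paper instead learns the dictionary adaptively from the echo data.

```python
# Toy 1-D illustration of morphological component analysis (MCA): a mixture of
# a smooth (DCT-sparse) component and spiky (identity-sparse) clutter is
# separated by alternating soft thresholding. The fixed DCT/identity pair is a
# simplification of the adaptive, learned dictionary used in the paper.
import numpy as np
from scipy.fftpack import dct, idct

rng = np.random.default_rng(0)
n = 256
smooth = np.cos(2 * np.pi * 3 * np.arange(n) / n) + 0.5 * np.sin(2 * np.pi * 7 * np.arange(n) / n)
clutter = np.zeros(n)
clutter[rng.choice(n, 8, replace=False)] = rng.uniform(2, 4, 8) * rng.choice([-1, 1], 8)
y = smooth + clutter

def soft(v, thr):
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

s_hat = np.zeros(n)          # smooth component (sparse in the DCT dictionary)
c_hat = np.zeros(n)          # clutter component (sparse in the identity dictionary)
thr = 2.0
for _ in range(200):
    # update the smooth part: threshold the DCT coefficients of the residual
    s_hat = idct(soft(dct(y - c_hat, norm='ortho'), thr), norm='ortho')
    # update the clutter part: threshold the residual itself
    c_hat = soft(y - s_hat, thr)
    thr = max(0.95 * thr, 0.1)         # gradually lower the threshold

print("smooth error :", np.linalg.norm(s_hat - smooth) / np.linalg.norm(smooth))
print("clutter error:", np.linalg.norm(c_hat - clutter) / np.linalg.norm(clutter))
```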

  6. Dictionary of cotton: Picking & ginning

    USDA-ARS?s Scientific Manuscript database

    Cotton is an essential commodity for textiles and has long been an important item of trade in the world’s economy. Cotton is currently grown in over 100 countries by an estimated 100 producers. The basic unit of the cotton trade is the cotton bale which consists of approximately 500 pounds of raw c...

  7. Automotive and Power Service: Cluster Guide.

    ERIC Educational Resources Information Center

    Michigan State Dept. of Education, Lansing. Special Needs Program.

    This teacher's guide is one of a series of publications focusing on the occupational preparation of persons with special education needs. The material was developed and tested by cooperating teachers over a period of three years. Task analysis information is presented using occupational descriptions from the Dictionary of Occupational Titles,…

  8. Food Preparation and Service: Cluster Guide.

    ERIC Educational Resources Information Center

    Central Michigan Univ., Mount Pleasant.

    This teacher's guide is one of a series of publications focusing on the occupational preparation of persons with special education needs. The material was developed and tested by cooperating teachers over a period of three years. Task analysis information is presented using occupational descriptions from the Dictionary of Occupational Titles,…

  9. First Comparison of Remote Vertical Profiles of Refractory Black Carbon between the Atlantic and Pacific Basins on Global Scales

    NASA Astrophysics Data System (ADS)

    Katich, J. M.; Schwarz, J. P.

    2016-12-01

    The NASA Atmospheric Tomography Mission (ATom) provides a first opportunity to obtain vertical profiles of refractory black carbon (rBC) mass mixing ratios over global scale ( 65S - 85 N latitude) in the remote atmosphere over both the Pacific and Atlantic basins. A NOAA single-particle soot photometer (SP2) will fly on the NASA DC-8 research aircraft over July/August of 2016, obtaining near- continuous vertical profiling ( 0.3 to 12 km) over most of the Earth's latitude range, akin to the NSF HIPPO campaign that occurred only over the Pacific basin during 2009-2011. HIPPO analysis suggested both that high altitude rBC mass mixing ratios (MMRs) were likely zonally well mixed, and that global model estimates of remote rBC MMR throughout the upper troposphere globally, and not just over the Pacific, were likely biased high. Here we will present an initial analysis of the new, more complete data set in which Atlantic rBC profiles will be used to assess these prior suppositions.

  10. Dictionnaires et encyclopedies: cuvee 89 (Dictionaries and Encyclopedias: Vintage 89).

    ERIC Educational Resources Information Center

    Ibrahim, Amr Helmy

    1989-01-01

    For the first time since its initial publication in 1905, the much-imitated "Petit Larousse" dictionary/reference book has a true competitor in Hachette's "Le Dictionnaire de notre temps", a new dictionary reflecting modern French usage. (MSE)

  11. Dictionary-learning-based reconstruction method for electron tomography.

    PubMed

    Liu, Baodong; Yu, Hengyong; Verbridge, Scott S; Sun, Lizhi; Wang, Ge

    2014-01-01

    Electron tomography usually suffers from so-called “missing wedge” artifacts caused by limited tilt angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive sensing inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for X-ray computed tomography. In this paper, we evaluate the EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the ES and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and the ADSIR outperforms EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context.

  12. Terminological reference of a knowledge-based system: the data dictionary.

    PubMed

    Stausberg, J; Wormek, A; Kraut, U

    1995-01-01

    The development of open and integrated knowledge bases makes new demands on the definition of the used terminology. The definition should be realized in a data dictionary separated from the knowledge base. Within the works done at a reference model of medical knowledge, a data dictionary has been developed and used in different applications: a term definition shell, a documentation tool and a knowledge base. The data dictionary includes that part of terminology, which is largely independent of a certain knowledge model. For that reason, the data dictionary can be used as a basis for integrating knowledge bases into information systems, for knowledge sharing and reuse and for modular development of knowledge-based systems.

  13. Compressed sampling and dictionary learning framework for wavelength-division-multiplexing-based distributed fiber sensing.

    PubMed

    Weiss, Christian; Zoubir, Abdelhak M

    2017-05-01

    We propose a compressed sampling and dictionary learning framework for fiber-optic sensing using wavelength-tunable lasers. A redundant dictionary is generated from a model for the reflected sensor signal. Imperfect prior knowledge is considered in terms of uncertain local and global parameters. To estimate a sparse representation and the dictionary parameters, we present an alternating minimization algorithm that is equipped with a preprocessing routine to handle dictionary coherence. The support of the obtained sparse signal indicates the reflection delays, which can be used to measure impairments along the sensing fiber. The performance is evaluated by simulations and experimental data for a fiber sensor system with common core architecture.
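
    A toy sketch of the delay-estimation idea follows: the redundant dictionary contains copies of a known reflection pulse at candidate delays, and orthogonal matching pursuit over compressed measurements recovers a sparse support whose indices are the estimated reflection delays. The pulse shape, delays, and random compression matrix are illustrative assumptions, and the paper's dictionary-parameter learning and coherence preprocessing are omitted.

```python
# Toy sketch of the delay-estimation idea described above: a redundant
# dictionary of delayed copies of a known reflection pulse, with OMP recovering
# a sparse support whose indices give the reflection delays. The pulse shape,
# delays and random compression matrix are illustrative assumptions.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n = 400                                             # samples along the fiber response
pulse = lambda d: np.exp(-0.5 * ((np.arange(n) - d) / 3.0) ** 2)

true_delays, amps = [90, 215, 310], [1.0, 0.6, 0.8]
signal = sum(a * pulse(d) for a, d in zip(amps, true_delays))
signal = signal + 0.02 * rng.standard_normal(n)

D = np.column_stack([pulse(d) for d in range(n)])   # redundant delay dictionary

# Compressed sampling: a random projection stands in for the reduced number of
# wavelength samples taken by the tunable-laser interrogator.
m = 120
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ signal

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(Phi @ D, y)
estimated_delays = np.sort(np.nonzero(omp.coef_)[0])
print(estimated_delays)                             # should be close to [90, 215, 310]
```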

  14. Physics of Colloids in Space (PCS): Microgravity Experiment Completed Operations on the International Space Station

    NASA Technical Reports Server (NTRS)

    Doherty, Michael P.; Sankaran, Subramanian

    2003-01-01

    Immediately after mixing, the two-phase-like colloid-polymer critical point sample begins to phase separate, or de-mix, into two phases-one that resembles a gas and one that resembles a liquid, except that the particles are colloids and not atoms. The colloid-poor black regions (colloidal gas) grow bigger, and the colloid-rich white regions (colloidal liquid) become whiter as the domains further coarsen. Finally, complete phase separation is achieved, that is, just one region of each colloid-rich (white) and colloid-poor (black) phase. This process was studied over four decades of length scale, from 1 micrometer to 1 centimeter.

  15. Sparse SPM: Group Sparse-dictionary learning in SPM framework for resting-state functional connectivity MRI analysis.

    PubMed

    Lee, Young-Beom; Lee, Jeonghyeon; Tak, Sungho; Lee, Kangjoo; Na, Duk L; Seo, Sang Won; Jeong, Yong; Ye, Jong Chul

    2016-01-15

    Recent studies of functional connectivity MR imaging have revealed that the default-mode network activity is disrupted in diseases such as Alzheimer's disease (AD). However, there is not yet a consensus on the preferred method for resting-state analysis. Because the brain is reported to have complex interconnected networks according to graph theoretical analysis, the independency assumption, as in the popular independent component analysis (ICA) approach, often does not hold. Here, rather than using the independency assumption, we present a new statistical parameter mapping (SPM)-type analysis method based on a sparse graph model where temporal dynamics at each voxel position are described as a sparse combination of global brain dynamics. In particular, a new concept of a spatially adaptive design matrix has been proposed to represent local connectivity that shares the same temporal dynamics. If we further assume that local network structures within a group are similar, the estimation problem of global and local dynamics can be solved using sparse dictionary learning for the concatenated temporal data across subjects. Moreover, under the homoscedasticity variance assumption across subjects and groups that is often used in SPM analysis, the aforementioned individual and group analyses using sparse dictionary learning can be accurately modeled by a mixed-effect model, which also facilitates a standard SPM-type group-level inference using summary statistics. Using an extensive resting fMRI data set obtained from normal, mild cognitive impairment (MCI), and Alzheimer's disease patient groups, we demonstrated that the changes in the default mode network extracted by the proposed method are more closely correlated with the progression of Alzheimer's disease. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. The T.M.R. Data Dictionary: A Management Tool for Data Base Design

    PubMed Central

    Ostrowski, Maureen; Bernes, Marshall R.

    1984-01-01

    In January 1981, a dictionary-driven ambulatory care information system known as TMR (The Medical Record) was installed at a large private medical group practice in Los Angeles. TMR's data dictionary has enabled the medical group to adapt the software to meet changing user needs largely without programming support. For top management, the dictionary is also a tool for navigating through the system's complexity and assuring the integrity of management goals.

  17. Sensitivity computation of the ell1 minimization problem and its application to dictionary design of ill-posed problems

    NASA Astrophysics Data System (ADS)

    Horesh, L.; Haber, E.

    2009-09-01

    The ell1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application for inverse problems. Most studies have concentrated in devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.

  18. Password-Only Authenticated Three-Party Key Exchange Proven Secure against Insider Dictionary Attacks

    PubMed Central

    Nam, Junghyun; Choo, Kim-Kwang Raymond

    2014-01-01

    While a number of protocols for password-only authenticated key exchange (PAKE) in the 3-party setting have been proposed, it still remains a challenging task to prove the security of a 3-party PAKE protocol against insider dictionary attacks. To the best of our knowledge, there is no 3-party PAKE protocol that carries a formal proof, or even definition, of security against insider dictionary attacks. In this paper, we present the first 3-party PAKE protocol proven secure against both online and offline dictionary attacks as well as insider and outsider dictionary attacks. Our construct can be viewed as a protocol compiler that transforms any 2-party PAKE protocol into a 3-party PAKE protocol with 2 additional rounds of communication. We also present a simple and intuitive approach of formally modelling dictionary attacks in the password-only 3-party setting, which significantly reduces the complexity of proving the security of 3-party PAKE protocols against dictionary attacks. In addition, we investigate the security of the well-known 3-party PAKE protocol, called GPAKE, due to Abdalla et al. (2005, 2006), and demonstrate that the security of GPAKE against online dictionary attacks depends heavily on the composition of its two building blocks, namely a 2-party PAKE protocol and a 3-party key distribution protocol. PMID:25309956

  19. Image fusion via nonlocal sparse K-SVD dictionary learning.

    PubMed

    Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang

    2016-03-01

    Image fusion aims to merge two or more images captured via various sensors of the same scene to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, in this paper, an approach for image fusion by the use of a novel dictionary learning scheme is proposed. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K times singular value decomposition that is commonly used in the literature), and abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is then applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images, and compared with a number of alternative image fusion techniques. The superior fused images obtained with the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
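
    A compact sketch of the fusion step is given below: patches from both source images are coded over one dictionary learned from their union, and each fused patch keeps the code (and patch mean) of the source with the larger sparse-code activity. Plain OMP coding and a jointly learned dictionary stand in for the paper's nonlocal sparse K-SVD dictionary and simultaneous orthogonal matching pursuit; the synthetic half-degraded sources are illustrative.

```python
# Compact sketch of sparse-representation image fusion: patches from both
# sources are coded over a shared learned dictionary and fused with a
# max-activity rule. Plain OMP and a jointly learned dictionary stand in for
# the paper's nonlocal sparse K-SVD dictionary and simultaneous OMP.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

x = np.linspace(0, 4 * np.pi, 80)
scene = 0.5 + 0.25 * (np.sin(x)[:, None] + np.cos(x)[None, :])
img_a, img_b = scene.copy(), scene.copy()
img_a[:, 40:] = scene.mean()                      # right half lost in source A
img_b[:, :40] = scene.mean()                      # left half lost in source B

def patches_and_means(img):
    p = extract_patches_2d(img, (6, 6)).reshape(-1, 36)
    m = p.mean(axis=1, keepdims=True)
    return p - m, m

Pa, Ma = patches_and_means(img_a)
Pb, Mb = patches_and_means(img_b)
D = MiniBatchDictionaryLearning(n_components=96, alpha=0.5,
                                random_state=0).fit(np.vstack([Pa, Pb])).components_

Ca = sparse_encode(Pa, D, algorithm='omp', n_nonzero_coefs=4)
Cb = sparse_encode(Pb, D, algorithm='omp', n_nonzero_coefs=4)

# Fusion rule: per patch, keep the code (and mean) with the larger l1 activity.
keep_a = np.abs(Ca).sum(axis=1) >= np.abs(Cb).sum(axis=1)
codes = np.where(keep_a[:, None], Ca, Cb)
means = np.where(keep_a[:, None], Ma, Mb)
fused = reconstruct_from_patches_2d((codes @ D + means).reshape(-1, 6, 6), scene.shape)
print("relative fusion error:", np.linalg.norm(fused - scene) / np.linalg.norm(scene))
```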

  20. Dictionary as Database.

    ERIC Educational Resources Information Center

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  1. Regularized spherical polar fourier diffusion MRI with optimal dictionary learning.

    PubMed

    Cheng, Jian; Jiang, Tianzi; Deriche, Rachid; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

    Compressed Sensing (CS) takes advantage of signal sparsity or compressibility and allows superb signal reconstruction from relatively few measurements. Based on CS theory, a suitable dictionary for sparse representation of the signal is required. In diffusion MRI (dMRI), CS methods proposed for reconstruction of diffusion-weighted signal and the Ensemble Average Propagator (EAP) utilize two kinds of Dictionary Learning (DL) methods: 1) Discrete Representation DL (DR-DL), and 2) Continuous Representation DL (CR-DL). DR-DL is susceptible to numerical inaccuracy owing to interpolation and regridding errors in a discretized q-space. In this paper, we propose a novel CR-DL approach, called Dictionary Learning - Spherical Polar Fourier Imaging (DL-SPFI) for effective compressed-sensing reconstruction of the q-space diffusion-weighted signal and the EAP. In DL-SPFI, a dictionary that sparsifies the signal is learned from the space of continuous Gaussian diffusion signals. The learned dictionary is then adaptively applied to different voxels using a weighted LASSO framework for robust signal reconstruction. Compared with the state-of-the-art CR-DL and DR-DL methods proposed by Merlet et al. and Bilgic et al., respectively, our work offers the following advantages. First, the learned dictionary is proved to be optimal for Gaussian diffusion signals. Second, to our knowledge, this is the first work to learn a voxel-adaptive dictionary. The importance of the adaptive dictionary in EAP reconstruction will be demonstrated theoretically and empirically. Third, optimization in DL-SPFI is only performed in a small subspace in which the SPF coefficients reside, as opposed to the q-space approach utilized by Merlet et al. We experimentally evaluated DL-SPFI with respect to L1-norm regularized SPFI (L1-SPFI), which uses the original SPF basis, and the DR-DL method proposed by Bilgic et al. The experimental results on synthetic and real data indicate that the learned dictionary produces sparser coefficients than the original SPF basis and results in significantly lower reconstruction error than Bilgic et al.'s method.

  2. Brain tumor classification and segmentation using sparse coding and dictionary learning.

    PubMed

    Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo

    2016-08-01

    This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and their associated ground truth segmentation: feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray scale voxel values of the training image data and their associated label voxel values of the ground truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results of the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed using the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves an accurate brain tumor classification and segmentation and outperforms the state-of-the-art methods.

  3. Measurement of negativity bias in personal narratives using corpus-based emotion dictionaries.

    PubMed

    Cohen, Shuki J

    2011-04-01

    This study presents a novel methodology for the measurement of negativity bias using positive and negative dictionaries of emotion words applied to autobiographical narratives. At odds with the cognitive theory of mood dysregulation, previous text-analytical studies have failed to find a significant correlation between emotion dictionaries and negative affectivity or dysphoria. In the present study, an a priori list of emotion words was refined based on the actual use of these words in personal narratives collected from close to 500 college students. Half of the corpus was used to construct, via concordance analysis, the grammatical structures associated with the words in their emotional sense. The second half of the corpus served as a validation corpus. The resulting dictionary ignores words that are not used in their intended emotional sense, including negated emotions, homophones, frozen idioms, etc. Correlations of the resulting corpus-based negative and positive emotion dictionaries with self-report measures of negative affectivity were in the expected direction, and were statistically significant, with medium effect size. The potential use of these dictionaries as implicit measures of negativity bias and in the analysis of psychotherapy transcripts is discussed.
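
    A minimal sketch of applying such corpus-refined dictionaries to a narrative is shown below: emotion words are counted per 100 tokens, and occurrences preceded by a negator are skipped, the kind of usage rule the concordance analysis is meant to yield. The word lists and the two-token negation window are illustrative assumptions.

```python
# Minimal sketch of scoring a narrative with positive/negative emotion
# dictionaries while skipping negated occurrences, the kind of usage rule the
# concordance analysis described above produces. Word lists and the negation
# window are illustrative assumptions.
import re

POSITIVE = {"happy", "proud", "calm", "hopeful"}
NEGATIVE = {"sad", "afraid", "angry", "lonely"}
NEGATORS = {"not", "never", "no", "hardly"}

def emotion_rates(text, window=2):
    """Return (positive_rate, negative_rate) per 100 tokens, ignoring negated emotion words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    pos = neg = 0
    for i, tok in enumerate(tokens):
        if tok not in POSITIVE and tok not in NEGATIVE:
            continue
        negated = any(t in NEGATORS for t in tokens[max(0, i - window):i])
        if negated:
            continue                       # "not happy" is not counted as happy
        if tok in POSITIVE:
            pos += 1
        else:
            neg += 1
    scale = 100.0 / max(len(tokens), 1)
    return pos * scale, neg * scale

print(emotion_rates("I was not happy that week, mostly sad and a little afraid."))
```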

  4. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
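
    The role of the masking operator can be seen in a small one-dimensional sketch: each patch of a noisy, incomplete trace is sparse-coded using only its observed samples (the masked rows of the dictionary), and the full patch is then re-synthesized, which denoises and interpolates at the same time. A fixed overcomplete DCT dictionary replaces the paper's double-sparsity learned dictionary, and the synthetic trace and missing-data rate are illustrative assumptions.

```python
# Small 1-D sketch of the masking idea: patches of a noisy, incomplete trace
# are sparse-coded over only the observed rows of a dictionary and then fully
# re-synthesized, so denoising and interpolation happen together. A fixed
# overcomplete DCT dictionary replaces the learned double-sparsity dictionary.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, p = 512, 32
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 23)
mask = rng.random(n) > 0.3                        # ~30% of samples missing
data = clean + 0.05 * rng.standard_normal(n)
data[~mask] = 0.0                                 # dead traces carry no information

# Fixed 2x overcomplete DCT-like dictionary for length-p patches.
m_idx = np.arange(p)
D = np.cos(np.pi * np.outer(m_idx + 0.5, np.arange(2 * p)) / (2 * p))
D /= np.linalg.norm(D, axis=0)

recon, weight = np.zeros(n), np.zeros(n)
for start in range(0, n - p + 1, 2):              # overlapping patches, stride 2
    sl = slice(start, start + p)
    obs = mask[sl]
    if obs.sum() < 12:                            # too few observed samples
        continue
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=6)
    omp.fit(D[obs, :], data[sl][obs])             # code using observed rows only
    recon[sl] += D @ omp.coef_ + omp.intercept_   # synthesize the full patch
    weight[sl] += 1.0
recon /= np.maximum(weight, 1.0)
print("relative error:", np.linalg.norm(recon - clean) / np.linalg.norm(clean))
```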

  5. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition.

    PubMed

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-03-27

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; dictionary learning thus serves as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used as the feature vector. Since this vector is high-dimensional, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted to identify and classify fault patterns automatically. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method for intelligent diagnosis of rotating machinery faults. The comparison analysis shows that the dictionary learning-based matrix construction approach outperforms mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
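
    A hedged sketch of the feature pipeline (dictionary learning, singular values of the dictionary, PCA, KNN) is given below using scikit-learn; the window length, atom count and the `signals`/`labels` arrays are hypothetical and do not reproduce the paper's experimental setup.

```python
# Sketch of the feature pipeline described above: learn a small dictionary
# from sliding windows of each vibration signal, use the singular values of
# the learned dictionary matrix as the feature vector, then apply PCA and KNN.
# `signals` and `labels` are hypothetical; this is not the paper's exact setup.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def signal_features(signal, win=64, n_atoms=16):
    windows = np.lib.stride_tricks.sliding_window_view(signal, win)[::win // 2]
    dl = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0)
    D = dl.fit(windows).components_              # (n_atoms, win)
    return np.linalg.svd(D, compute_uv=False)    # singular value sequence

def fit_classifier(signals, labels, n_pc=8):
    X = np.array([signal_features(s) for s in signals])
    clf = make_pipeline(PCA(n_components=n_pc),
                        KNeighborsClassifier(n_neighbors=3))
    return clf.fit(X, labels)
```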

  6. Bilevel Model-Based Discriminative Dictionary Learning for Recognition.

    PubMed

    Zhou, Pan; Zhang, Chao; Lin, Zhouchen

    2017-03-01

    Most supervised dictionary learning methods optimize a combination of reconstruction error, a sparsity prior, and discriminative terms, so the learnt dictionaries may not be optimal for recognition tasks. Also, the sparse-code learning models used in the training and testing phases are inconsistent. Moreover, without utilizing the intrinsic data structure, many dictionary learning methods only employ the ℓ0 or ℓ1 norm to encode each datum independently, limiting the performance of the learnt dictionaries. We present a novel bilevel model-based discriminative dictionary learning method for recognition tasks. The upper level directly minimizes the classification error, while the lower level uses a sparsity term and a Laplacian term to characterize the intrinsic data structure. The lower level is subordinate to the upper level, so our model achieves overall optimality for recognition in that the learnt dictionary is directly tailored to recognition. Moreover, the sparse-code learning models in the training and testing phases can be the same. We further propose a novel method to solve our bilevel optimization problem: it first replaces the lower level with its Karush-Kuhn-Tucker conditions and then applies the alternating direction method of multipliers to solve the equivalent problem. Extensive experiments demonstrate the effectiveness and robustness of our method.

  7. Compressive sensing of electrocardiogram signals by promoting sparsity on the second-order difference and by using dictionary learning.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2014-04-01

    A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for enhancing its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference of the signal, called the lp(2d) pseudo-norm. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein a signal reconstruction step and a dictionary update step are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented using the proposed signal reconstruction algorithm, and the dictionary update step is implemented using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to state-of-the-art lp(1d)-regularized least-squares and Bayesian learning based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
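
    The sketch below illustrates the idea of promoting sparsity on the second-order difference with a penalized IRLS surrogate rather than the paper's constrained sequential conjugate-gradient method; Phi, y and all parameter values are hypothetical.

```python
# Rough IRLS illustration of promoting sparsity on the second-order difference:
# minimize ||y - Phi x||^2 + lam * sum |D2 x|^p by iteratively reweighted ridge
# solves. This is a penalized stand-in for the paper's constrained sequential
# conjugate-gradient solver; Phi, y and the parameter values are hypothetical.
import numpy as np

def second_diff_matrix(n):
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D2

def lp2d_reconstruct(Phi, y, lam=0.1, p=0.5, n_iter=30, eps=1e-6):
    n = Phi.shape[1]
    D2 = second_diff_matrix(n)
    x = np.linalg.lstsq(Phi, y, rcond=None)[0]     # initial estimate
    for _ in range(n_iter):
        d = D2 @ x
        w = (d * d + eps) ** (p / 2.0 - 1.0)       # IRLS weights
        A = Phi.T @ Phi + lam * D2.T @ (w[:, None] * D2)
        x = np.linalg.solve(A, Phi.T @ y)
    return x
```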

  8. The Effect of Bilingual Term List Size on Dictionary-Based Cross-Language Information Retrieval

    DTIC Science & Technology

    2003-02-01

    The Effect of Bilingual Term List Size on Dictionary-Based Cross-Language Information Retrieval (February 2003). ... are extensively used as a resource for dictionary-based Cross-Language Information Retrieval (CLIR), in which the goal is to find documents written ...

  9. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    NASA Astrophysics Data System (ADS)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or `global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches and a dictionary corresponds to a collection of functions or `atoms' describing the slowness in each patch. These functions could for example be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch-level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch-level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics where dictionaries of wavelet functions regularized inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely, but irregularly sampled synthetic seismic images.
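
    A toy sketch of steps 2 and 3 is given below: non-overlapping patches of the global slowness image are sparse-coded against a dictionary and the reference is rebuilt from the patch reconstructions; the slowness image and dictionary are hypothetical, and the travel-time inversion of step 1 is not shown.

```python
# Toy sketch of steps 2-3 above: sparse-code non-overlapping patches of the
# global slowness image against a dictionary, then rebuild the reference image
# from the patch reconstructions. `slowness` and `D` (atoms x patch pixels)
# are hypothetical; the travel-time inversion of step 1 is not shown.
import numpy as np
from sklearn.decomposition import sparse_encode

def patch_regularize(slowness, D, patch=8, n_nonzero=4):
    H, W = slowness.shape
    ref = np.zeros_like(slowness)
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            p = slowness[i:i + patch, j:j + patch].ravel()[None, :]
            code = sparse_encode(p, D, algorithm="omp",
                                 n_nonzero_coefs=n_nonzero)
            ref[i:i + patch, j:j + patch] = (code @ D).reshape(patch, patch)
    return ref
```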

  10. Model-based semantic dictionaries for medical language understanding.

    PubMed Central

    Rassinoux, A. M.; Baud, R. H.; Ruch, P.; Trombert-Paviot, B.; Rodrigues, J. M.

    1999-01-01

    Semantic dictionaries are emerging as a major cornerstone towards achieving sound natural language understanding. Indeed, they constitute the main bridge between words and conceptual entities that reflect their meanings. Nowadays, more and more wide-coverage lexical dictionaries are electronically available in the public domain. However, associating a semantic content with lexical entries is not a straightforward task as it is subordinate to the existence of a fine-grained concept model of the treated domain. This paper presents the benefits and pitfalls in building and maintaining multilingual dictionaries, the semantics of which is directly established on an existing concept model. Concrete cases, handled through the GALEN-IN-USE project, illustrate the use of such semantic dictionaries for the analysis and generation of multilingual surgical procedures. PMID:10566333

  11. Talking Shop with Moira Runcie.

    ERIC Educational Resources Information Center

    Bowers, Rogers

    1998-01-01

    Presents an interview with Moira Runcie, Editorial Director for ELT (English Language Teaching) dictionaries at Oxford University Press. The interview focuses on the work of A.S. Hornby in creating the first learner's dictionary of English and shows how modern dictionaries draw on his work. (Author/JL)

  12. Cross-label Suppression: a Discriminative and Fast Dictionary Learning with Group Regularization.

    PubMed

    Wang, Xiudong; Gu, Yuantao

    2017-05-10

    This paper addresses image classification through efficiently learning a compact and discriminative dictionary. Given a structured dictionary in which each atom (a column of the dictionary matrix) is related to some label, we propose a cross-label suppression constraint to enlarge the difference among the representations of different classes. Meanwhile, we introduce group regularization to enforce representations to preserve the label properties of the original samples, meaning that representations for the same class are encouraged to be similar. With cross-label suppression, we do not resort to the frequently used ℓ0-norm or ℓ1-norm for coding, and we obtain computational efficiency without losing discriminative power for categorization. Moreover, two simple classification schemes are developed to take full advantage of the learnt dictionary. Extensive experiments are conducted on six data sets covering face recognition, object categorization, scene classification, texture recognition, and sport action categorization, and the results show that the proposed approach outperforms many recently proposed dictionary learning algorithms in both recognition accuracy and computational efficiency.

  13. A Locality-Constrained and Label Embedding Dictionary Learning Algorithm for Image Classification.

    PubMed

    Zhengming Li; Zhihui Lai; Yong Xu; Jian Yang; Zhang, David

    2017-02-01

    Locality and label information of training samples play an important role in image classification. However, previous dictionary learning algorithms do not take the locality and label information of atoms into account together in the learning process, and thus their performance is limited. In this paper, a discriminative dictionary learning algorithm, called the locality-constrained and label embedding dictionary learning (LCLE-DL) algorithm, was proposed for image classification. First, the locality information was preserved using the graph Laplacian matrix of the learned dictionary instead of the conventional one derived from the training samples. Then, the label embedding term was constructed using the label information of atoms instead of the classification error term, which contained discriminating information of the learned dictionary. The optimal coding coefficients derived by the locality-based and label-based reconstruction were effective for image classification. Experimental results demonstrated that the LCLE-DL algorithm can achieve better performance than some state-of-the-art algorithms.

  14. Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging

    PubMed Central

    Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528

  15. Low-Rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging.

    PubMed

    Ravishankar, Saiprasad; Moore, Brian E; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-05-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.

  16. The Database Query Support Processor (QSP)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The number and diversity of databases available to users continues to increase dramatically. Currently, the trend is towards decentralized, client-server architectures that (on the surface) are less expensive to acquire, operate, and maintain than information architectures based on centralized, monolithic mainframes. The database query support processor (QSP) effort evaluates the performance of a network-level, heterogeneous database access capability. Air Force Material Command's Rome Laboratory has developed an approach to seamless access to heterogeneous databases, based on ANSI standard X3.138-1988, 'The Information Resource Dictionary System (IRDS),' and on extensions to data dictionary technology. To successfully query a decentralized information system, users must know what data are available from which source, or have the knowledge and system privileges necessary to find out this information. Privacy and security considerations prohibit free and open access to every information system in every network. Even in completely open systems, the time required to locate relevant data (in systems of any appreciable size) would be better spent analyzing the data, assuming the original question was not forgotten. Extensions to data dictionary technology have the potential to more fully automate the search for and retrieval of relevant data in a decentralized environment. Substantial amounts of time and money could be saved by not having to teach users what data reside in which systems and how to access each of those systems. Information describing data and how to get it could be removed from the application and placed in a dedicated repository where it belongs. The result is simplified applications that are less brittle and less expensive to build and maintain. Software technology providing the required functionality is off the shelf. The key difficulty is in defining the metadata required to support the process. The database query support processor effort will provide quantitative data on the amount of effort required to implement an extended data dictionary at the network level, add new systems, adapt to changing user needs, and provide sound estimates on operations and maintenance costs and savings.

  17. 49 CFR Appendix B to Part 604 - Reasons for Removal

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... honest mistake. Black's Law Dictionary, Revised Fourth Edition, West Publishing Company, St. Paul, Minn... performing it. Black's Law Dictionary, Revised Fourth Edition, West Publishing Company, St. Paul, Minn., 1968... force. In addition, no other policy of insurance has taken its place. Black's Law Dictionary, Revised...

  18. 49 CFR Appendix B to Part 604 - Reasons for Removal

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... honest mistake. Black's Law Dictionary, Revised Fourth Edition, West Publishing Company, St. Paul, Minn... performing it. Black's Law Dictionary, Revised Fourth Edition, West Publishing Company, St. Paul, Minn., 1968... force. In addition, no other policy of insurance has taken its place. Black's Law Dictionary, Revised...

  19. 49 CFR Appendix B to Part 604 - Reasons for Removal

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... honest mistake. Black's Law Dictionary, Revised Fourth Edition, West Publishing Company, St. Paul, Minn... performing it. Black's Law Dictionary, Revised Fourth Edition, West Publishing Company, St. Paul, Minn., 1968... force. In addition, no other policy of insurance has taken its place. Black's Law Dictionary, Revised...

  20. Ahtna Athabaskan Dictionary.

    ERIC Educational Resources Information Center

    Kari, James, Ed.

    This dictionary of Ahtna, a dialect of the Athabaskan language family, is the first to integrate all morphemes into a single alphabetically arranged section of main entries, with verbs arranged according to a theory of Ahtna (and Athabascan) verb theme categories. An introductory section details dictionary format conventions used, presents a brief…

  1. A Novel Approach to Creating Disambiguated Multilingual Dictionaries

    ERIC Educational Resources Information Center

    Boguslavsky, Igor; Cardenosa, Jesus; Gallardo, Carolina

    2009-01-01

    Multilingual lexicons are needed in various applications, such as cross-lingual information retrieval, machine translation, and some others. Often, these applications suffer from the ambiguity of dictionary items, especially when an intermediate natural language is involved in the process of the dictionary construction, since this language adds…

  2. 49 CFR Appendix B to Part 604 - Reasons for Removal

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... honest mistake. Black's Law Dictionary, Revised Fourth Edition, West Publishing Company, St. Paul, Minn... performing it. Black's Law Dictionary, Revised Fourth Edition, West Publishing Company, St. Paul, Minn., 1968... force. In addition, no other policy of insurance has taken its place. Black's Law Dictionary, Revised...

  3. The Lexicographic Treatment of Color Terms

    ERIC Educational Resources Information Center

    Williams, Krista

    2014-01-01

    This dissertation explores the main question, "What are the issues involved in the definition and translation of color terms in dictionaries?" To answer this question, I examined color term definitions in monolingual dictionaries of French and English, and color term translations in bilingual dictionaries of French paired with nine…

  4. 21 CFR 701.3 - Designation of ingredients.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ....) Cosmetic Ingredient Dictionary, Second Ed., 1977 (available from the Cosmetic, Toiletry and Fragrance... revised monographs are published in supplements to this dictionary edition by July 18, 1980. Acid Black 2.../federal_register/code_of_federal_regulations/ibr_locations.html. (v) USAN and the USP dictionary of drug...

  5. From Afar to Zulu: A Dictionary of African Cultures.

    ERIC Educational Resources Information Center

    Haskins, Jim; Biondi, Joann

    This resource provides information on over 30 of Africa's most populous and well-known ethnic groups. The text concisely describes the history, traditions, environment, social structure, religion, and daily lifestyles of these diverse cultures. Each entry opens with a map outlining the area populated by the group and a list of key data regarding…

  6. Teachers' Views on Course Supervision Competencies of Secondary School Managers

    ERIC Educational Resources Information Center

    Bayram, Arslan

    2016-01-01

    The definition of supervision in the dictionary is "to look after", "to direct," "to watch over," and "to check." It is usually seen as a tool to manage the teacher. Understanding of supervision in education has shown a change and progress in line with approaches and theories regarding management. The…

  7. Measuring Job Content: Skills, Technology, and Management Practices. Discussion Paper No. 1357-08

    ERIC Educational Resources Information Center

    Handel, Michael J.

    2008-01-01

    The conceptualization and measurement of key job characteristics has not changed greatly for most social scientists since the Dictionary of Occupational Titles and Quality of Employment surveys were created, despite their recognized limitations. However, debates over the roles of job skill requirements, technology, and new management practices in…

  8. The Use of Corpus Examples for Language Comprehension and Production

    ERIC Educational Resources Information Center

    Frankenberg-Garcia, Ana

    2014-01-01

    One of the many new features of English language learners' dictionaries derived from the technological developments that have taken place over recent decades is the presence of corpus-based examples to illustrate the use of words in context. However, empirical studies have generally not been able to produce conclusive evidence about their…

  9. Encyclopedic Dictionary of Applied Linguistics: A Handbook for Language Teaching.

    ERIC Educational Resources Information Center

    Johnson, Keith, Ed.; Johnson, Helen, Ed.

    This volume provides an up-to-date and comprehensive reference guide to the key concepts, ideas, movements, and trends of applied linguistics for language teaching. With over 300 entries of varying length, the volume includes essential coverage of language, language learning, and language teaching. Written in an accessible style, the entries draw…

  10. The GLAS Standard Data Products Specification-Data Dictionary, Version 1.0. Volume 15

    NASA Technical Reports Server (NTRS)

    Lee, Jeffrey E.

    2013-01-01

    The Geoscience Laser Altimeter System (GLAS) is the primary instrument for the ICESat (Ice, Cloud and Land Elevation Satellite) laser altimetry mission. ICESat was the benchmark Earth Observing System (EOS) mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat mission provided multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It also provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. This document contains the data dictionary for the GLAS standard data products. It details the parameters present on GLAS standard data products. Each parameter is defined with a short name, a long name, the units used on the product, the type of variable, a long description, and the products that contain it. The term standard data products refers to those EOS instrument data that are routinely generated for public distribution. These products are distributed by the National Snow and Ice Data Center (NSIDC).

  11. Dictionary of Multicultural Education.

    ERIC Educational Resources Information Center

    Grant, Carl A., Ed.; Ladson-Billings, Gloria, Ed.

    The focus of this dictionary is the meanings and perspectives of various terms that are used in multicultural education. Contributors have often addressed the literal meanings of words and terms as well as contextual meanings and examples that helped create those meanings. Like other dictionaries, this one is arranged alphabetically, but it goes…

  12. Sparse Representation Based Classification with Structure Preserving Dimension Reduction

    DTIC Science & Technology

    2014-03-13

    dictionary learning [39] used stochastic approximations to update the dictionary with a large data set. Laplacian score dictionary (LSD) [58], which is based on ...

  13. Chinese-English Aviation and Space Dictionary.

    ERIC Educational Resources Information Center

    Air Force Systems Command, Wright-Patterson AFB, OH. Foreign Technology Div.

    The Aviation and Space Dictionary is the second of a series of Chinese-English technical dictionaries under preparation by the Foreign Technology Division, United States Air Force Systems Command. The purpose of the series is to provide rapid reference tools for translators, abstracters, and research analysts concerned with scientific and…

  14. EFL Students' "Yahoo!" Online Bilingual Dictionary Use Behavior

    ERIC Educational Resources Information Center

    Tseng, Fan-ping

    2009-01-01

    This study examined 38 EFL senior high school students' "Yahoo!" online dictionary look-up behavior. In a language laboratory, the participants read an article on a reading sheet, underlined any words they did not know, looked up their unknown words in "Yahoo!" online bilingual dictionary, and wrote down the definitions of…

  15. Chinese-Cantonese Dictionary of Common Chinese-Cantonese Characters.

    ERIC Educational Resources Information Center

    Defense Language Inst., Washington, DC.

    This dictionary contains 1,500 Chinese-Cantonese characters (selected from three frequency lists), and more than 6,000 Chinese-Cantonese terms (selected from three Cantonese-English dictionaries). The characters are arranged alphabetically according to the U.S. Army Language School System of Romanization, which is described in the…

  16. Chinese-English Electronics and Telecommunications Dictionary, Vol. 2.

    ERIC Educational Resources Information Center

    Air Force Systems Command, Wright-Patterson AFB, OH. Foreign Technology Div.

    This is the second volume of the Electronics and Telecommunications Dictionary, the third of the series of Chinese-English technical dictionaries under preparation by the Foreign Technology Division, United States Air Force Systems Command. The purpose of the series is to provide rapid reference tools for translators, abstracters, and research…

  17. Chinese-English Electronics and Telecommunications Dictionary. Vol. 1.

    ERIC Educational Resources Information Center

    Air Force Systems Command, Wright-Patterson AFB, OH. Foreign Technology Div.

    This is the first volume of the Electronics and Telecommunications Dictionary, the third of the series of Chinese-English technical dictionaries under preparation by the Foreign Technology Division, United States Air Force Systems Command. The purpose of the series is to provide rapid reference tools for translators, abstracters, and research…

  18. Binukid Dictionary.

    ERIC Educational Resources Information Center

    Otanes, Fe T., Ed.; Wrigglesworth, Hazel

    1992-01-01

    The dictionary of Binukid, a language spoken in the Bukidnon province of the Philippines, is intended as a tool for students of Binukid and for native Binukid-speakers interested in learning English. A single dialect was chosen for this work. The dictionary is introduced by notes on Binukid grammar, including basic information about phonology and…

  19. Learning the Language of Difference: The Dictionary in the High School.

    ERIC Educational Resources Information Center

    Willinsky, John

    1987-01-01

    Reports on dictionaries' power to misrepresent gender. Examines the definitions of three terms (clitoris, penis, and vagina) in eight leading high school dictionaries. Concludes that the absence of certain female gender-related terms represents another instance of institutionalized silence about the experience of women. (MM)

  20. 75 FR 22805 - Federal Travel Regulation; Relocation Allowances; Standard Data Dictionary for Collection of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-30

    ... GENERAL SERVICES ADMINISTRATION [Proposed GSA Bulletin FTR 10-XXX; Docket 2010-0009; Sequence 1] Federal Travel Regulation; Relocation Allowances; Standard Data Dictionary for Collection of Transaction... GSA is posting online a proposed FTR bulletin that contains the data dictionary that large Federal...

  1. Defining Moments \\ di-'fi-ning 'mo-mnts \\

    ERIC Educational Resources Information Center

    Kilman, Carrie

    2012-01-01

    Children encounter new words every day. Although dictionaries designed for young readers can help students explore and experiment with language, it turns out many mainstream children's dictionaries fail to accurately describe the world in which many students live. The challenges to children's dictionary publishers can be steep. First, there is the…

  2. Getting the Most out of the Dictionary

    ERIC Educational Resources Information Center

    Marckwardt, Albert H.

    2012-01-01

    The usefulness of the dictionary as a reliable source of information for word meanings, spelling, and pronunciation is widely recognized. But even in these obvious matters, the information that the dictionary has to offer is not always accurately interpreted. With respect to pronunciation there seem to be two general pitfalls: (1) the…

  3. A dictionary of commonly used terms and terminologies in nonwovens

    USDA-ARS?s Scientific Manuscript database

    A need for a comprehensive dictionary of cotton was assessed by the International Cotton Advisory Committee (ICAC), Washington, DC. The ICAC has selected the topics (from the fiber to fabric) to be covered in the dictionary. The ICAC has invited researchers/scientists from across the globe, to compi...

  4. 78 FR 68343 - Homeownership Counseling Organizations Lists Interpretive Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-14

    ... into their definitional meanings, according to the Data Dictionary,\\7\\ to ensure clarity. This will be... dictionary for the Field ``Services'' can be found at http://data.hud.gov/Housing_Counselor/getServices , and a data dictionary for ``Languages'' can be found at http://data.hud.gov/Housing_Counselor/get...

  5. Measurement of Negativity Bias in Personal Narratives Using Corpus-Based Emotion Dictionaries

    ERIC Educational Resources Information Center

    Cohen, Shuki J.

    2011-01-01

    This study presents a novel methodology for the measurement of negativity bias using positive and negative dictionaries of emotion words applied to autobiographical narratives. At odds with the cognitive theory of mood dysregulation, previous text-analytical studies have failed to find significant correlation between emotion dictionaries and…

  6. A Proposal To Develop the Axiological Aspect in Onomasiological Dictionaries.

    ERIC Educational Resources Information Center

    Felices Lago, Angel Miguel

    It is argued that English dictionaries currently provide evaluative information in addition to descriptive information about the words they contain, and that this aspect of dictionaries should be developed and expanded on. First, the historical background and distribution of the axiological parameter in English-language onomasiological…

  7. Earliest English Definitions of Anaisthesia and Anaesthesia.

    PubMed

    Haridas, Rajesh P

    2017-11-01

    The earliest identified English definition of the word anaisthesia was discovered in the first edition (1684) of A Physical Dictionary, an English translation of Steven Blankaart's medical dictionary, Lexicon Medicum Graeco-Latinum. This definition was almost certainly the source of the definition of anaesthesia which appeared in Dictionarium Anglo-Britannicum (1708), a general-purpose English dictionary compiled by the lexicographer John Kersey. The words anaisthesia and anaesthesia have not been identified in English medical or surgical publications that antedate the earliest English dictionaries in which they are known to have been defined.

  8. MR fingerprinting reconstruction with Kalman filter.

    PubMed

    Zhang, Xiaodi; Zhou, Zechen; Chen, Shiyang; Chen, Shuo; Li, Rui; Hu, Xiaoping

    2017-09-01

    Magnetic resonance fingerprinting (MR fingerprinting or MRF) is a newly introduced quantitative magnetic resonance imaging technique, which enables simultaneous multi-parameter mapping in a single acquisition with improved time efficiency. The current MRF reconstruction method is based on dictionary matching, which may be limited by the discrete and finite nature of the dictionary and by the computational cost associated with dictionary construction, storage, and matching. In this paper, we describe a Kalman filter based reconstruction method for MRF, which avoids the use of a dictionary and yields continuous MR parameter measurements. In this Kalman filter framework, the Bloch equation of the inversion-recovery balanced steady-state free-precession (IR-bSSFP) MRF sequence was derived to predict the signal evolution, and the acquired signal was used to update the prediction. The algorithm gradually estimates accurate MR parameters during the recursive calculation. Single-pixel and numerical brain phantom simulations were implemented with the Kalman filter, and the results were compared with those from the dictionary matching reconstruction algorithm to demonstrate the feasibility and assess the performance of the Kalman filter algorithm. The results demonstrated that the Kalman filter algorithm is applicable to MRF reconstruction, eliminating the need for a predefined dictionary and yielding continuous MR parameter estimates, in contrast to the dictionary matching algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.
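
    For orientation, the sketch below shows a generic linear Kalman predict/update recursion of the kind the abstract describes; the actual method uses the (nonlinear) Bloch signal evolution of the IR-bSSFP sequence as its state model, so the matrices F, H, Q and R here are hypothetical placeholders.

```python
# Generic linear Kalman predict/update recursion, to illustrate the filtering
# loop described above. The paper's state model is the (nonlinear) Bloch
# signal evolution of the IR-bSSFP sequence; F, H, Q and R below are
# hypothetical placeholders for a linearized model.
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    x, P = x0.copy(), P0.copy()
    estimates = []
    for z in zs:                               # one acquired sample per TR
        x = F @ x                              # predict with the signal model
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (z - H @ x)                # update with the measurement
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```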

  9. Weakly Supervised Dictionary Learning

    NASA Astrophysics Data System (ADS)

    You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub

    2018-05-01

    We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.

  10. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition

    PubMed Central

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-01-01

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; dictionary learning thus serves as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used as the feature vector. Since this vector is high-dimensional, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted to identify and classify fault patterns automatically. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method for intelligent diagnosis of rotating machinery faults. The comparison analysis shows that the dictionary learning-based matrix construction approach outperforms mode decomposition-based methods in terms of capacity and adaptability for feature extraction. PMID:28346385

  11. Supervised dictionary learning for inferring concurrent brain networks.

    PubMed

    Zhao, Shijie; Han, Junwei; Lv, Jinglei; Jiang, Xi; Hu, Xintao; Zhao, Yu; Ge, Bao; Guo, Lei; Liu, Tianming

    2015-10-01

    Task-based fMRI (tfMRI) has been widely used to explore functional brain networks via predefined stimulus paradigms in the fMRI scan. Traditionally, the general linear model (GLM) has been the dominant approach for detecting task-evoked networks. However, the GLM focuses on task-evoked or event-evoked brain responses and possibly ignores intrinsic brain functions. In comparison, dictionary learning and sparse coding methods have attracted much attention recently, and these methods have shown promise in automatically and systematically decomposing fMRI signals into meaningful task-evoked and intrinsic concurrent networks. Nevertheless, two notable limitations of current data-driven dictionary learning methods are that prior knowledge of the task paradigm is not sufficiently utilized and that establishing correspondences among dictionary atoms in different brains has been challenging. In this paper, we propose a novel supervised dictionary learning and sparse coding method for inferring functional networks from tfMRI data, which combines the advantages of model-driven and data-driven methods. The basic idea is to fix the task stimulus curves as predefined model-driven dictionary atoms and optimize only the remaining data-driven dictionary atoms. Application of this methodology to the publicly available Human Connectome Project (HCP) tfMRI datasets has achieved promising results.
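
    A minimal sketch of the fixed-plus-free atom idea follows, assuming the task regressors are held as fixed rows of the dictionary while the remaining atoms are refit by least squares between sparse-coding passes; the data matrix and the fixed atoms are hypothetical and this is not the authors' solver.

```python
# Minimal sketch of the idea above: keep the task-paradigm regressors as fixed
# dictionary atoms and alternate between (1) sparse coding against
# [D_fixed; D_free] and (2) a least-squares refit of the free atoms only.
# X (signals x time points) and D_fixed are hypothetical; this is not the
# authors' optimization procedure.
import numpy as np
from sklearn.decomposition import sparse_encode

def supervised_dl(X, D_fixed, n_free=20, n_iter=10, n_nonzero=5, seed=0):
    rng = np.random.default_rng(seed)
    T = X.shape[1]
    D_free = rng.standard_normal((n_free, T))
    D_free /= np.linalg.norm(D_free, axis=1, keepdims=True)
    for _ in range(n_iter):
        D = np.vstack([D_fixed, D_free])
        A = sparse_encode(X, D, algorithm="omp", n_nonzero_coefs=n_nonzero)
        A_fix, A_free = A[:, :len(D_fixed)], A[:, len(D_fixed):]
        residual = X - A_fix @ D_fixed
        D_free = np.linalg.lstsq(A_free, residual, rcond=None)[0]
        D_free /= np.maximum(np.linalg.norm(D_free, axis=1, keepdims=True), 1e-12)
    D = np.vstack([D_fixed, D_free])
    A = sparse_encode(X, D, algorithm="omp", n_nonzero_coefs=n_nonzero)
    return D, A
```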

  12. Blind compressive sensing dynamic MRI

    PubMed Central

    Lingala, Sajan Goud; Jacob, Mathews

    2013-01-01

    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference between the BCS scheme and current low-rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is a linear combination of a data-consistency term and a sparsity-promoting ℓ1 prior on the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of these subproblems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding. Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary-aware setting. Since the overhead in additionally estimating the dictionary is low, this method can be very useful in dynamic MRI applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast-enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low-rank and compressed sensing schemes. PMID:23542951

  13. A dictionary learning approach for Poisson image deblurring.

    PubMed

    Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong

    2013-07-01

    The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and method noise, the proposed algorithm outperforms state-of-the-art methods.
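
    As a partial illustration, the sketch below computes only the Poisson data-fidelity term and its gradient for a known blur kernel via FFT convolutions; the learned-dictionary patch prior and the total variation term of the full model are omitted, and the kernel layout is an assumption.

```python
# Sketch of the Poisson data-fidelity term and its gradient for a known blur
# kernel, with circular convolutions computed via FFTs. Only the fidelity
# piece is shown; the learned-dictionary patch prior and the total variation
# term of the full model are omitted. `psf` is assumed to be padded to the
# image size with its center at pixel (0, 0).
import numpy as np

def blur(x, otf):
    return np.real(np.fft.ifft2(np.fft.fft2(x) * otf))

def poisson_fidelity_and_grad(x, y, psf):
    otf = np.fft.fft2(psf)
    hx = np.maximum(blur(x, otf), 1e-8)              # H x, kept positive
    fidelity = np.sum(hx - y * np.log(hx))           # -log-likelihood (+const)
    grad = blur(1.0 - y / hx, np.conj(otf))          # H^T (1 - y / Hx)
    return fidelity, grad
```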

  14. Efficient convolutional sparse coding

    DOEpatents

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
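
    The sketch below is a simplified ISTA-style iteration for convolutional sparse coding in which the convolutions are evaluated with FFTs; it is a stand-in for, not an implementation of, the ADMM frequency-domain solver described above, and the filters, step size and penalty are hypothetical.

```python
# Simplified ISTA-style iteration for convolutional sparse coding, with all
# convolutions evaluated via FFTs. This is only a stand-in for the ADMM
# frequency-domain solver described above; `filters`, the step size and the
# penalty are hypothetical, and all convolutions are circular.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def conv_sparse_code(image, filters, lam=0.05, step=0.1, n_iter=100):
    F = np.array([np.fft.fft2(f, s=image.shape) for f in filters])
    maps = np.zeros((len(filters),) + image.shape)
    for _ in range(n_iter):
        # reconstruction sum_m d_m * x_m via the Fourier domain
        recon = np.real(np.fft.ifft2(np.sum(F * np.fft.fft2(maps, axes=(-2, -1)),
                                            axis=0)))
        R = np.fft.fft2(recon - image)
        grad = np.real(np.fft.ifft2(np.conj(F) * R, axes=(-2, -1)))
        maps = soft(maps - step * grad, step * lam)   # proximal gradient step
    return maps
```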

  15. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    NASA Astrophysics Data System (ADS)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D tensor invariants for optimal synthetic representation, description, and learning of n-dimensional shape and texture, was presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space are presented, together with a demo application. Automatic progressive model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval, and data mining. An ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of world objects. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" for reliably computing optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated, optimal synthetic description of n-dimensional shape and texture by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour-shape invariant characteristics. Any sophisticated application that needs effective, robust capture and parameterization of geometric and colour invariant object attributes for reliable automated object learning and discrimination can benefit from the performance of the GEOGINE progressive automated model generation kernel. The main operational advantages over previous, similar approaches are: 1) progressive automated invariant model generation, 2) an invariant minimal complete description set for computational efficiency, and 3) arbitrary model precision for robust object description and identification.

  16. Induced lexico-syntactic patterns improve information extraction from online medical forums.

    PubMed

    Gupta, Sonal; MacLean, Diana L; Heer, Jeffrey; Manning, Christopher D

    2014-01-01

    The goal of this work is to reliably extract two entity types, symptoms and conditions (SCs) and drugs and treatments (DTs), from patient-authored text (PAT) by learning lexico-syntactic patterns from data annotated with seed dictionaries. Despite the increasing quantity of PAT (e.g., online discussion threads), tools for identifying medical entities in PAT are limited. When applied to PAT, existing tools either fail to identify specific entity types or perform poorly. Identification of SC and DT terms in PAT would enable exploration of efficacy and side effects not only for pharmaceutical drugs, but also for home remedies and components of daily care. We use SC and DT term dictionaries compiled from online sources to label several discussion forums from MedHelp (http://www.medhelp.org). We then iteratively induce lexico-syntactic patterns corresponding strongly to each entity type to extract new SC and DT terms. Our system is able to extract symptom descriptions and treatments absent from our original dictionaries, such as 'LADA', 'stabbing pain', and 'cinnamon pills'. Our system extracts DT terms with 58-70% F1 score and SC terms with 66-76% F1 score on two forums from MedHelp. We show improvements over MetaMap, OBA, a conditional random field-based classifier, and a previous pattern learning approach. Our entity extractor based on lexico-syntactic patterns is a successful and preferable technique for identifying specific entity types in PAT. To the best of our knowledge, this is the first paper to extract SC and DT entities from PAT. We exhibit learning of informal terms often used in PAT but missing from typical dictionaries.

  17. Dictionary Learning for Data Recovery in Positron Emission Tomography

    PubMed Central

    Valiollahzadeh, SeyyedMajid; Clark, John W.; Mawlawi, Osama

    2015-01-01

    Compressed sensing (CS) aims to recover images from fewer measurements than that governed by the Nyquist sampling theorem. Most CS methods use analytical predefined sparsifying domains such as Total variation (TV), wavelets, curvelets, and finite transforms to perform this task. In this study, we evaluated the use of dictionary learning (DL) as a sparsifying domain to reconstruct PET images from partially sampled data, and compared the results to the partially and fully sampled image (baseline). A CS model based on learning an adaptive dictionary over image patches was developed to recover missing observations in PET data acquisition. The recovery was done iteratively in two steps: a dictionary learning step and an image reconstruction step. Two experiments were performed to evaluate the proposed CS recovery algorithm: an IEC phantom study and five patient studies. In each case, 11% of the detectors of a GE PET/CT system were removed and the acquired sinogram data were recovered using the proposed DL algorithm. The recovered images (DL) as well as the partially sampled images (with detector gaps) for both experiments were then compared to the baseline. Comparisons were done by calculating RMSE, contrast recovery and SNR in ROIs drawn in the background, and spheres of the phantom as well as patient lesions. For the phantom experiment, the RMSE for the DL recovered images were 5.8% when compared with the baseline images while it was 17.5% for the partially sampled images. In the patients’ studies, RMSE for the DL recovered images were 3.8%, while it was 11.3% for the partially sampled images. Our proposed CS with DL is a good approach to recover partially sampled PET data. This approach has implications towards reducing scanner cost while maintaining accurate PET image quantification. PMID:26161630

  18. Change detection of medical images using dictionary learning techniques and principal component analysis.

    PubMed

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-07-01

    Automatic change detection methods for identifying the changes of serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those of brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of magnetic resonance imaging (MRI) scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques are being used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. We present an improved version of the EigenBlockCD algorithm, named the EigenBlockCD-2. The EigenBlockCD-2 algorithm performs an initial global registration and identifies the changes between serial MR images of the brain. Blocks of pixels from a baseline scan are used to train local dictionaries to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between [Formula: see text] and [Formula: see text] norms as two possible similarity measures in the improved EigenBlockCD-2 algorithm. We show the advantages of the [Formula: see text] norm over the [Formula: see text] norm both theoretically and numerically. We also demonstrate the performance of the new EigenBlockCD-2 algorithm for detecting changes of MR images and compare our results with those provided in the recent literature. Experimental results with both simulated and real MRI scans show that our improved EigenBlockCD-2 algorithm outperforms the previous methods. It detects clinical changes while ignoring the changes due to the patient's position and other acquisition artifacts.

  19. Dictionary learning for data recovery in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Valiollahzadeh, SeyyedMajid; Clark, John W., Jr.; Mawlawi, Osama

    2015-08-01

    Compressed sensing (CS) aims to recover images from fewer measurements than that governed by the Nyquist sampling theorem. Most CS methods use analytical predefined sparsifying domains such as total variation, wavelets, curvelets, and finite transforms to perform this task. In this study, we evaluated the use of dictionary learning (DL) as a sparsifying domain to reconstruct PET images from partially sampled data, and compared the results to the partially and fully sampled image (baseline). A CS model based on learning an adaptive dictionary over image patches was developed to recover missing observations in PET data acquisition. The recovery was done iteratively in two steps: a dictionary learning step and an image reconstruction step. Two experiments were performed to evaluate the proposed CS recovery algorithm: an IEC phantom study and five patient studies. In each case, 11% of the detectors of a GE PET/CT system were removed and the acquired sinogram data were recovered using the proposed DL algorithm. The recovered images (DL) as well as the partially sampled images (with detector gaps) for both experiments were then compared to the baseline. Comparisons were done by calculating RMSE, contrast recovery and SNR in ROIs drawn in the background, and spheres of the phantom as well as patient lesions. For the phantom experiment, the RMSE for the DL recovered images were 5.8% when compared with the baseline images while it was 17.5% for the partially sampled images. In the patients’ studies, RMSE for the DL recovered images were 3.8%, while it was 11.3% for the partially sampled images. Our proposed CS with DL is a good approach to recover partially sampled PET data. This approach has implications toward reducing scanner cost while maintaining accurate PET image quantification.

  20. Archiving Spectral Libraries in the Planetary Data System

    NASA Astrophysics Data System (ADS)

    Slavney, S.; Guinness, E. A.; Scholes, D.; Zastrow, A.

    2017-12-01

    Spectral libraries are becoming popular candidates for archiving in PDS. With the increase in the number of individual investigators funded by programs such as NASA's PDART, the PDS Geosciences Node is receiving many requests for support from proposers wishing to archive various forms of laboratory spectra. To accommodate the need for a standardized approach to archiving spectra, the Geosciences Node has designed the PDS Spectral Library Data Dictionary, which contains PDS4 classes and attributes specifically for labeling spectral data, including a classification scheme for samples. The Reflectance Experiment Laboratory (RELAB) at Brown University, which has long been a provider of spectroscopy equipment and services to the science community, has provided expert input into the design of the dictionary. Together the Geosciences Node and RELAB are preparing the whole of the RELAB Spectral Library, consisting of many thousands of spectra collected over the years, to be archived in PDS. An online interface for searching, displaying, and downloading selected spectra is planned, using the Spectral Library metadata recorded in the PDS labels. The data dictionary and online interface will be extended to include spectral libraries submitted by other data providers. The Spectral Library Data Dictionary is now available from PDS at https://pds.nasa.gov/pds4/schema/released/. It can be used in PDS4 labels for reflectance spectra as well as for Raman, XRF, XRD, LIBS, and other types of spectra. Ancillary data such as images, chemistry, and abundance data are also supported. To help generate PDS4-compliant labels for spectra, the Geosciences Node provides a label generation program called MakeLabels (http://pds-geosciences.wustl.edu/tools/makelabels.html) which creates labels from a template, and which can be used for any kind of PDS4 label. For information, contact the Geosciences Node at geosci@wunder.wustl.edu.

  1. Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Felix; Quach, Tu-Thach; Wheeler, Jason

    File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used to supplement existing hand-engineered features.
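
    The following sketch illustrates one plausible reading of the pipeline: learn a sparse dictionary over sliding byte n-grams, pool the absolute sparse codes of a fragment's n-grams into a feature vector, and train an SVM on those features. The synthetic fragments, the treatment of each n-byte window as a real-valued vector, and the mean-pooling step are assumptions for illustration, not the authors' code.

      # Sparse-coded byte n-grams as file-fragment features (illustrative sketch).
      import numpy as np
      from sklearn.decomposition import DictionaryLearning
      from sklearn.svm import LinearSVC

      def ngram_windows(fragment, n=8):
          """Slide an n-byte window over the fragment; each window is one vector."""
          arr = np.frombuffer(fragment, dtype=np.uint8).astype(float) / 255.0
          return np.lib.stride_tricks.sliding_window_view(arr, n)

      def fragment_feature(fragment, dico, n=8):
          """Mean-pooled absolute sparse codes of all n-gram windows in a fragment."""
          return np.abs(dico.transform(ngram_windows(fragment, n))).mean(axis=0)

      # Synthetic stand-ins; real file fragments with type labels would go here.
      rng = np.random.default_rng(0)
      train_fragments = [(rng.integers(0, 256, 512, dtype=np.uint8).tobytes(), lbl)
                         for lbl in (0, 1) for _ in range(10)]

      # Learn an over-complete n-gram dictionary from a subsample of windows.
      sample = np.vstack([ngram_windows(f)[::16] for f, _ in train_fragments])
      dico = DictionaryLearning(n_components=64, alpha=1.0, max_iter=20,
                                transform_n_nonzero_coefs=4, random_state=0).fit(sample)

      # Train a standard classifier (SVM) on the pooled sparse-code features.
      X = np.vstack([fragment_feature(f, dico) for f, _ in train_fragments])
      y = [lbl for _, lbl in train_fragments]
      clf = LinearSVC().fit(X, y)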

  2. Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification

    DOE PAGES

    Wang, Felix; Quach, Tu-Thach; Wheeler, Jason; ...

    2018-04-05

    File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used to supplement existing hand-engineered features.

  3. Fast group matching for MR fingerprinting reconstruction.

    PubMed

    Cauley, Stephen F; Setsompop, Kawin; Ma, Dan; Jiang, Yun; Ye, Huihui; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L

    2015-08-01

    MR fingerprinting (MRF) is a technique for quantitative tissue mapping using pseudorandom measurements. To estimate tissue properties such as T1, T2, proton density, and B0, the rapidly acquired data are compared against a large dictionary of Bloch simulations. This matching process can be a very computationally demanding portion of MRF reconstruction. We introduce a fast group matching algorithm (GRM) that exploits inherent correlation within MRF dictionaries to create highly clustered groupings of the elements. During matching, a group-specific signature is first used to remove poor matching possibilities. Group principal component analysis (PCA) is used to evaluate all remaining tissue types. In vivo 3 Tesla brain data were used to validate the accuracy of our approach. For a trueFISP sequence with over 196,000 dictionary elements, 1000 MRF samples, and an image matrix of 128 × 128, GRM was able to map MR parameters within 2 s using standard vendor computational resources. This is an order of magnitude faster than global PCA and nearly two orders of magnitude faster than direct matching, with comparable accuracy (1-2% relative error). The proposed GRM method is a highly efficient model reduction technique for MRF matching and should enable clinically relevant reconstruction accuracy and time on standard vendor computational resources. © 2014 Wiley Periodicals, Inc.
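
    A much-reduced sketch of group-based matching in this spirit appears below: cluster the dictionary, compute a signature per group, prune groups whose signature correlates poorly with the measured signal, and compare against the surviving groups in a small PCA subspace. The toy exponential-recovery dictionary, the cluster count, and the scoring by projection onto uncentered principal components are assumptions; the actual GRM algorithm is more elaborate.

      # Group matching sketch: clustering + signature pruning + per-group PCA scoring.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA

      def build_groups(D, n_groups=40, n_pcs=10, seed=0):
          """D: (n_entries, n_timepoints) dictionary with L2-normalised rows."""
          labels = KMeans(n_clusters=n_groups, n_init=4, random_state=seed).fit_predict(D)
          groups = []
          for g in range(n_groups):
              idx = np.flatnonzero(labels == g)
              sig = D[idx].mean(axis=0)
              sig /= np.linalg.norm(sig)                              # group signature
              pca = PCA(n_components=min(n_pcs, len(idx), D.shape[1])).fit(D[idx])
              groups.append((idx, sig, pca.components_))              # (members, signature, basis)
          return groups

      def match(signal, D, groups, keep=5):
          x = signal / np.linalg.norm(signal)
          # Prune: keep only the groups whose signature correlates best with the signal.
          order = np.argsort([-abs(sig @ x) for _, sig, _ in groups])[:keep]
          best, best_score = None, -np.inf
          for gi in order:
              idx, _, P = groups[gi]
              scores = np.abs((D[idx] @ P.T) @ (P @ x))   # approximate correlations in the subspace
              j = int(np.argmax(scores))
              if scores[j] > best_score:
                  best, best_score = idx[j], scores[j]
          return best                                      # index of the best-matching entry

      # Toy dictionary: exponential-recovery curves over a grid of decay constants.
      t = np.linspace(0, 4, 200)
      taus = np.linspace(0.1, 3.0, 2000)
      D = 1.0 - np.exp(-t[None, :] / taus[:, None])
      D /= np.linalg.norm(D, axis=1, keepdims=True)
      groups = build_groups(D)
      query = D[777] + 0.01 * np.random.default_rng(0).normal(size=200)
      print(match(query, D, groups))                       # expected to land near entry 777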

  4. Energy and Quality Evaluation for Compressive Sensing of Fetal Electrocardiogram Signals

    PubMed Central

    Da Poian, Giulia; Brandalise, Denis; Bernardini, Riccardo; Rinaldo, Roberto

    2016-01-01

    This manuscript addresses the problem of non-invasive fetal Electrocardiogram (ECG) signal acquisition with low power/low complexity sensors. A sensor architecture using the Compressive Sensing (CS) paradigm is compared to a standard compression scheme using wavelets in terms of energy consumption vs. reconstruction quality, and, more importantly, vs. performance of fetal heart beat detection in the reconstructed signals. We show in this paper that a CS scheme based on reconstruction with an over-complete dictionary has similar reconstruction quality to one based on wavelet compression. We also consider, as a more important figure of merit, the accuracy of fetal beat detection after reconstruction as a function of the sensor power consumption. Experimental results with an actual implementation in a commercial device show that CS allows significant reduction of energy consumption in the sensor node, and that the detection performance is comparable to that obtained from original signals for compression ratios up to about 75%. PMID:28025510
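
    The toy sketch below shows the generic CS ingredients this comparison rests on: a random sensing matrix compresses a compressible signal, and the signal is recovered by a sparse solver (OMP) working in an over-complete dictionary. The DCT-plus-spikes dictionary, the synthetic "ECG-like" signal, and the parameter choices are illustrative assumptions, not the paper's sensor design.

      # Toy compressive sensing with an over-complete dictionary (DCT atoms + spikes).
      import numpy as np
      from scipy.fft import idct
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      N, M, K = 256, 96, 12                         # signal length, measurements, sparsity

      dct_atoms = idct(np.eye(N), axis=0, norm='ortho')
      D = np.hstack([dct_atoms, np.eye(N)])         # over-complete dictionary (2N atoms)

      # Synthetic compressible signal: a few DCT atoms plus two spikes.
      coef = np.zeros(2 * N)
      coef[rng.choice(N, K - 2, replace=False)] = rng.normal(size=K - 2)
      coef[N + rng.choice(N, 2, replace=False)] = 3.0
      x = D @ coef

      Phi = rng.normal(size=(M, N)) / np.sqrt(M)    # random sensing matrix at the sensor
      y = Phi @ x                                   # compressed measurements

      # Reconstruction: solve y = (Phi D) s with a sparsity constraint, then x_hat = D s.
      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False).fit(Phi @ D, y)
      x_hat = D @ omp.coef_
      print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))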

  5. Are you ready for the net generation or the free agent learner?

    PubMed

    Desilets, Lynore D

    2011-08-01

    The newest generation of soon to be health care professionals was raised by Father Google and Mother IM. The world has been a connected place for them their entire lives. They are experts at multitasking. They prefer electronic over print news, dictionaries, and maps; cell phones that do more than make phone calls; e-mail exchanges over face-to-face visits; online payments over checks; and credit cards over cash. In this column, I share some information about technology and these digital natives that I, a digital immigrant, have recently discovered. Copyright 2011, SLACK Incorporated.

  6. Injection Principles from Combustion Studies in a 200-Pound-Thrust Rocket Engine Using Liquid Oxygen and Heptane

    NASA Technical Reports Server (NTRS)

    Heidmann, M. F.; Auble, C. M.

    1955-01-01

    The importance of atomizing and mixing liquid oxygen and heptane was studied in a 200-pound-thrust rocket engine. Ten injector elements were used with both steel and transparent chambers. Characteristic velocity was measured over a range of mixture ratios. Combustion gas-flow and luminosity patterns within the chamber were obtained by photographic methods. The results show that, for efficient combustion, the propellants should be both atomized and mixed. Heptane atomization controlled the combustion rate to a much larger extent than oxygen atomization. Induced mixing, however, was required to complete combustion in the smallest volume. For stable, high-efficiency combustion and smooth engine starts, mixing after atomization was most promising.

  7. Review of "A Dictionary of Global Huayu"

    ERIC Educational Resources Information Center

    Li, Rui

    2016-01-01

    As the first Huayu dictionary published by the Commercial Press, "A Dictionary of Global Huayu" (Chinese Language) did pioneering work in many respects. It expanded the influence of Chinese and provided Chinese speakers abroad with a valuable reference book for study and communication. Nevertheless, there are still some demerits. First of all,…

  8. Variant Spellings in Modern American Dictionaries.

    ERIC Educational Resources Information Center

    Emery, Donald W.

    A record of how present-day desk dictionaries are recognizing the existence of variant or secondary spellings for many common English words, this reference list can be used by teachers of English and authors of spelling lists. Originally published in 1958, this revised edition uses two dictionaries not in existence then and the revised editions of…

  9. A Survey of Meaning Discrimination in Selected English/Spanish Dictionaries.

    ERIC Educational Resources Information Center

    Powers, Michael D.

    1985-01-01

    Examines the treatment of sense discrimination in eight Spanish/English English/Spanish bilingual dictionaries and one specialized dictionary. Does this by analyzing 30 words that Torrents des Prats determined have at least nine different sense discriminations from English into Spanish. Larousse was found to be far superior to the others. (SED)

  10. Aleut Dictionary (Unangam Tunudgusii). An Unabridged Lexicon of the Aleutian, Pribilof, and Commander Islands Aleut Language.

    ERIC Educational Resources Information Center

    Bergsland, Knut, Comp.

    This comprehensive dictionary draws on ethnographic and linguistic work of the Aleut language and culture dating to 1745. An introductory section explains the dictionary's format, offers a brief historical survey, and contains notes on Aleut phonology and orthography, dialectal differences and developments, Eskimo-Aleut phonological…

  11. Usage and Efficacy of Electronic Dictionaries for a Language without Word Boundaries

    ERIC Educational Resources Information Center

    Toyoda, Etsuko

    2016-01-01

    There is cumulative evidence suggesting that hyper-glossing facilitates lower-level processing and enhances reading comprehension. There are plentiful studies on electronic dictionaries for English. However, research on e-dictionaries for languages with no boundaries between words is still scarce. The main aim for the current study is to…

  12. Chinese-English Technical Dictionaries. Volume 1, Aviation and Space.

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC. Aerospace Technology Div.

    The present dictionary is the first of a series of Chinese-English technical dictionaries under preparation by the Aerospace Technology Division of the Library of Congress. The purpose of the series is to provide rapid reference tools for translators, abstractors, and research analysts concerned with scientific and technical materials published in…

  13. Chinese-English and English-Chinese Dictionaries in the Library of Congress. An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Dunn, Robert, Comp.

    An annotated bibliography of the Library of Congress' Chinese-English holdings on all subjects, as well as certain polyglot and multilingual dictionaries with English and Chinese entries. Included are general, encyclopaedic and comprehensive dictionaries; vocabularies; word lists; syllabaries; lists of place names, personal names, nomenclature,…

  14. English-Dari Dictionary.

    ERIC Educational Resources Information Center

    Peace Corps, Washington, DC.

    This 7,000-word dictionary is designed for English speakers learning Dari. The dictionary consists of two parts, the first a reference to find words easily translatable from one language to the other, the second a list of idioms and short phrases commonly used in everyday conversation, yet not readily translatable. Many of these entries have no…

  15. An Electronic Dictionary and Translation System for Murrinh-Patha

    ERIC Educational Resources Information Center

    Seiss, Melanie; Nordlinger, Rachel

    2012-01-01

    This paper presents an electronic dictionary and translation system for the Australian language Murrinh-Patha. Its complex verbal structure makes learning Murrinh-Patha very difficult. Designing learning materials or a dictionary which is easy to understand and to use also presents a challenge. This paper discusses some of the difficulties posed by…

  16. Linguistic and Cultural Strategies in ELT Dictionaries

    ERIC Educational Resources Information Center

    Corrius, Montse; Pujol, Didac

    2010-01-01

    There are three main types of ELT dictionaries: monolingual, bilingual, and bilingualized. Each type of dictionary, while having its own advantages, also hinders the learning of English as a foreign language and culture in so far as it is written from a homogenizing (linguistic- and culture-centric) perspective. This paper presents a new type of…

  17. Dictionaries of African Sign Languages: An Overview

    ERIC Educational Resources Information Center

    Schmaling, Constanze H.

    2012-01-01

    This article gives an overview of dictionaries of African sign languages that have been published to date most of which have not been widely distributed. After an introduction into the field of sign language lexicography and a discussion of some of the obstacles that authors of sign language dictionaries face in general, I will show problems…

  18. Supporting Social Studies Reading Comprehension with an Electronic Pop-Up Dictionary

    ERIC Educational Resources Information Center

    Fry, Sara Winstead; Gosky, Ross

    2008-01-01

    This study investigated how middle school students' comprehension was impacted by reading social studies texts online with a pop-up dictionary function for every word in the text. A quantitative counterbalance design was used to determine how 129 middle school students' reading comprehension test scores for the pop-up dictionary reading differed…

  19. Paper, Electronic or Online? Different Dictionaries for Different Activities

    ERIC Educational Resources Information Center

    Pasfield-Neofitou, Sarah

    2009-01-01

    Despite research suggesting that teachers highly influence their students' knowledge and use of language learning resources such as dictionaries (Loucky, 2005; Yamane, 2006), it appears that dictionary selection and use is considered something to be dealt with outside the classroom. As a result, many students receive too little advice to be able…

  20. 76 FR 10055 - Changes to the Public Housing Assessment System (PHAS): Physical Condition Scoring Notice

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-23

    ... Weights and Criticality Levels, and Dictionary of Deficiency Definitions The Item Weights and Criticality Levels tables and the Dictionary of Deficiency Definitions, currently in use, were published as... Dictionary of Deficiency Definitions is found at http://www.hud.gov/offices/reac/pdf/pass_dict2.3.pdf . V...

  1. Evaluating Online Bilingual Dictionaries: The Case of Popular Free English-Polish Dictionaries

    ERIC Educational Resources Information Center

    Lew, Robert; Szarowska, Agnieszka

    2017-01-01

    Language learners today exhibit a strong preference for free online resources. One problem with such resources is that their quality can vary dramatically. Building on related work on monolingual resources for English, we propose an evaluation framework for online bilingual dictionaries, designed to assess lexicographic quality in four major…

  2. A Dictionary of Hindi Verbal Expressions (Hindi-English). Final Report.

    ERIC Educational Resources Information Center

    Bahl, Kali Charan, Comp.

    This dictionary covers approximately 28,277 verbal expressions in modern standard Hindi and their rendered English equivalents. The study lists longer verbal expressions which are generally matched by single verbs in English. The lexicographer notes that the majority of entries in this dictionary do not appear in their present form in most other…

  3. Aspects of Sentence Retrieval

    DTIC Science & Technology

    2006-09-01

    Excerpts from the table of contents and list of figures: an English-to-Arabic-to-English lexicon; a WordNet probabilistic dictionary; examples of "translations" of the terms "zebra" and "galileo" from a trained translation dictionary; and a comparison of using WordNet as a translation table versus as a dictionary during the training of a translation table.

  4. Bilingualised Dictionaries: How Learners Really Use Them.

    ERIC Educational Resources Information Center

    Laufer, Batia; Kimmel, Michal

    1997-01-01

    Seventy native Hebrew-speaking English-as-a-Second-Language students participated in a study that investigated what part of an entry second-language learners read when they look up an unfamiliar word in a bilingualised dictionary: the monolingual, the bilingual, or both. Results suggest the bilingualised dictionary is very effective because it is…

  5. Dictionnaires du francais langue etrangere (Dictionaries for French as a Second Language).

    ERIC Educational Resources Information Center

    Gross, Gaston; Ibrahim, Amr

    1981-01-01

    Examines the purposes served by native language dictionaries as an introduction to the review of three monolingual French dictionaries for foreigners. Devotes particular attention to the most recent, the "Dictionnaire du francais langue etrangere", published by Larousse. Stresses the characteristics that are considered desirable for this type of…

  6. Assigning categorical information to Japanese medical terms using MeSH and MEDLINE.

    PubMed

    Onogi, Yuzo

    2007-01-01

    This paper reports on the assignment of MeSH (Medical Subject Headings) categories to Japanese terms in an English-Japanese dictionary using the titles and abstracts of articles indexed in MEDLINE. In a previous study, 30,000 of the 80,000 terms in the dictionary were mapped to MeSH terms by normalized comparison. It was reasoned that if the remaining dictionary terms appeared in MEDLINE-indexed articles that are indexed using MeSH terms, then the relevance between the dictionary terms and MeSH terms could be calculated, and thus MeSH categories assigned. This study compares two approaches for calculating the weight matrix: one is the TF*IDF method and the other uses the inner product of two weight matrices. About 20,000 additional dictionary terms were identified in MEDLINE-indexed articles published between 2000 and 2004. The precision and recall of these algorithms were evaluated separately for MeSH terms and non-MeSH terms. Unfortunately, the precision and recall of the algorithms were not good, but this method will help with manual assignment of MeSH categories to dictionary terms.
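
    As a rough illustration of the co-occurrence idea (not the paper's actual weighting), the snippet below scores candidate MeSH terms for a dictionary term by summing TF*IDF-style weights over the MEDLINE-like records in which the term appears; the records and scores are invented.

      # TF*IDF-weighted co-occurrence scoring of MeSH terms for a dictionary term (toy data).
      import math
      from collections import Counter, defaultdict

      # Each record: (title + abstract text, MeSH terms assigned by indexers).
      records = [
          ("gastric cancer surgery outcomes", {"Stomach Neoplasms", "Gastrectomy"}),
          ("chemotherapy for gastric cancer", {"Stomach Neoplasms", "Drug Therapy"}),
          ("knee arthroplasty rehabilitation", {"Arthroplasty, Replacement, Knee"}),
      ]

      def mesh_scores(term, records):
          """Rank MeSH terms for a dictionary term by TF * IDF over co-occurring records."""
          n_docs = len(records)
          df = Counter(m for _, mesh in records for m in mesh)     # document frequency per MeSH term
          scores = defaultdict(float)
          for text, mesh in records:
              tf = text.lower().count(term.lower())                # term frequency in the record
              if tf == 0:
                  continue
              for m in mesh:
                  scores[m] += tf * math.log(n_docs / df[m])       # IDF down-weights common MeSH terms
          return sorted(scores.items(), key=lambda kv: -kv[1])

      print(mesh_scores("gastric cancer", records))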

  7. Double-dictionary matching pursuit for fault extent evaluation of rolling bearing based on the Lempel-Ziv complexity

    NASA Astrophysics Data System (ADS)

    Cui, Lingli; Gong, Xiangyang; Zhang, Jianyu; Wang, Huaqing

    2016-12-01

    The quantitative diagnosis of rolling bearing fault severity is particularly crucial for making proper maintenance decisions. Aiming at the fault features of rolling bearings, a novel double-dictionary matching pursuit (DDMP) for fault extent evaluation of rolling bearings based on the Lempel-Ziv complexity (LZC) index is proposed in this paper. In order to match the features of rolling bearing faults, an impulse time-frequency dictionary and a modulation dictionary are constructed to form the double-dictionary using a parameterized function model. A novel matching pursuit method is then proposed based on this double-dictionary. For rolling bearing vibration signals with different fault sizes, the signals are decomposed and reconstructed by the DDMP. After noise reduction and signal reconstruction, the LZC index is introduced to evaluate the fault extent. Applications of this method to experimental fault signals of bearing outer races and inner races with different degrees of injury have shown that the proposed method can effectively evaluate the fault extent.
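
    For concreteness, the snippet below computes one common variant of the Lempel-Ziv complexity index on a median-binarised signal; the binarisation, the n/log2(n) normalisation, and the synthetic "healthy" versus "faulty" signals are assumptions used only to show how richer impulsive content raises the index.

      # Lempel-Ziv complexity of a median-binarised vibration signal (one common variant).
      import numpy as np

      def lz_phrase_count(bits):
          """Count new phrases in a left-to-right incremental (LZ-style) parsing."""
          s = ''.join('1' if b else '0' for b in bits)
          phrases, start, length = set(), 0, 1
          while start + length <= len(s):
              piece = s[start:start + length]
              if piece in phrases:
                  length += 1               # extend the phrase until it is new
              else:
                  phrases.add(piece)        # record the new phrase and move past it
                  start += length
                  length = 1
          return len(phrases)

      def lzc_index(signal):
          """Binarise around the median, then normalise the count by n / log2(n)."""
          bits = signal > np.median(signal)
          n = len(bits)
          return lz_phrase_count(bits) * np.log2(n) / n

      rng = np.random.default_rng(0)
      t = np.linspace(0, 40 * np.pi, 4096)
      healthy = np.sin(t) + 0.05 * rng.normal(size=t.size)
      faulty = healthy + 2.0 * (rng.random(t.size) < 0.02)    # sprinkle fault-like impulses
      print(lzc_index(healthy), lzc_index(faulty))            # the faulty signal typically scores higher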

  8. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion

    PubMed Central

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing biomedical literature calls for an automatic approach to the recognition and normalization of disease mentions in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among the proposed methods, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare them with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract PMID:27504009

  9. A dictionary without definitions: romanticist science in the production and presentation of the Grimm brothers' German dictionary, 1838-1863.

    PubMed

    Kistner, Kelly

    2014-12-01

    Between 1838 and 1863 the Grimm brothers led a collaborative research project to create a new kind of dictionary documenting the history of the German language. They imagined the work would present a scientific account of linguistic cohesiveness and strengthen German unity. However, their dictionary volumes (most of which were arranged and written by Jacob Grimm) would be variously criticized for their idiosyncratic character and ultimately seen as a poor, and even prejudicial, piece of scholarship. This paper argues that such criticisms may reflect a misunderstanding of the dictionary. I claim it can be best understood as an artifact of romanticist science and its epistemological privileging of subjective perception coupled with a deeply-held faith in inter-subjective congruence. Thus situated, it is a rare and detailed case of Romantic ideas and ideals applied to the scientific study of social artifacts. Moreover, the dictionary's organization, reception, and legacy provide insights into the changing landscape of scientific practice in Germany, showcasing the difficulties of implementing a romanticist vision of science amidst widening gaps between the public and professionals, generalists and specialists.

  10. Chinese-English Rocketry Dictionary. Volume 1

    DTIC Science & Technology

    1977-10-01

    Prepared by the 7602nd Air Intelligence Group, October 1977. Distribution Statement A: approved for public release; distribution unlimited. Reproduction of all or any part of this document without the written consent of the 7602nd Air Intelligence Group is strictly prohibited. ... Staff for Intelligence, United States Air Force, the 7602nd Air Intelligence Group reactivated it in April 1977 to complete the series, making use of

  11. Standardized Measures of Merit (MOM) Dictionary.

    DTIC Science & Technology

    1979-03-20

    (BLIP) of the target. 3. Coordinate transformations and rotation routines are required to compare the differences between the reference track and... (AD-A124 070, Standardized Measures of Merit (MOM) Dictionary, Air Force Electronic Warfare Center, Kelly AFB, TX, 20 March 1979, unclassified.)

  12. Image fusion using sparse overcomplete feature dictionaries

    DOEpatents

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.

  13. Accurate classification of brain gliomas by discriminate dictionary learning based on projective dictionary pair learning of proton magnetic resonance spectra.

    PubMed

    Adebileje, Sikiru Afolabi; Ghasemi, Keyvan; Aiyelabegan, Hammed Tanimowo; Saligheh Rad, Hamidreza

    2017-04-01

    Proton magnetic resonance spectroscopy is a powerful noninvasive technique that complements the structural images of cMRI, which aids biomedical and clinical research by identifying and visualizing the compositions of various metabolites within the tissues of interest. However, accurate classification of proton magnetic resonance spectroscopy is still a challenging issue in clinics due to low signal-to-noise ratio, overlapping peaks of metabolites, and the presence of background macromolecules. This paper evaluates the performance of a discriminative dictionary learning classifier based on the projective dictionary pair learning method for the task of classifying brain glioma proton magnetic resonance spectroscopy spectra, and the results were compared with sub-dictionary learning methods. The proton magnetic resonance spectroscopy data contain a total of 150 spectra (74 healthy, 23 grade II, 23 grade III, and 30 grade IV) from two databases. The datasets from both databases were first coupled together, followed by column normalization. The Kennard-Stone algorithm was used to split the datasets into training and test sets. Performance comparison based on overall accuracy, sensitivity, specificity, and precision was conducted. Based on the overall accuracy of our classification scheme, the dictionary pair learning method was found to outperform the sub-dictionary learning methods (97.78% compared with 68.89%, respectively). Copyright © 2016 John Wiley & Sons, Ltd.

  14. Situated Word Learning: Words of the Year (WsOY) and Social Studies Inquiry

    ERIC Educational Resources Information Center

    Heafner, Tina L.; Triplett, Nicholas; Handler, Laura; Massey, Dixie

    2018-01-01

    Current events influence public interest and drive Internet word searches. For over a decade, linguists and dictionary publishers have analyzed big data from Internet word searches to designate "Words of the Year" (WsOY). In this study, we examine how WsOY can foster critical digital literacy and illuminate essential aspects of inquiry…

  15. The Yale Kamusi Project: A Swahili-English, English-Swahili Dictionary.

    ERIC Educational Resources Information Center

    Hinnebusch, Thomas

    2001-01-01

    Evaluates the strengths and weaknesses of the Yale Online Kamusi project, an electronic Web-based Swahili-English and English-Swahili dictionary. The dictionary is described and checked for comprehensiveness, the adequacy and quality of the glosses and definitions are tested, and a number of recommendations are made to help make it a better and…

  16. Dictionary Form in Decoding, Encoding and Retention: Further Insights

    ERIC Educational Resources Information Center

    Dziemianko, Anna

    2017-01-01

    The aim of the paper is to investigate the role of dictionary form (paper versus electronic) in language reception, production and retention. The body of existing research does not give a clear answer as to which dictionary medium benefits users more. Divergent findings from many studies into the topic might stem from differences in research…

  17. Review of EFL Learners' Habits in the Use of Pedagogical Dictionaries

    ERIC Educational Resources Information Center

    El-Sayed, Al-Nauman Al-Amin Ali; Siddiek, Ahmed Gumaa

    2013-01-01

    A dictionary is an important device for both EFL teachers and EFL learners, and it is essential for conducting effective teaching and learning. Many investigations have been carried out to study foreign language learners' habits in the use of their dictionaries in reading, writing, testing, and translating. This paper sheds light on this issue;…

  18. 75 FR 17003 - Reconsideration of Interpretation of Regulations That Determine Pollutants Covered by Clean Air...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-02

    ... Law Dictionary (8th Ed.) is ``the act or process of controlling by rule or restriction.'' However, an alternative meaning in this same dictionary defines the term as ``a rule or order, having legal force, usu. issued by an administrative agency or local government.'' The primary meaning in Webster's dictionary for...

  19. The Efficacy of Dictionary Use while Reading for Learning New Words

    ERIC Educational Resources Information Center

    Hamilton, Harley

    2012-01-01

    This paper describes a study investigating the use of three types of dictionaries by deaf (i.e., with severe to profound hearing loss) high school students while reading to determine the effectiveness of each type for acquiring the meanings of unknown vocabulary in text. The dictionary types used include an online bilingual multimedia English-ASL…

  20. Bilingualised or Monolingual Dictionaries? Preferences and Practices of Advanced ESL Learners in Hong Kong

    ERIC Educational Resources Information Center

    Chan, Alice Y. W.

    2011-01-01

    This article reports on the results of a questionnaire and interview survey on Cantonese ESL learners' preference for bilingualised dictionaries or monolingual dictionaries. The questionnaire survey was implemented with about 160 university English majors in Hong Kong and three focus group interviews were conducted with 14 of these participants.…

  1. Dictionary Use of Undergraduate Students in Foreign Language Departments in Turkey at Present

    ERIC Educational Resources Information Center

    Tulgar, Aysegül Takkaç

    2017-01-01

    Foreign language learning has always been a process carried out with the help of dictionaries, whether monolingual dictionaries in the target language or bilingual dictionaries from the native language to the target language and from the target language to the native language. Dictionary use is an especially delicate issue for students in foreign language departments because students in those departments are…

  2. The Effect of a Simplified English Language Dictionary on a Reading Test. LEP Projects Report 1.

    ERIC Educational Resources Information Center

    Albus, Deb; Bielinski, John; Thurlow, Martha; Liu, Kristin

    This study was conducted to examine whether using a monolingual, simplified English dictionary as an accommodation on a reading test with limited-English-proficient (LEP) Hmong students improved test performance. Hmong students were chosen because they are often not literate in their first language. For these students, bilingual dictionaries are…

  3. Dictionaries without Borders: Expanding the Limits of the Academy

    ERIC Educational Resources Information Center

    Miller, Julia

    2012-01-01

    Many people imagine dictionaries to be bulky tomes that are hard to lift and are only useful for quick translations or to check the meaning or spelling of difficult words. This paper aims to dispel that myth and show how online versions of monolingual English learners' dictionaries (MELDs) can be used pedagogically to engage students in academic…

  4. A Selected Bibliography of Dictionaries. General Information Series, No. 9. Indochinese Refugee Education Guides.

    ERIC Educational Resources Information Center

    Center for Applied Linguistics, Arlington, VA.

    This is a selected, annotated bibliography of dictionaries useful to Indochinese refugees. The purpose of this guide is to provide the American teacher or sponsor with information on the use, limitations and availability of monolingual and bilingual dictionaries which can be used by refugees. The bibliography is preceded by notes on problems with…

  5. The Role of Electronic Pocket Dictionaries as an English Learning Tool among Chinese Students

    ERIC Educational Resources Information Center

    Jian, Hua-Li; Sandnes, Frode Eika; Law, Kris M. Y.; Huang, Yo-Ping; Huang, Yueh-Min

    2009-01-01

    This study addressed the role of electronic pocket dictionaries as a language learning tool among university students in Hong Kong and Taiwan. The target groups included engineering and humanities students at both undergraduate and graduate level. Speed of reference was found to be the main motivator for using an electronic pocket dictionary.…

  6. The Vertical Dust Profile over Gale Crater

    NASA Astrophysics Data System (ADS)

    Guzewich, S.; Newman, C. E.; Smith, M. D.; Moores, J.; Smith, C. L.; Moore, C.; Richardson, M. I.; Kass, D. M.; Kleinboehl, A.; Martin-Torres, F. J.; Zorzano, M. P.; Battalio, J. M.

    2017-12-01

    Regular joint observations of the atmosphere over Gale Crater from the orbiting Mars Reconnaissance Orbiter/Mars Climate Sounder (MCS) and Mars Science Laboratory (MSL) Curiosity rover allow us to create a coarse, but complete, vertical profile of dust mixing ratio from the surface to the upper atmosphere. We split the atmospheric column into three regions: the planetary boundary layer (PBL) within Gale Crater that is directly sampled by MSL (typically extending from the surface to 2-6 km in height), the region of atmosphere sampled by MCS profiles (typically 25-80 km above the surface), and the region of atmosphere between these two layers. Using atmospheric optical depth measurements from the Rover Environmental Monitoring System (REMS) ultraviolet photodiodes (in conjunction with MSL Mast Camera solar imaging), line-of-sight opacity measurements with the MSL Navigation Cameras (NavCam), and an estimate of the PBL depth from the MarsWRF general circulation model, we can directly calculate the dust mixing ratio within the Gale Crater PBL and then solve for the dust mixing ratio in the middle layer above Gale Crater but below the atmosphere sampled by MCS. Each atmospheric layer has a unique seasonal cycle of dust opacity, with Gale Crater's PBL reaching a maximum in dust mixing ratio near Ls = 270° and a minimum near Ls = 90°. The layer above Gale Crater, however, has a seasonal cycle that closely follows the global opacity cycle and reaches a maximum near Ls = 240° and exhibits a local minimum (associated with the "solsticial pauses") near Ls = 270°. Knowing the complete vertical profile also allows us to determine the frequency of high-altitude dust layers above Gale, and whether such layers truly exhibit the maximum dust mixing ratio within the entire vertical column. We find that 20% of MCS profiles contain an "absolute" high-altitude dust layer, i.e., one in which the dust mixing ratio within the high-altitude dust layer is the maximum dust mixing ratio in the vertical column of atmosphere over Gale Crater.

  7. Progressive multi-atlas label fusion by dictionary evolution.

    PubMed

    Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang

    2017-02-01

    Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to existing labeling methods for improving their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results for the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. A Removal of Eye Movement and Blink Artifacts from EEG Data Using Morphological Component Analysis

    PubMed Central

    Wagatsuma, Hiroaki

    2017-01-01

    EEG signals contain a large amount of ocular artifacts with different time-frequency properties that mix with the EEG activity of interest. Artifact removal has typically been addressed by existing decomposition methods such as PCA and ICA, which rely on the orthogonality of signal vectors or the statistical independence of signal components. We focused instead on signal morphology and propose a systematic decomposition method that identifies the type of signal components on the basis of sparsity in the time-frequency domain using Morphological Component Analysis (MCA), which reconstructs the signal accurately by using multiple bases in accordance with the concept of a "dictionary." MCA was applied to decompose real EEG signals and to clarify the best combination of dictionaries for this purpose. In our proposed semirealistic biological signal analysis with iEEGs recorded intracranially from the brain, those signals were successfully decomposed into their original types by a linear expansion of waveforms over redundant transforms: UDWT, DCT, LDCT, DST, and DIRAC. Our result demonstrated that the most suitable combination for EEG data analysis was UDWT, DST, and DIRAC, representing the baseline envelope, multifrequency waveforms, and spiking activities, respectively, as representative types of EEG morphologies. PMID:28194221
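
    A much-reduced stand-in for the multi-dictionary decomposition is sketched below: a signal is split into a smooth component (sparse in the DCT) and a spiky component (sparse in the identity/DIRAC basis) by alternating soft-thresholding with a decreasing threshold. The two-dictionary choice, the threshold schedule, and the synthetic "EEG-like" signal are assumptions; the paper's MCA uses UDWT, DCT, LDCT, DST, and DIRAC.

      # Two-dictionary morphological separation by alternating soft-thresholding (sketch).
      import numpy as np
      from scipy.fft import dct, idct

      def soft(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def mca_two_dicts(x, n_iter=100, lam_start=2.0, lam_end=0.05):
          smooth = np.zeros_like(x)
          spikes = np.zeros_like(x)
          for lam in np.linspace(lam_start, lam_end, n_iter):   # decreasing threshold
              # Update the smooth component: threshold the residual in the DCT domain.
              smooth = idct(soft(dct(x - spikes, norm='ortho'), lam), norm='ortho')
              # Update the spiky component: threshold the residual in the time (DIRAC) domain.
              spikes = soft(x - smooth, lam)
          return smooth, spikes

      t = np.linspace(0, 1, 512)
      eeg_like = np.sin(2 * np.pi * 10 * t)                     # oscillatory activity
      eeg_like[[100, 300]] += 4.0                               # blink-like spikes
      smooth, spikes = mca_two_dicts(eeg_like)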

  9. Dictionary learning-based CT detection of pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Wu, Panpan; Xia, Kewen; Zhang, Yanbo; Qian, Xiaohua; Wang, Ge; Yu, Hengyong

    2016-10-01

    Segmentation of lung features is one of the most important steps for computer-aided detection (CAD) of pulmonary nodules with computed tomography (CT). However, irregular shapes, complicated anatomical background and poor pulmonary nodule contrast make CAD a very challenging problem. Here, we propose a novel scheme for feature extraction and classification of pulmonary nodules through dictionary learning from training CT images, which does not require accurately segmented pulmonary nodules. Specifically, two classification-oriented dictionaries and one background dictionary are learnt to solve a two-category problem. In terms of the classification-oriented dictionaries, we calculate sparse coefficient matrices to extract intrinsic features for pulmonary nodule classification. The support vector machine (SVM) classifier is then designed to optimize the performance. Our proposed methodology is evaluated with the lung image database consortium and image database resource initiative (LIDC-IDRI) database, and the results demonstrate that the proposed strategy is promising.

  10. Magnetic Resonance Super-resolution Imaging Measurement with Dictionary-optimized Sparse Learning

    NASA Astrophysics Data System (ADS)

    Li, Jun-Bao; Liu, Jing; Pan, Jeng-Shyang; Yao, Hongxun

    2017-06-01

    Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medicine, and material science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging by increasing the radiation intensity, but high radiation intensity and long exposure to the magnetic field harm the human body. Thus, in practical applications, hardware-based imaging reaches its resolution limit. Software-based super-resolution technology is an effective way to improve image resolution. This work proposes a framework for dictionary-optimized sparse learning based MR super-resolution. The framework addresses the problem of sample selection for dictionary learning in sparse reconstruction. A textural complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that dictionary-optimized sparse learning improves the performance of sparse representation.

  11. A feature dictionary supporting a multi-domain medical knowledge base.

    PubMed

    Naeymi-Rad, F

    1989-01-01

    Because different terminology is used by physicians of different specialties in different locations to refer to the same feature (signs, symptoms, test results), it is essential that our knowledge development tools provide a means to access a common pool of terms. This paper discusses the design of an online medical dictionary that provides a solution to this problem for developers of multi-domain knowledge bases for MEDAS (Medical Emergency Decision Assistance System). Our Feature Dictionary supports phrase equivalents for features, feature interactions, feature classifications, and translations to the binary features generated by the expert during knowledge creation. It is also used in the conversion of a domain knowledge base to the database used by MEDAS inference diagnostic sessions. The Feature Dictionary also provides capabilities for complex queries across multiple domains using the supported relations. The Feature Dictionary supports three methods of feature representation: (1) for binary features, (2) for continuous-valued features, and (3) for derived features.
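
    A hypothetical data-structure sketch of such a feature dictionary is shown below, with phrase equivalents, classifications, and the three representation types mentioned (binary, continuous-valued, derived); the field names and the example entry are invented and do not reflect the actual MEDAS schema.

      # Hypothetical feature-dictionary entry and lookup (field names are invented).
      from dataclasses import dataclass, field
      from typing import Callable, Optional

      @dataclass
      class Feature:
          name: str                                    # canonical feature name
          kind: str                                    # 'binary' | 'continuous' | 'derived'
          synonyms: set = field(default_factory=set)   # phrase equivalents across specialties
          classes: set = field(default_factory=set)    # classifications (sign, symptom, test result)
          bins: Optional[list] = None                  # thresholds mapping continuous -> binary
          derive: Optional[Callable] = None            # rule combining other features

      class FeatureDictionary:
          def __init__(self):
              self._by_phrase = {}

          def add(self, feature):
              for phrase in {feature.name, *feature.synonyms}:
                  self._by_phrase[phrase.lower()] = feature

          def lookup(self, phrase):
              return self._by_phrase.get(phrase.lower())

      fd = FeatureDictionary()
      fd.add(Feature("fever", "continuous", synonyms={"pyrexia", "elevated temperature"},
                     classes={"sign"}, bins=[38.0]))   # above 38 C maps to the binary feature
      print(fd.lookup("Pyrexia").name)                 # -> 'fever'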

  12. Embedded sparse representation of fMRI data via group-wise dictionary optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Dajiang; Lin, Binbin; Faskowitz, Joshua; Ye, Jieping; Thompson, Paul M.

    2016-03-01

    Sparse learning enables dimension reduction and efficient modeling of high dimensional signals and images, but it may need to be tailored to best suit specific applications and datasets. Here we used sparse learning to efficiently represent functional magnetic resonance imaging (fMRI) data from the human brain. We propose a novel embedded sparse representation (ESR), to identify the most consistent dictionary atoms across different brain datasets via an iterative group-wise dictionary optimization procedure. In this framework, we introduced additional criteria to make the learned dictionary atoms more consistent across different subjects. We successfully identified four common dictionary atoms that follow the external task stimuli with very high accuracy. After projecting the corresponding coefficient vectors back into the 3-D brain volume space, the spatial patterns are also consistent with traditional fMRI analysis results. Our framework reveals common features of brain activation in a population, as a new, efficient fMRI analysis method.

  13. Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.

    PubMed

    Xiao Yang; Jianjiang Feng; Jie Zhou

    2014-05-01

    The dictionary-based orientation field estimation approach has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Recognizing that ridge orientations at different locations of fingerprints have different characteristics, we propose a localized dictionaries-based orientation field estimation algorithm, in which a noisy orientation patch at a location, output by a local estimation approach, is replaced by a real orientation patch from the local dictionary at the same location. The precondition for applying localized dictionaries is that the pose of the latent fingerprint needs to be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show that the proposed method markedly outperforms previous ones.

  14. Incoherent dictionary learning for reducing crosstalk noise in least-squares reverse time migration

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-05-01

    We propose to apply a novel incoherent dictionary learning (IDL) algorithm to regularize the least-squares inversion in seismic imaging. IDL is proposed to overcome the drawback of traditional dictionary learning algorithms, which lose partial texture information. First, the noisy image is divided into overlapping image patches, and some random patches are extracted for dictionary learning. Then, we apply the IDL technique to minimize the coherency between atoms during dictionary learning. Finally, the sparse representation problem is solved by a sparse coding algorithm, and the image is restored from those sparse coefficients. By reducing the correlation among atoms, it is possible to preserve most of the small-scale features in the image while removing much of the long-wavelength noise. The application of the IDL method to the regularization of seismic images from least-squares reverse time migration shows successful performance.

  15. Fast dictionary generation and searching for magnetic resonance fingerprinting.

    PubMed

    Jun Xie; Mengye Lyu; Jian Zhang; Hui, Edward S; Wu, Ed X; Ze Wang

    2017-07-01

    A super-fast dictionary generation and searching (DGS) algorithm was developed for MR parameter quantification using magnetic resonance fingerprinting (MRF). MRF is a new technique for simultaneously quantifying multiple MR parameters using one temporally resolved MR scan, but it has a multiplicative computational complexity, resulting in a heavy burden of dictionary generation, storage, and retrieval that can easily become intractable for state-of-the-art computers. Based on a retrospective analysis of the dictionary matching objective function, a multi-scale ZOOM-like DGS algorithm, dubbed MRF-ZOOM, was proposed. MRF-ZOOM is quasi-parameter-separable, so the multiplicative computational complexity is broken into an additive one. Evaluations showed that MRF-ZOOM was hundreds or thousands of times faster than the original MRF parameter quantification method, even without counting the dictionary generation time. Using real data, it yielded nearly the same results as the original method. MRF-ZOOM provides a super-fast solution for MR parameter quantification.

  16. Sentiment analysis of political communication: combining a dictionary approach with crowdcoding.

    PubMed

    Haselmayer, Martin; Jenny, Marcelo

    2017-01-01

    Sentiment is important in studies of news values, public opinion, negative campaigning, and political polarization, and the explosive expansion of digital textual data together with fast progress in automated text analysis provides vast opportunities for innovative social science research. Unfortunately, the tools currently available for automated sentiment analysis are mostly restricted to English texts and require considerable contextual adaptation to produce valid results. We present a procedure for collecting fine-grained sentiment scores through crowdcoding to build a negative sentiment dictionary in a language and for a domain of choice. The dictionary enables the analysis of large text corpora that resource-intensive hand-coding struggles to cope with. We calculate the tonality of sentences from dictionary words, and we validate these estimates against results from manual coding. The results show that the crowd-based dictionary provides efficient and valid measurement of sentiment. Empirical examples illustrate its use by analyzing the tonality of party statements and media reports.
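
    A toy version of the tonality calculation is shown below: the tonality of a sentence is taken as the mean negativity of the dictionary words it contains. The word scores, tokenisation, and aggregation rule are invented placeholders for the crowd-coded dictionary described in the paper.

      # Dictionary-based sentence tonality (toy scores and tokenisation).
      import re

      negativity = {"crisis": 0.9, "failure": 0.8, "attack": 0.7, "weak": 0.5, "reform": 0.1}

      def tonality(sentence, lexicon):
          """Mean negativity of the dictionary words found in the sentence (0 if none)."""
          words = re.findall(r"[a-zäöüß]+", sentence.lower())
          hits = [lexicon[w] for w in words if w in lexicon]
          return sum(hits) / len(hits) if hits else 0.0

      print(tonality("The reform is a failure and a crisis for the party.", negativity))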

  17. Phosphoric and electric utility fuel cell technology development

    NASA Astrophysics Data System (ADS)

    Breault, R. D.; Briggs, T. A.; Congdon, J. V.; Gelting, R. L.; Goller, G. J.; Luoma, W. L.; McCloskey, M. W.; Mientek, A. P.; Obrien, J. J.; Randall, S. A.

    1985-01-01

    A subscale cell containing GSB-18 dry-mix catalyst has accumulated over 6500 hours with performance 10 mV above E-line at 120 psia and 400 F. Over 150 thick separator plates were molded for use in cooler assemblies. The full-size 10-ft, 460-cell structural work-up is completed. All repeat components for the next 10-ft short stack are formed and processed.

  18. Piecewise synonyms for enhanced UMLS source terminology integration.

    PubMed

    Huang, Kuo-Chuan; Geller, James; Halper, Michael; Cimino, James J

    2007-10-11

    The UMLS contains more than 100 source vocabularies and is growing via the integration of others. When integrating a new source, the source terms already in the UMLS must first be found. The easiest approach to this is simple string matching. However, string matching usually does not find all concepts that should be found. A new methodology, based on the notion of piecewise synonyms, for enhancing the process of concept discovery in the UMLS is presented. This methodology is supported by first creating a general synonym dictionary based on the UMLS. Each multi-word source term is decomposed into its component words, allowing for the generation of separate synonyms for each word from the general synonym dictionary. The recombination of these synonyms into new terms creates an expanded pool of matching candidates for terms from the source. The methodology is demonstrated with respect to an existing UMLS source. It shows a 34% improvement over simple string matching.
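
    The core recombination step can be sketched as below: a multi-word source term is decomposed into words, per-word synonyms are substituted, and the combinations form an expanded pool of candidate strings to match against concepts already in the UMLS. The tiny synonym dictionary here is a stand-in for the UMLS-derived general synonym dictionary.

      # Piecewise-synonym candidate generation for a multi-word term (toy synonym dictionary).
      from itertools import product

      synonyms = {
          "kidney": {"kidney", "renal"},
          "failure": {"failure", "insufficiency"},
      }

      def candidate_terms(term):
          """Decompose into words, substitute per-word synonyms, and recombine."""
          words = term.lower().split()
          options = [sorted(synonyms.get(w, {w})) for w in words]
          return {" ".join(combo) for combo in product(*options)}

      # The expanded pool is then string-matched against concept names already in the UMLS.
      print(sorted(candidate_terms("acute kidney failure")))
      # ['acute kidney failure', 'acute kidney insufficiency', 'acute renal failure', ...]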

  19. A guided wave dispersion compensation method based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Xu, Cai-bin; Yang, Zhi-bo; Chen, Xue-feng; Tian, Shao-hua; Xie, Yong

    2018-03-01

    Ultrasonic guided waves have emerged as a promising tool for structural health monitoring (SHM) and nondestructive testing (NDT) due to their capability to propagate over long distances with minimal loss and their sensitivity to both surface and subsurface defects. The dispersion effect degrades the temporal and spatial resolution of guided waves. A novel ultrasonic guided wave processing method for dispersion compensation of both single-mode and multi-mode guided waves is proposed in this work based on compressed sensing, in which a dispersive signal dictionary is built by utilizing the dispersion curves of the guided wave modes in order to sparsely decompose the recorded dispersive guided waves. Dispersion-compensated guided waves are obtained by utilizing a non-dispersive signal dictionary and the results of the sparse decomposition. Numerical simulations and experiments are implemented to verify the effectiveness of the developed method for both single-mode and multi-mode guided waves.

  20. Manual for Bilingual Dictionaries. Textbook, Word List A-L, and Word List LL-Z.

    ERIC Educational Resources Information Center

    Robinson, Dow F.

    Volume One of this handbook for the preparation of bilingual dictionaries deals with (1) the purpose and structure of the bilingual dictionary for which this manual is designed; (2) the grammatical form of a main entry; (3) the grammatical designation of vernacular entries; (4) gloss in Spanish and vernacular; (5) sense discriminations; (6)…

  1. Digitizing Consumption Across the Operational Spectrum

    DTIC Science & Technology

    2014-09-01

    Excerpts from the list of figures and text: Figure 13, a NoSQL (key, value) dictionary example; Figure 14, a Java-implemented dictionary and query result; Figure 15, the global database architecture. Figure 14 is an illustration of the query submitted in Java and the result that would be shown using the data in Figure 13.

  2. Yeni Redhouse Lugati; Ingilizce-Turkce (Revised Redhouse Dictionairy; English-Turkish).

    ERIC Educational Resources Information Center

    United Church Board for World Ministries, Istanbul (Turkey). Near East Mission.

    The general plan of this dictionary, first prepared by Sir James Redhouse in 1861 and revised in 1950 and 1953, has been to include all words which appear in the Oxford Concise Dictionary and Webster's Collegiate Dictionary. In addition, a great number of idioms have been added; the volume now contains between 60,000 and 70,000 definitions.…

  3. Word Function and Dictionary Use; A Work-Book for Advanced Learners of English.

    ERIC Educational Resources Information Center

    Osman, Neile

    The present volume is designed as a workbook for advanced learners of English as a second or foreign language which will train them through instruction and exercises to use an all-English dictionary. The contents are based on the second edition of Hornby, Gatenby, and Wakefield's "The Advanced Learner's Dictionary of Current English," 1963, Oxford…

  4. Strategies for Reading Chinese Texts with and without Pop-Up Dictionary for Beginning Learners of Chinese

    ERIC Educational Resources Information Center

    Wang, Jing

    2014-01-01

    This study is aimed at identifying reading strategies of beginning learners of Chinese as a foreign language (CFL) with and without a pop-up dictionary and at determining if learners retain the reading comprehension gained from using the dictionary. Beginning CFL learners at a Midwestern university answered questions about their reading strategies…

  5. The Use of a Monolingual Dictionary for Meaning Determination by Advanced Cantonese ESL Learners in Hong Kong

    ERIC Educational Resources Information Center

    Chan, Alice Y. W.

    2012-01-01

    This article reports on the results of a study which investigated advanced Cantonese English as a Second Language (ESL) learners' use of a monolingual dictionary for determining the meanings of familiar English words used in less familiar contexts. Thirty-two university English majors in Hong Kong participated in a dictionary consultation task,…

  6. RoLo: A Dictionary Interface that Minimizes Extraneous Cognitive Load of Lookup and Supports Incidental and Incremental Learning of Vocabulary

    ERIC Educational Resources Information Center

    Dang, Thanh-Dung; Chen, Gwo-Dong; Dang, Giao; Li, Liang-Yi; Nurkhamid

    2013-01-01

    Dictionary use can improve reading comprehension and incidental vocabulary learning. Nevertheless, great extraneous cognitive load imposed by the search process may reduce or even prevent the improvement. With the help of technology, dictionary users can now instantly access the meaning list of a searched word using a mouse click. However, they…

  7. Terminological Multifaceted Educational Dictionary of Active Type as a Possible Way of Special Discourse Presentation

    ERIC Educational Resources Information Center

    Fatkullina, Flyuza; Morozkina, Eugenia; Suleimanova, Almira; Khayrullina, Rayca

    2016-01-01

    The purpose of this article is to disclose the scientific basis of the author's academic terminological dictionary for future oil industry experts. Multifaceted terminological dictionary with several different entries is considered to be one of the possible ways to present a special discourse in the classroom. As a result of the study the authors…

  8. Testing Aspects of the Usability of an Online Learner Dictionary Prototype: A Product- and Process-Oriented Study

    ERIC Educational Resources Information Center

    Hamel, Marie-Josee

    2012-01-01

    This article reports on a study which took place in the context of the design and development of an online dictionary prototype for learners of French. Aspects of the "usability", i.e. the quality of the "learner-task-dictionary interaction" of the prototype were tested. Micro-tasks were designed to focus on learners'…

  9. In Search of the Optimal Path: How Learners at Task Use an Online Dictionary

    ERIC Educational Resources Information Center

    Hamel, Marie-Josee

    2012-01-01

    We have analyzed circa 180 navigation paths followed by six learners while they performed three language encoding tasks at the computer using an online dictionary prototype. Our hypothesis was that learners who follow an "optimal path" while navigating within the dictionary, using its search and look-up functions, would have a high chance of…

  10. Tactics Employed and Problems Encountered by University English Majors in Hong Kong in Using a Dictionary

    ERIC Educational Resources Information Center

    Chan, Alice Yin Wa

    2005-01-01

    Building on the results of a small-scale survey which investigated the general use of dictionaries by university English majors in Hong Kong using a questionnaire survey and their specific use of dictionaries using an error correction task, this article discusses the tactics these students employed and the problems they encountered when using a…

  11. Standardized terminology for clinical trial protocols based on top-level ontological categories.

    PubMed

    Heller, B; Herre, H; Lippoldt, K; Loeffler, M

    2004-01-01

    This paper describes a new method for the ontologically based standardization of concepts with regard to the quality assurance of clinical trial protocols. We developed a data dictionary for medical and trial-specific terms in which concepts and relations are defined context-dependently. The data dictionary is provided to different medical research networks by means of the software tool Onto-Builder via the internet. The data dictionary is based on domain-specific ontologies and the top-level ontology of GOL. The concepts and relations described in the data dictionary are represented in natural language, semi-formally or formally according to their use.

  12. Application of composite dictionary multi-atom matching in gear fault diagnosis.

    PubMed

    Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng

    2011-01-01

    The sparse decomposition based on matching pursuit is an adaptive sparse expression method for signals. This paper proposes an idea concerning a composite dictionary multi-atom matching decomposition and reconstruction algorithm, and the introduction of threshold de-noising in the reconstruction algorithm. Based on the structural characteristics of gear fault signals, a composite dictionary combining the impulse time-frequency dictionary and the Fourier dictionary was constituted, and a genetic algorithm was applied to search for the best matching atom. The analysis results of gear fault simulation signals indicated the effectiveness of the hard threshold, and the impulse or harmonic characteristic components could be separately extracted. Meanwhile, the robustness of the composite dictionary multi-atom matching algorithm at different noise levels was investigated. To address the effect of data length on the calculation efficiency of the algorithm, an improved segmented decomposition and reconstruction algorithm was proposed, and the calculation efficiency of the decomposition algorithm was significantly enhanced. In addition, it is shown that the multi-atom matching algorithm was superior to the single-atom matching algorithm in both calculation efficiency and algorithm robustness. Finally, the above algorithm was applied to gear fault engineering signals and achieved good results.
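
    A minimal NumPy sketch of the composite-dictionary idea is given below: a Fourier block for harmonic (gear-mesh) components is concatenated with an impulse block, and a greedy matching-pursuit loop picks multiple atoms. It is illustrative only; the paper's impulse time-frequency atoms are parameterized and selected with a genetic algorithm, which is replaced here by an exhaustive inner-product search, and all sizes and names are assumptions.

      import numpy as np

      def build_composite_dictionary(n, n_freq=32):
          """Concatenate a real Fourier block with an impulse (identity) block.

          The paper uses parameterized impulse time-frequency atoms chosen by a
          genetic algorithm; here the impulse block is simply the identity matrix.
          """
          t = np.arange(n)
          fourier = []
          for k in range(1, n_freq + 1):
              fourier.append(np.cos(2 * np.pi * k * t / n))
              fourier.append(np.sin(2 * np.pi * k * t / n))
          D = np.column_stack(fourier + [np.eye(n)])
          return D / np.linalg.norm(D, axis=0)          # unit-norm atoms

      def matching_pursuit(x, D, n_atoms=10):
          """Greedy multi-atom decomposition: pick the best-matching atom,
          subtract its contribution, repeat."""
          residual = x.astype(float).copy()
          coeffs = np.zeros(D.shape[1])
          for _ in range(n_atoms):
              corr = D.T @ residual
              k = np.argmax(np.abs(corr))
              coeffs[k] += corr[k]
              residual -= corr[k] * D[:, k]
          return coeffs, residual

      if __name__ == "__main__":
          n = 256
          t = np.arange(n)
          signal = np.sin(2 * np.pi * 5 * t / n)        # harmonic (gear mesh) part
          signal[64] += 3.0                             # impulse (fault) part
          D = build_composite_dictionary(n)
          coeffs, residual = matching_pursuit(signal, D, n_atoms=8)
          print("residual energy:", np.linalg.norm(residual) ** 2)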

  13. Sequential Dictionary Learning From Correlated Data: Application to fMRI Data Analysis.

    PubMed

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-03-22

    Sequential dictionary learning via the K-SVD algorithm has been revealed as a successful alternative to conventional data driven methods such as independent component analysis (ICA) for functional magnetic resonance imaging (fMRI) data analysis. fMRI datasets are, however, structured data matrices with notions of spatio-temporal correlation and temporal smoothness. This prior information has not been included in the K-SVD algorithm when applied to fMRI data analysis. In this paper we propose three variants of the K-SVD algorithm dedicated to fMRI data analysis by accounting for this prior information. The proposed algorithms differ from the K-SVD in their sparse coding and dictionary update stages. The first two algorithms account for the known correlation structure in the fMRI data by using the squared Q, R-norm instead of the Frobenius norm for matrix approximation. The third and last algorithm accounts for both the known correlation structure in the fMRI data and the temporal smoothness. The temporal smoothness is incorporated in the dictionary update stage via regularization of the dictionary atoms obtained with penalization. The performance of the proposed dictionary learning algorithms is illustrated through simulations and applications on real fMRI data.
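
    For orientation, the sketch below shows the two alternating stages of a generic K-SVD iteration (OMP sparse coding, then a rank-1 SVD update of each atom and its coefficients); the paper's fMRI variants would replace the Frobenius criterion with the Q,R-weighted norm and add temporal-smoothness penalties, neither of which is implemented here. All dimensions are placeholders.

      import numpy as np

      def omp(D, y, sparsity):
          """Basic orthogonal matching pursuit for one signal."""
          residual, support = y.copy(), []
          x = np.zeros(D.shape[1])
          for _ in range(sparsity):
              k = int(np.argmax(np.abs(D.T @ residual)))
              if k not in support:
                  support.append(k)
              coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
              residual = y - D[:, support] @ coef
          x[support] = coef
          return x

      def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
          """Generic K-SVD: alternate sparse coding and rank-1 atom updates."""
          rng = np.random.default_rng(seed)
          D = rng.standard_normal((Y.shape[0], n_atoms))
          D /= np.linalg.norm(D, axis=0)
          for _ in range(n_iter):
              X = np.column_stack([omp(D, Y[:, i], sparsity) for i in range(Y.shape[1])])
              for j in range(n_atoms):                     # dictionary update stage
                  users = np.nonzero(X[j, :])[0]
                  if users.size == 0:
                      continue
                  E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
                  U, S, Vt = np.linalg.svd(E, full_matrices=False)
                  D[:, j] = U[:, 0]
                  X[j, users] = S[0] * Vt[0, :]
              # (the fMRI variants would replace the Frobenius criterion here with a
              #  Q,R-weighted norm and add temporal-smoothness regularization)
          return D, X

      if __name__ == "__main__":
          Y = np.random.default_rng(1).standard_normal((50, 200))
          D, X = ksvd(Y, n_atoms=20, sparsity=3, n_iter=5)
          print("approximation error:", np.linalg.norm(Y - D @ X))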

  14. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion.

    PubMed

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly growing biomedical literature calls for an automatic approach to the recognition and normalization of disease mentions in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among all the proposed methods, conditional random fields (CRFs) and the dictionary lookup method are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and compare with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for the disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract. © The Author(s) 2016. Published by Oxford University Press.
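
    The dictionary-lookup-with-query-expansion step can be pictured with the toy sketch below, in which an exact lookup is tried first and simple string expansions are used as a fallback; the entries, expansion rules and concept IDs are invented for illustration and are not taken from the BioCreative V CDR resources.

      # Minimal sketch of dictionary lookup with query expansion for disease
      # normalization.  The entries and expansion rules below are illustrative only.
      disease_dict = {
          "breast cancer": "MESH:D001943",
          "breast carcinoma": "MESH:D001943",
          "hypertension": "MESH:D006973",
      }

      expansions = [("tumour", "tumor"), ("carcinoma", "cancer"),
                    ("high blood pressure", "hypertension")]

      def normalize(mention):
          """Return a concept ID via exact lookup, else try expanded queries."""
          query = mention.lower().strip()
          if query in disease_dict:
              return disease_dict[query]
          for old, new in expansions:               # query expansion fallback
              expanded = query.replace(old, new)
              if expanded in disease_dict:
                  return disease_dict[expanded]
          return None                               # unnormalized mention

      print(normalize("Breast Carcinoma"))          # exact lookup after lowercasing
      print(normalize("high blood pressure"))       # resolved via query expansion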

  15. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    PubMed

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

    In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space, where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes, and restricting the coefficient vectors to be transformed into a feature space, where the features are highly correlated within classes and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known, since its inner product with the kernel matrix embedded is available and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.

  16. Sparsity based target detection for compressive spectral imagery

    NASA Astrophysics Data System (ADS)

    Boada, David Alberto; Arguello Fuentes, Henry

    2016-09-01

    Hyperspectral imagery provides significant information about the spectral characteristics of objects and materials present in a scene. It enables object and feature detection, classification, or identification based on the acquired spectral characteristics. However, it relies on sophisticated acquisition and data processing systems able to acquire, process, store, and transmit hundreds or thousands of image bands from a given area of interest, which demands enormous computational resources in terms of storage, computation, and I/O throughput. Specialized optical architectures have been developed for the compressed acquisition of spectral images using a reduced set of coded measurements, contrary to traditional architectures that need a complete set of measurements of the data cube for image acquisition, dealing with the storage and acquisition limitations. Despite this improvement, if any processing is desired, the image first has to be reconstructed by an inverse algorithm, which is also an expensive task. In this paper, a sparsity-based algorithm for target detection in compressed spectral images is presented. Specifically, the target detection model adapts a sparsity-based target detector to work in a compressive domain, modifying the sparse representation basis in the compressive sensing problem by means of over-complete training dictionaries and a wavelet basis representation. Simulations show that the presented method can achieve even better detection results than state-of-the-art methods.
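
    A toy sketch of the residual-comparison idea follows: the sensing matrix is composed with target and background dictionaries so that sparse coefficients are fitted directly to the compressive measurements, and the smaller reconstruction residual decides the label. The random matrices stand in for the coded-aperture model and wavelet representation of the paper and are purely illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 64, 24                                    # spectrum length, compressive measurements
      H = rng.standard_normal((m, n)) / np.sqrt(m)     # stand-in sensing matrix

      D_target = rng.standard_normal((n, 20))          # toy over-complete dictionaries
      D_backgr = rng.standard_normal((n, 20))
      D_target /= np.linalg.norm(D_target, axis=0)
      D_backgr /= np.linalg.norm(D_backgr, axis=0)

      def sparse_residual(y, A, n_atoms=3):
          """Greedy sparse fit of y over A (matching pursuit); return the residual norm."""
          r = y.copy()
          for _ in range(n_atoms):
              k = int(np.argmax(np.abs(A.T @ r)))
              a = A[:, k]
              r = r - (a @ r) / (a @ a) * a
          return np.linalg.norm(r)

      # A pixel spectrum drawn from the target dictionary, observed compressively.
      x = D_target @ np.concatenate([rng.standard_normal(3), np.zeros(17)])
      y = H @ x

      # Compose the sensing operator with each dictionary and compare residuals.
      res_t = sparse_residual(y, H @ D_target)
      res_b = sparse_residual(y, H @ D_backgr)
      print("target" if res_t < res_b else "background", res_t, res_b)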

  17. Normalizing biomedical terms by minimizing ambiguity and variability

    PubMed Central

    Tsuruoka, Yoshimasa; McNaught, John; Ananiadou, Sophia

    2008-01-01

    Background One of the difficulties in mapping biomedical named entities, e.g. genes, proteins, chemicals and diseases, to their concept identifiers stems from the potential variability of the terms. Soft string matching is a possible solution to the problem, but its inherent heavy computational cost discourages its use when the dictionaries are large or when real-time processing is required. A less computationally demanding approach is to normalize the terms by using heuristic rules, which enables us to look up a dictionary in constant time regardless of its size. The development of good heuristic rules, however, requires extensive knowledge of the terminology in question and thus is the bottleneck of the normalization approach. Results We present a novel framework for discovering a list of normalization rules from a dictionary in a fully automated manner. The rules are discovered in such a way that they minimize the ambiguity and variability of the terms in the dictionary. We evaluated our algorithm using two large dictionaries: a human gene/protein name dictionary built from BioThesaurus and a disease name dictionary built from UMLS. Conclusions The experimental results showed that automatically discovered rules can perform comparably to carefully crafted heuristic rules in term mapping tasks, and the computational overhead of rule application is small enough that a very fast implementation is possible. This work will help improve the performance of term-concept mapping tasks in biomedical information extraction, especially when good normalization heuristics for the target terminology are not fully known. PMID:18426547
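
    One way to picture the rule-discovery criterion is the toy scorer below, which applies candidate normalization rules to a small term dictionary and reports the remaining ambiguity (normalized forms mapping to several concept IDs) and variability (concepts retaining several forms); the dictionary entries and rules are invented, not drawn from BioThesaurus or UMLS.

      from collections import defaultdict

      # Toy term -> concept-ID dictionary (illustrative only).
      dictionary = {
          "IL-2": "C1", "IL2": "C1", "interleukin-2": "C1", "interleukin 2": "C1",
          "TNF-alpha": "C2", "TNF alpha": "C2", "TNFA": "C3",
      }

      def rule_lowercase(t): return t.lower()
      def rule_strip_hyphen(t): return t.replace("-", " ")

      def score(rules):
          """Return (ambiguity, variability) after applying the rules to every term."""
          norm2ids, id2norms = defaultdict(set), defaultdict(set)
          for term, cid in dictionary.items():
              for r in rules:
                  term = r(term)
              norm2ids[term].add(cid)
              id2norms[cid].add(term)
          ambiguity = sum(len(ids) - 1 for ids in norm2ids.values())
          variability = sum(len(forms) - 1 for forms in id2norms.values())
          return ambiguity, variability

      for rules in ([], [rule_lowercase], [rule_lowercase, rule_strip_hyphen]):
          print([r.__name__ for r in rules], score(rules))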

  18. Implementation and management of a biomedical observation dictionary in a large healthcare information system.

    PubMed

    Vandenbussche, Pierre-Yves; Cormont, Sylvie; André, Christophe; Daniel, Christel; Delahousse, Jean; Charlet, Jean; Lepage, Eric

    2013-01-01

    This study shows the evolution of a biomedical observation dictionary within the Assistance Publique Hôpitaux Paris (AP-HP), the largest European university hospital group. The different steps are detailed as follows: the dictionary creation, the mapping to logical observation identifier names and codes (LOINC), the integration into a multiterminological management platform and, finally, the implementation in the health information system. AP-HP decided to create a biomedical observation dictionary named AnaBio, to map it to LOINC and to maintain the mapping. A management platform based on methods used for knowledge engineering has been put in place. It aims at integrating AnaBio within the health information system and improving both the quality and stability of the dictionary. This new management platform is now active in AP-HP. The AnaBio dictionary is shared by 120 laboratories and currently includes 50 000 codes. The mapping implementation to LOINC reaches 40% of the AnaBio entries and uses 26% of LOINC records. The results of our work validate the choice made to develop a local dictionary aligned with LOINC. This work constitutes a first step towards a wider use of the platform. The next step will support the entire biomedical production chain, from the clinician prescription, through laboratory tests tracking in the laboratory information system to the communication of results and the use for decision support and biomedical research. In addition, the increase in the mapping implementation to LOINC ensures the interoperability allowing communication with other international health institutions.

  19. Dictionary construction and identification of possible adverse drug events in Danish clinical narrative text.

    PubMed

    Eriksson, Robert; Jensen, Peter Bjødstrup; Frankild, Sune; Jensen, Lars Juhl; Brunak, Søren

    2013-01-01

    Drugs have tremendous potential to cure and relieve disease, but the risk of unintended effects is always present. Healthcare providers increasingly record data in electronic patient records (EPRs), in which we aim to identify possible adverse events (AEs) and, specifically, possible adverse drug events (ADEs). Based on the undesirable effects section from the summary of product characteristics (SPC) of 7446 drugs, we have built a Danish ADE dictionary. Starting from this dictionary we have developed a pipeline for identifying possible ADEs in unstructured clinical narrative text. We use a named entity recognition (NER) tagger to identify dictionary matches in the text and post-coordination rules to construct ADE compound terms. Finally, we apply post-processing rules and filters to handle, for example, negations and sentences about subjects other than the patient. Moreover, this method allows synonyms to be identified and anatomical location descriptions can be merged to allow appropriate grouping of effects in the same location. The method identified 1 970 731 (35 477 unique) possible ADEs in a large corpus of 6011 psychiatric hospital patient records. Validation was performed through manual inspection of possible ADEs, resulting in precision of 89% and recall of 75%. The presented dictionary-building method could be used to construct other ADE dictionaries. The complication of compound words in Germanic languages was addressed. Additionally, the synonym and anatomical location collapse improve the method. The developed dictionary and method can be used to identify possible ADEs in Danish clinical narratives.
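
    A minimal sketch of the dictionary-matching and negation-filtering steps is shown below with an invented English mini-dictionary; the actual pipeline works on Danish SPC-derived terms with a dedicated NER tagger, post-coordination rules and richer filters.

      import re

      # Illustrative adverse-event dictionary and negation cues (the real system
      # uses Danish terms derived from SPC "undesirable effects" sections).
      ade_terms = {"nausea", "headache", "dry mouth", "weight gain"}
      negation_cues = {"no", "without", "denies"}

      def find_possible_ades(sentence, window=3):
          """Return dictionary matches not preceded by a negation cue within `window` tokens."""
          tokens = re.findall(r"[a-zA-Z]+", sentence.lower())
          hits = []
          for n in (2, 1):                            # try compound (2-token) terms first
              for i in range(len(tokens) - n + 1):
                  term = " ".join(tokens[i:i + n])
                  if term in ade_terms:
                      preceding = tokens[max(0, i - window):i]
                      if not negation_cues & set(preceding):
                          hits.append(term)
          return hits

      print(find_possible_ades("Patient reports dry mouth and headache."))
      print(find_possible_ades("Denies nausea, but complains of weight gain."))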

  20. Technology Acceptance and Course Completion Rates in Online Education: A Non-experimental, Mixed Method Study

    NASA Astrophysics Data System (ADS)

    Allison, Colelia

    As the demand for quality online courses increases, the acceptance of technology and completion rates have become a focus of higher education. The purpose of this non-experimental, mixed method study was to examine the relationship between the university students' perceptions and acceptance of technology and learner completion rates with respect to the development of online courses. This study involved 61 participants from two universities regarding their perceived usefulness (PU) of technology, intent to use technology, and intent to complete a course. Two research questions were examined concerning student perceptions of technology employed in an online course and the relationship, if any, between technology acceptance and completion of an online university course. The technology acceptance model (TAM) was used to collect data on the usefulness of course activities and student intent to complete the course. An open-ended questionnaire was administered to collect information concerning student perceptions of course activities. Quantitative data were analyzed using SPSS and Qualtrics, which indicated there was not a significant relationship between technology acceptance and course completion (p = .154). Qualitative data were examined by pattern matching to create a concept map of the theoretical patterns between constructs. Pattern matching revealed many students favored the use of the Internet over Canvas. Furthermore, data showed students enrolled in online courses because of the flexibility and found the multimedia used in the courses helpful in course completion. These responses offered insight into the reasons behind the choices and decisions made by the students. Future recommendations are to expand mixed methods studies of technology acceptance in various disciplines to gain a better understanding of student perceptions of technology uses, intent to use, and course completion.

  1. Dictionary-driven prokaryotic gene finding.

    PubMed

    Shibuya, Tetsuo; Rigoutsos, Isidore

    2002-06-15

    Gene identification, also known as gene finding or gene recognition, is among the important problems of molecular biology that have been receiving increasing attention with the advent of large scale sequencing projects. Previous strategies for solving this problem can be categorized into essentially two schools of thought: one school employs sequence composition statistics, whereas the other relies on database similarity searches. In this paper, we propose a new gene identification scheme that combines the best characteristics from each of these two schools. In particular, our method determines gene candidates among the ORFs that can be identified in a given DNA strand through the use of the Bio-Dictionary, a database of patterns that covers essentially all of the currently available sample of the natural protein sequence space. Our approach relies entirely on the use of redundant patterns as the agents on which the presence or absence of genes is predicated and does not employ any additional evidence, e.g. ribosome-binding site signals. The Bio-Dictionary Gene Finder (BDGF), the algorithm's implementation, is a single computational engine able to handle the gene identification task across distinct archaeal and bacterial genomes. The engine exhibits performance that is characterized by simultaneous very high values of sensitivity and specificity, and a high percentage of correctly predicted start sites. Using a collection of patterns derived from an old (June 2000) release of the Swiss-Prot/TrEMBL database that contained 451 602 proteins and fragments, we demonstrate our method's generality and capabilities through an extensive analysis of 17 complete archaeal and bacterial genomes. Examples of previously unreported genes are also shown and discussed in detail.
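
    The sketch below illustrates the underlying idea of predicating gene candidates on pattern coverage: a deliberately simplified ORF finder (single frame, forward strand) is paired with a made-up nucleotide pattern set, whereas the Bio-Dictionary holds amino-acid patterns covering the known protein sequence space.

      import re

      STOP_CODONS = {"TAA", "TAG", "TGA"}

      def find_orfs(dna, frame=0, min_codons=10):
          """Very simplified ORF finder: single frame, forward strand only."""
          orfs, start = [], None
          for i in range(frame, len(dna) - 2, 3):
              codon = dna[i:i + 3]
              if start is None and codon == "ATG":
                  start = i
              elif start is not None and codon in STOP_CODONS:
                  if (i - start) // 3 >= min_codons:
                      orfs.append(dna[start:i + 3])
                  start = None
          return orfs

      # Invented "dictionary" of recurring patterns; the Bio-Dictionary itself holds
      # amino-acid patterns, not nucleotide strings.
      patterns = ["ATGAAA", "GGCGGC", "TTTCGC"]

      def pattern_coverage(orf):
          """Fraction of ORF positions covered by at least one dictionary pattern."""
          covered = set()
          for p in patterns:
              for m in re.finditer(f"(?={re.escape(p)})", orf):
                  covered.update(range(m.start(), m.start() + len(p)))
          return len(covered) / len(orf)

      if __name__ == "__main__":
          dna = "ATGAAA" + "GGCGGC" * 12 + "TTTCGC" + "TAA" + "CCCCCC"
          for orf in find_orfs(dna, min_codons=5):
              print(len(orf), f"coverage={pattern_coverage(orf):.2f}")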

  2. Moving force identification based on redundant concatenated dictionary and weighted l1-norm regularization

    NASA Astrophysics Data System (ADS)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng

    2018-01-01

    Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals due to bridge vibration and bumps on a bridge deck, respectively. Therefore, the interaction forces are usually hard to be expressed completely and sparsely by using a single basis function set. Based on the redundant concatenated dictionary and weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions used for matching the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for formulation of MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is appropriately chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and the feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with a strong robustness, and it has a better performance than the Tikhonov regularization method. Some related issues are discussed as well.
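
    A generic sketch of the optimization core is given below: a concatenated trigonometric-plus-boxcar dictionary and a FISTA loop for the weighted l1-regularized least-squares problem 0.5*||y - Ax||^2 + lam*||w*x||_1. It is not the paper's formulation; the weights, regularization parameter (chosen there by BIC) and dictionary sizes are placeholders.

      import numpy as np

      def build_dictionary(n, n_harm=10, box_width=8):
          """Concatenate trigonometric atoms with shifted rectangular (boxcar) atoms."""
          t = np.arange(n)
          trig = [np.sin(2 * np.pi * k * t / n) for k in range(1, n_harm + 1)]
          trig += [np.cos(2 * np.pi * k * t / n) for k in range(1, n_harm + 1)]
          boxes = []
          for s in range(0, n - box_width, box_width // 2):
              b = np.zeros(n); b[s:s + box_width] = 1.0
              boxes.append(b)
          A = np.column_stack(trig + boxes)
          return A / np.linalg.norm(A, axis=0)

      def fista_weighted_l1(A, y, weights, lam=0.1, n_iter=300):
          """FISTA for 0.5*||y - A x||^2 + lam*||weights * x||_1."""
          L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
          for _ in range(n_iter):
              grad = A.T @ (A @ z - y)
              u = z - grad / L
              thresh = lam * weights / L
              x_new = np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)   # soft threshold
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              z = x_new + (t - 1) / t_new * (x_new - x)
              x, t = x_new, t_new
          return x

      if __name__ == "__main__":
          n = 200
          A = build_dictionary(n)
          rng = np.random.default_rng(0)
          x_true = np.zeros(A.shape[1]); x_true[3] = 2.0; x_true[25] = 1.5
          y = A @ x_true + 0.01 * rng.standard_normal(n)
          w = np.ones(A.shape[1])                       # unit weights; reweighting could refine them
          x_hat = fista_weighted_l1(A, y, w, lam=0.05)
          print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])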

  3. The chemical component dictionary: complete descriptions of constituent molecules in experimentally determined 3D macromolecules in the Protein Data Bank

    PubMed Central

    Westbrook, John D.; Shao, Chenghua; Feng, Zukang; Zhuravleva, Marina; Velankar, Sameer; Young, Jasmine

    2015-01-01

    Summary: The Chemical Component Dictionary (CCD) is a chemical reference data resource that describes all residue and small molecule components found in Protein Data Bank (PDB) entries. The CCD contains detailed chemical descriptions for standard and modified amino acids/nucleotides, small molecule ligands and solvent molecules. Each chemical definition includes descriptions of chemical properties such as stereochemical assignments, chemical descriptors, systematic chemical names and idealized coordinates. The content, preparation, validation and distribution of this CCD chemical reference dataset are described. Availability and implementation: The CCD is updated regularly in conjunction with the scheduled weekly release of new PDB structure data. The CCD and amino acid variant reference datasets are hosted in the public PDB ftp repository at ftp://ftp.wwpdb.org/pub/pdb/data/monomers/components.cif.gz, ftp://ftp.wwpdb.org/pub/pdb/data/monomers/aa-variants-v1.cif.gz, and its mirror sites, and can be accessed from http://wwpdb.org. Contact: jwest@rcsb.rutgers.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25540181

  4. Combining the Benefits of Electronic and Online Dictionaries with CALL Web Sites to Produce Effective and Enjoyable Vocabulary and Language Learning Lessons

    ERIC Educational Resources Information Center

    Loucky, John Paul

    2005-01-01

    To more thoroughly analyze and compare the types of dictionaries being used by Japanese college students in three college engineering classes, two kinds of surveys were designed. The first was a general survey about purchase, use and preferences regarding electronic dictionaries. The second survey asked questions about how various computerised…

  5. Disruptive Innovation: Value-Based Health Plans

    PubMed Central

    Vogenberg, F. Randy

    2008-01-01

    Value and a Complex Healthcare Market. What Is Value to an Employer? “Worth in usefulness or importance to the possessor; utility or merit” (American Heritage Dictionary). “A principle, standard, or quality considered worthwhile or desirable” (American Heritage Stedman's Medical Dictionary). “A fair return or equivalent in goods, services, or money for something exchanged” (Merriam-Webster's Dictionary of Law). PMID:25128808

  6. The Use of E-Dictionary to Read E-Text by Intermediate and Advanced Learners of Chinese

    ERIC Educational Resources Information Center

    Wang, Jing

    2012-01-01

    This study focuses on the pedagogical outcomes connected with the use of an e-dictionary by intermediate and advanced learners of Chinese to aid in reading an expository Chinese e-text. Twenty intermediate and advanced participants read an e-text twice aided by an e-dictionary and wrote recalls of the text in English. In addition to low frequency…

  7. The Use of Electronic Dictionary in the Language Classroom: The Views of Language Learners

    ERIC Educational Resources Information Center

    Barham, Kefah A.

    2017-01-01

    E-dictionaries have the potential to be a useful instrument in English language classes; at the same time, they can be seen as a waste of time and a hindrance in the English language classroom. This paper reports on students' use of an e-dictionary in two of "Educational Readings in the English Language" course sections through in-depth…

  8. JPRS Report, Soviet Union, Political Affairs, Republic Language Legislation.

    DTIC Science & Technology

    1989-12-05

    reference materials (dictionaries, terminology glossaries, phrase books, self-taught books, and so on), and qualified specialists in the field of...textbooks; d) to publish self-taught manuals, phrase books, and explanatory and bilingual dictionaries for the aid of persons desiring to study...Armenian. To create the necessary printing facility base to publish high-quality illustrated dictionaries; to provide uninterrupted delivery of

  9. Domain Adaptation of Translation Models for Multilingual Applications

    DTIC Science & Technology

    2009-04-01

    expansion effect that corpus (or dictionary) based translation introduces; however, this effect is maintained even with monolingual query expansion [12...every day; bilingual web pages are harvested as parallel corpora as the quantity of non-English data on the web increases; online dictionaries of...approach is to customize translation models to a domain, by automatically selecting the resources (dictionaries, parallel corpora) that are best for

  10. Creating a Chinese suicide dictionary for identifying suicide risk on social media.

    PubMed

    Lv, Meizhen; Li, Ang; Liu, Tianli; Zhu, Tingshao

    2015-01-01

    Introduction. Suicide has become a serious worldwide epidemic. Early detection of individual suicide risk in the population is important for reducing suicide rates. Traditional methods are ineffective in identifying suicide risk in time, suggesting a need for novel techniques. This paper proposes to detect suicide risk on social media using a Chinese suicide dictionary. Methods. To build the Chinese suicide dictionary, eight researchers were recruited to select initial words from 4,653 posts published on Sina Weibo (the largest social media service provider in China) and two Chinese sentiment dictionaries (HowNet and NTUSD). Then, another three researchers were recruited to filter out irrelevant words. Finally, the remaining words were further expanded using a corpus-based method. After building the Chinese suicide dictionary, we tested its performance in identifying suicide risk on Weibo. First, we compared the dictionary-based identifications with expert ratings in both detecting suicidal expression in Weibo posts and evaluating individual levels of suicide risk. Second, to differentiate between individuals with high and non-high scores on a self-rating measure of suicide risk (Suicidal Possibility Scale, SPS), we built Support Vector Machines (SVM) models on the Chinese suicide dictionary and the Simplified Chinese Linguistic Inquiry and Word Count (SCLIWC) program, respectively. After that, we compared the classification performance of the two types of SVM models. Results and Discussion. Dictionary-based identifications were significantly correlated with expert ratings in terms of both detecting suicidal expression (r = 0.507) and evaluating individual suicide risk (r = 0.455). For the differentiation between individuals with high and non-high scores on SPS, the Chinese suicide dictionary (t1: F1 = 0.48; t2: F1 = 0.56) produced a more accurate identification than SCLIWC (t1: F1 = 0.41; t2: F1 = 0.48) across different observation windows. Conclusions. This paper confirms that, using social media, it is possible to implement real-time monitoring of individual suicide risk in the population. Results of this study may be useful for improving Chinese suicide prevention programs and may be insightful for other countries.
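
    The dictionary-feature-plus-SVM pipeline can be sketched as below with scikit-learn; the word lists, posts and labels are English placeholders rather than the Chinese suicide dictionary or Weibo data, so the example only shows the shape of the computation.

      import re
      import numpy as np
      from sklearn.svm import SVC

      # Placeholder word lists standing in for suicide-dictionary categories; the
      # real dictionary was built from Weibo posts and the HowNet/NTUSD lexicons.
      dictionary = {
          "hopelessness": {"hopeless", "pointless", "trapped"},
          "self_harm": {"die", "end", "hurt"},
      }

      def featurize(post):
          """Per-category counts of dictionary words, normalized by post length."""
          tokens = re.findall(r"[a-z']+", post.lower())
          return [sum(t in words for t in tokens) / max(len(tokens), 1)
                  for words in dictionary.values()]

      posts = ["I feel hopeless and trapped, I want it all to end",
               "Great day hiking with friends",
               "Everything is pointless, I just want to die",
               "Looking forward to the weekend"]
      labels = [1, 0, 1, 0]                                 # 1 = high risk (toy labels)

      X = np.array([featurize(p) for p in posts])
      clf = SVC(kernel="linear").fit(X, labels)
      print(clf.predict([featurize("life feels pointless and hopeless")]))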

  11. Creating a Chinese suicide dictionary for identifying suicide risk on social media

    PubMed Central

    Liu, Tianli

    2015-01-01

    Introduction. Suicide has become a serious worldwide epidemic. Early detection of individual suicide risk in the population is important for reducing suicide rates. Traditional methods are ineffective in identifying suicide risk in time, suggesting a need for novel techniques. This paper proposes to detect suicide risk on social media using a Chinese suicide dictionary. Methods. To build the Chinese suicide dictionary, eight researchers were recruited to select initial words from 4,653 posts published on Sina Weibo (the largest social media service provider in China) and two Chinese sentiment dictionaries (HowNet and NTUSD). Then, another three researchers were recruited to filter out irrelevant words. Finally, the remaining words were further expanded using a corpus-based method. After building the Chinese suicide dictionary, we tested its performance in identifying suicide risk on Weibo. First, we compared the dictionary-based identifications with expert ratings in both detecting suicidal expression in Weibo posts and evaluating individual levels of suicide risk. Second, to differentiate between individuals with high and non-high scores on a self-rating measure of suicide risk (Suicidal Possibility Scale, SPS), we built Support Vector Machines (SVM) models on the Chinese suicide dictionary and the Simplified Chinese Linguistic Inquiry and Word Count (SCLIWC) program, respectively. After that, we compared the classification performance of the two types of SVM models. Results and Discussion. Dictionary-based identifications were significantly correlated with expert ratings in terms of both detecting suicidal expression (r = 0.507) and evaluating individual suicide risk (r = 0.455). For the differentiation between individuals with high and non-high scores on SPS, the Chinese suicide dictionary (t1: F1 = 0.48; t2: F1 = 0.56) produced a more accurate identification than SCLIWC (t1: F1 = 0.41; t2: F1 = 0.48) across different observation windows. Conclusions. This paper confirms that, using social media, it is possible to implement real-time monitoring of individual suicide risk in the population. Results of this study may be useful for improving Chinese suicide prevention programs and may be insightful for other countries. PMID:26713232

  12. Conversion of ethanol to 1,3-butadiene over Na doped ZnxZryOz mixed metal oxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baylon, Rebecca A.; Sun, Junming; Wang, Yong

    2016-01-01

    Despite numerous studies on different oxide catalysts for the ethanol to 1,3-butadiene reaction, few have identified the active sites (i.e., type of acidity) correlated with catalytic performance. In this work, the type of acidity needed for ethanol to 1,3-butadiene conversion has been studied over Zn/Zr mixed oxide catalysts. Specifically, synthesis method, Zn/Zr ratio, and Na doping have been used to control the surface acid-base properties, as confirmed by characterizations such as NH3-TPD and IR-Py techniques. The 2000 ppm Na doped Zn1Zr10Oz-H with balanced base and weak Brønsted acid sites was found not only to give high selectivity to 1,3-butadiene (47%) at near-complete ethanol conversion (97%), but also to exhibit a much higher 1,3-butadiene productivity than the other mixed oxides studied.

  13. Handbook of Entry Level Jobs. A Guide for Occupational Investigation for Administrators, Counselors, Vocational and Special Education Teachers.

    ERIC Educational Resources Information Center

    McCarron, Lawrence T.

    This handbook is intended to provide administrators, vocational counselors, and teachers with a convenient reference of entry-level jobs. The handbook organizes information on over 3,000 jobs into the nine occupational clusters that have been identified by the Department of Labor in the Dictionary of Occupational Titles (DOT). Jobs are organized…

  14. Computerized Archive and Dictionary of the Jaqimara Languages of South America.

    ERIC Educational Resources Information Center

    Hardman-de-Bautista, M. J.

    The three extant members of the Jaqi (Jaqimara) family, Aymara, Jaqaru and Kawki, are spoken by over one million people primarily in Peru and Bolivia, but earlier members of the Jaqimara family were probably spoken throughout the whole area of present-day Peru. This paper gives an outline of some of the salient structural features of these…

  15. Development Of International Data Standards For The COSMOS/PEER-LL Virtual Data Center

    NASA Astrophysics Data System (ADS)

    Swift, J. N.

    2005-12-01

    The COSMOS-PEER Lifelines Project 2L02 completed a Pilot Geotechnical Virtual Data Center (GVDC) system capable of both archiving geotechnical data and disseminating data from multiple linked geotechnical databases. The Pilot GVDC system links geotechnical databases of four organizations: the California Geological Survey, Caltrans, PG&E, and the U.S. Geological Survey. The system was presented and reviewed in the COSMOS-PEER Lifelines workshop on June 21-23, 2004, which was co-sponsored by the Federal Highway Administration (FHWA) and included participation by the United Kingdom Highways Agency (UKHA), the Association of Geotechnical and Geoenvironmental Specialists in the United Kingdom (AGS), the United States Army Corps of Engineers (USACOE), Caltrans, United States Geological Survey (USGS), California Geological Survey (CGS), a number of state Departments of Transportation (DOTs), county building code officials, and representatives of academic institutions and private sector geotechnical companies. As of February 2005, COSMOS-PEER Lifelines Project 2L03 is funded to accomplish the following tasks: 1) expand the Pilot GVDC Geotechnical Data Dictionary and XML Schema to include data definitions and structures to describe in-situ measurements such as shear wave velocity profiles, and additional laboratory geotechnical test types; 2) participate in an international cooperative working group developing a single geotechnical data exchange standard that has broad international acceptance; and 3) upgrade the GVDC system to support corresponding exchange standard data dictionary and schema improvements. The new geophysical data structures being developed will include PS-logs, downhole geophysical logs, cross-hole velocity data, and velocity profiles derived using surface waves. A COSMOS-PEER Lifelines Geophysical Data Dictionary Working Committee, composed of experts in the development of data dictionary standards and in the specific data to be captured, is presently working on this task. The international geotechnical data dictionary and schema development is a highly collaborative effort funded by a pooled fund study coordinated by state DOTs and FHWA. The technical development of the standards called DIGGS (Data Interchange for Geotechnical and Geoenvironmental Specialists) is led by a team consisting of representatives from the University of Florida, Department of Civil Engineering (UF), AGS, Construction Industry Research and Information Association (CIRIA), UKHA, Ohio DOT, and COSMOS. The first draft of DIGGS is currently in preparation. A Geotechnical Management System Group (GMS group), composed of representatives from 13 State DOTs, FHWA, US EPA, USACOE, USGS and UKHA, oversees and approves the development of the standards. The ultimate goal of both COSMOS-PEER Lifelines Project 2L03 and the international GMS working group is to produce open and flexible, GML-compliant XML schema-based data structures and data dictionaries for review and approval by DOTs, other public agencies, and the international engineering and geoenvironmental community at large, leading to adoption of internationally accepted geotechnical and geophysical data transfer standards. Establishment of these standards is intended to significantly facilitate the accessibility and exchange of geotechnical information worldwide.

  16. Cross-Modality Image Synthesis via Weakly Coupled and Geometry Co-Regularized Joint Dictionary Learning.

    PubMed

    Huang, Yawen; Shao, Ling; Frangi, Alejandro F

    2018-03-01

    Multi-modality medical imaging is increasingly used for comprehensive assessment of complex diseases in either diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors, such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis while considering the fact that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. Then, we propose a unified model by integrating such a criterion into the joint dictionary learning and the observed common feature space for associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate superior performance of the proposed model over state-of-the-art methods.

  17. Oxford Dictionary of Physics

    NASA Astrophysics Data System (ADS)

    Isaacs, Alan

    The dictionary is derived from the Concise Science Dictionary, first published by Oxford University Press in 1984 (third edition, 1996). It consists of all the entries relating to physics in that dictionary, together with some of those entries relating to astronomy that are required for an understanding of astrophysics and many entries that relate to physical chemistry. It also contains a selection of the words used in mathematics that are relevant to physics, as well as the key words in metal science, computing, and electronics. For this third edition a number of words from quantum field physics and statistical mechanics have been added. Cosmology and particle physics have been updated and a number of general entries have been expanded.

  18. Tensor-Dictionary Learning with Deep Kruskal-Factor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew J.; Pu, Yunchen; Sun, Yannan

    We introduce new dictionary learning methods for tensor-variate data of any order. We represent each data item as a sum of Kruskal decomposed dictionary atoms within the framework of beta-process factor analysis (BPFA). Our model is nonparametric and can infer the tensor-rank of each dictionary atom. This Kruskal-Factor Analysis (KFA) is a natural generalization of BPFA. We also extend KFA to a deep convolutional setting and develop online learning methods. We test our approach on image processing and classification tasks, achieving state-of-the-art results for 2D & 3D inpainting and Caltech 101. The experiments also show that atom-rank impacts both overcompleteness and sparsity.

  19. Modeling the diurnal cycle of carbon monoxide: Sensitivity to physics, chemistry, biology, and optics

    NASA Astrophysics Data System (ADS)

    Gnanadesikan, Anand

    1996-05-01

    As carbon monoxide within the oceanic surface layer is produced by solar radiation, diluted by mixing, consumed by biota, and outgassed to the atmosphere, it exhibits a diurnal cycle. The effect of dilution and mixing on this cycle is examined using a simple model for production and consumption, coupled to three different mixed layer models. The magnitude and timing of the peak concentration, the magnitude of the average concentration, and the air-sea flux are considered. The models are run through a range of heating and wind stress and compared to experimental data reported by Kettle [1994]. The key to the dynamics is the relative size of four length scales; Dmix, the depth to which mixing occurs over the consumption time; L, the length scale over which production occurs; Lout, the depth to which the mixed layer is ventilated over the consumption time; and Lcomp, the depth to which the diurnal production can maintain a concentration in equilibrium with the atmosphere. If Dmix ≫ L, the actual model parameterization can be important. If the mixed layer is maintained by turbulent diffusion, Dmix can be substantially less than the mixed layer depth. If the mixed layer is parameterized as a homogeneous slab, Dmix is equivalent to the mixed layer depth. If Dmix > Lout, production is balanced by consumption rather than outgassing. The ratio between Dmix and Lcomp determines whether the ocean is a source or a sink for CO. The main thermocline depth H sets an upper limit for Dmix and hence Dmix/L, Dmix/Lout, and Dmix/Lcomp. The models are run to simulate a single day of observations. The mixing parameterization is shown to be very important, with a model which mixes using small-scale diffusion, producing markedly larger surface concentrations than models which homogenize the mixed layer completely and instantaneously.
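
    As a rough illustration of the slab (fully homogenized) limit discussed above, the toy budget below integrates dC/dt = P(t) - k_c*C - (k_g/h)*(C - C_eq) with a daytime sinusoidal photoproduction; all rate constants are arbitrary placeholders, and the sketch ignores the diffusive and entraining mixed-layer models compared in the study.

      import numpy as np

      # Toy slab mixed-layer CO budget (placeholder rate constants):
      #   dC/dt = P(t) - k_c * C - (k_g / h) * (C - C_eq)
      # P is depth-averaged photoproduction, k_c a first-order consumption rate,
      # k_g a gas-transfer velocity, h the slab depth, C_eq the atmospheric-
      # equilibrium concentration.
      h, k_c, k_g, C_eq = 20.0, 1.0 / (12 * 3600), 3.0e-5, 1.0   # m, 1/s, m/s, nmol/L

      def production(t_seconds):
          """Sinusoidal photoproduction between 06:00 and 18:00, zero at night."""
          hour = (t_seconds / 3600.0) % 24
          return 2e-4 * np.sin(np.pi * (hour - 6) / 12) if 6 <= hour <= 18 else 0.0

      dt, C = 60.0, 1.0                      # 1-minute Euler steps, initial concentration
      series = []
      for step in range(2 * 24 * 60):        # integrate two days
          t = step * dt
          dCdt = production(t) - k_c * C - (k_g / h) * (C - C_eq)
          C += dt * dCdt
          series.append(C)

      last_day = series[-24 * 60:]
      peak = int(np.argmax(last_day))        # minute of the day-2 peak
      print(f"peak concentration {max(last_day):.2f} at ~{peak // 60:02d}:{peak % 60:02d}")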

  20. n-Gram-Based Text Compression.

    PubMed

    Nguyen, Vu H; Nguyen, Hien T; Duong, Hieu N; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram and achieved dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods.
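
    A toy version of the greedy sliding-window encoder is sketched below: at each position the longest matching n-gram is preferred and replaced by a (dictionary, index) code; the dictionaries and codes are invented and far smaller than the multi-gigabyte Vietnamese resources described.

      # Toy n-gram dictionaries (word level); the real system builds unigram to
      # 5-gram dictionaries from a 2.5 GB Vietnamese corpus.
      ngram_dicts = {
          3: {("thank", "you", "very"): 0},
          2: {("very", "much"): 0, ("see", "you"): 1},
          1: {"thank": 0, "you": 1, "very": 2, "much": 3, "see": 4, "soon": 5},
      }

      def encode(text):
          """Greedy longest-match encoding: prefer higher-order n-grams."""
          words, i, codes = text.lower().split(), 0, []
          while i < len(words):
              for n in (3, 2, 1):                       # sliding window, longest first
                  gram = tuple(words[i:i + n]) if n > 1 else words[i]
                  if len(words) - i >= n and gram in ngram_dicts[n]:
                      codes.append((n, ngram_dicts[n][gram]))   # (dictionary id, index) ~ 2-4 bytes
                      i += n
                      break
              else:
                  codes.append((0, words[i]))           # literal fallback for unknown words
                  i += 1
          return codes

      print(encode("Thank you very much see you soon"))
      # [(3, 0), (1, 3), (2, 1), (1, 5)]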
